Search (261 results, page 1 of 14)

  • type_ss:"el"
  1. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.10
    0.0962028 = product of:
      0.48101398 = sum of:
        0.48101398 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
          0.48101398 = score(doc=1826,freq=2.0), product of:
            0.51352155 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.060570993 = queryNorm
            0.93669677 = fieldWeight in 1826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
      0.2 = coord(1/5)
    
    Source
    http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=5&ved=0CDQQFjAE&url=http%3A%2F%2Fdigbib.ubka.uni-karlsruhe.de%2Fvolltexte%2Fdocuments%2F3131107&ei=HzFWVYvGMsiNsgGTyoFI&usg=AFQjCNE2FHUeR9oQTQlNC4TPedv4Mo3DaQ&sig2=Rlzpr7a3BLZZkqZCXXN_IA&bvm=bv.93564037,d.bGg&cad=rja
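    The relevance figures attached to each hit follow Lucene's ClassicSimilarity (TF-IDF) explain format: a term's contribution is tf * idf * fieldNorm multiplied by the query weight (idf * queryNorm), and the sum over matching terms is scaled by coord(matching terms / query terms). A minimal Python sketch that re-derives the score of the first hit from the values printed above (the function name and structure are illustrative, not Lucene's API):

```python
import math

def classic_similarity(freq, idf, field_norm, query_norm, coord):
    """Recompute a Lucene ClassicSimilarity score from its explain values."""
    tf = math.sqrt(freq)                      # 1.4142135 for freq = 2.0
    field_weight = tf * idf * field_norm      # 0.93669677 in hit 1
    query_weight = idf * query_norm           # 0.51352155 in hit 1
    term_score = field_weight * query_weight  # 0.48101398 in hit 1
    return term_score * coord                 # 0.0962028 after coord(1/5)

# Values copied from the explanation of hit 1 (weight(_text_:3a in 1826)).
print(classic_similarity(freq=2.0, idf=8.478011, field_norm=0.078125,
                         query_norm=0.060570993, coord=1 / 5))
```

    Hits that match more query terms (e.g. hit 2 with coord(2/5)) simply add one such term score per matching term before the coord factor is applied.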
  2. Understanding metadata (2004) 0.09
    0.086758465 = product of:
      0.21689616 = sum of:
        0.1512439 = weight(_text_:objects in 2686) [ClassicSimilarity], result of:
          0.1512439 = score(doc=2686,freq=2.0), product of:
            0.32193914 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.060570993 = queryNorm
            0.46979034 = fieldWeight in 2686, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0625 = fieldNorm(doc=2686)
        0.065652266 = weight(_text_:22 in 2686) [ClassicSimilarity], result of:
          0.065652266 = score(doc=2686,freq=2.0), product of:
            0.2121093 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.060570993 = queryNorm
            0.30952093 = fieldWeight in 2686, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=2686)
      0.4 = coord(2/5)
    
    Abstract
    Metadata (structured information about an object or collection of objects) is increasingly important to libraries, archives, and museums. And although librarians are familiar with a number of issues that apply to creating and using metadata (e.g., authority control, controlled vocabularies, etc.), the world of metadata is nonetheless different from library cataloging, with its own set of challenges. Therefore, whether you are new to these concepts or quite experienced with classic cataloging, this short (20-page) introductory paper on metadata can be helpful.
    Date
    10. 9.2004 10:22:40
  3. Crane, G.: What do you do with a million books? (2006) 0.09
    0.08617601 = product of:
      0.21544002 = sum of:
        0.07562195 = weight(_text_:objects in 1180) [ClassicSimilarity], result of:
          0.07562195 = score(doc=1180,freq=2.0), product of:
            0.32193914 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.060570993 = queryNorm
            0.23489517 = fieldWeight in 1180, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.03125 = fieldNorm(doc=1180)
        0.13981807 = weight(_text_:books in 1180) [ClassicSimilarity], result of:
          0.13981807 = score(doc=1180,freq=10.0), product of:
            0.29274467 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.060570993 = queryNorm
            0.477611 = fieldWeight in 1180, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.03125 = fieldNorm(doc=1180)
      0.4 = coord(2/5)
    
    Abstract
    The Greek historian Herodotus has the Athenian sage Solon estimate the lifetime of a human being at c. 26,250 days (Herodotus, The Histories, 1.32). If we could read a book on each of those days, it would take almost forty lifetimes to work through every volume in a single million book library. The continuous tradition of written European literature that began with the Iliad and Odyssey in the eighth century BCE is itself little more than a million days old. While libraries that contain more than one million items are not unusual, print libraries never possessed a million books of use to any one reader. The great libraries that took shape in the nineteenth and twentieth centuries were meta-structures, whose catalogues and finding aids allowed readers to create their own customized collections, building on the fixed classification schemes and disciplinary structures that took shape in the nineteenth century. The digital libraries of the early twenty-first century can be searched and their contents transmitted around the world. They can contain time-based media, images, quantitative data, and a far richer array of content than print, with visualization technologies blurring the boundaries between library and museum. But our digital libraries remain filled with digital incunabula - digital objects whose form remains firmly rooted in traditions of print, with HTML and PDF largely mimicking the limitations of their print predecessors. Vast collections based on image books - raw digital pictures of books with searchable but uncorrected text from OCR - could arguably retard our long-term progress, reinforcing the hegemony of structures that evolved to minimize the challenges of a world where paper was the only medium of distribution and where humans alone could read. Already the books in a digital library are beginning to read one another and to confer among themselves before creating a new synthetic document for review by their human readers.
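    The "almost forty lifetimes" figure follows directly from Solon's estimate of 26,250 days; a quick sanity check of the arithmetic, assuming one book read per day:

```python
days_per_lifetime = 26_250            # Solon's estimate (Herodotus, Histories 1.32)
books_in_library = 1_000_000
lifetimes_needed = books_in_library / days_per_lifetime
print(round(lifetimes_needed, 1))     # 38.1 -> "almost forty lifetimes"
```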
  4. Somers, J.: Torching the modern-day library of Alexandria : somewhere at Google there is a database containing 25 million books and nobody is allowed to read them. (2017) 0.08
    0.07930445 = product of:
      0.19826111 = sum of:
        0.16543499 = weight(_text_:books in 3608) [ClassicSimilarity], result of:
          0.16543499 = score(doc=3608,freq=14.0), product of:
            0.29274467 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.060570993 = queryNorm
            0.565117 = fieldWeight in 3608, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.03125 = fieldNorm(doc=3608)
        0.032826133 = weight(_text_:22 in 3608) [ClassicSimilarity], result of:
          0.032826133 = score(doc=3608,freq=2.0), product of:
            0.2121093 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.060570993 = queryNorm
            0.15476047 = fieldWeight in 3608, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.03125 = fieldNorm(doc=3608)
      0.4 = coord(2/5)
    
    Abstract
    You were going to get one-click access to the full text of nearly every book that's ever been published. Books still in print you'd have to pay for, but everything else - a collection slated to grow larger than the holdings at the Library of Congress, Harvard, the University of Michigan, at any of the great national libraries of Europe - would have been available for free at terminals that were going to be placed in every local library that wanted one. At the terminal you were going to be able to search tens of millions of books and read every page of any book you found. You'd be able to highlight passages and make annotations and share them; for the first time, you'd be able to pinpoint an idea somewhere inside the vastness of the printed record, and send somebody straight to it with a link. Books would become as instantly available, searchable, copy-pasteable - as alive in the digital world - as web pages. It was to be the realization of a long-held dream. "The universal library has been talked about for millennia," Richard Ovenden, the head of Oxford's Bodleian Libraries, has said. "It was possible to think in the Renaissance that you might be able to amass the whole of published knowledge in a single room or a single institution." In the spring of 2011, it seemed we'd amassed it in a terminal small enough to fit on a desk. "This is a watershed event and can serve as a catalyst for the reinvention of education, research, and intellectual life," one eager observer wrote at the time. On March 22 of that year, however, the legal agreement that would have unlocked a century's worth of books and peppered the country with access terminals to a universal library was rejected under Rule 23(e)(2) of the Federal Rules of Civil Procedure by the U.S. District Court for the Southern District of New York. When the library at Alexandria burned it was said to be an "international catastrophe." When the most significant humanities project of our time was dismantled in court, the scholars, archivists, and librarians who'd had a hand in its undoing breathed a sigh of relief, for they believed, at the time, that they had narrowly averted disaster.
    Object
    Google books
    Source
    https://www.theatlantic.com/technology/archive/2017/04/the-tragedy-of-google-books/523320/
  5. Dobratz, S.; Neuroth, H.: nestor: Network of Expertise in long-term STOrage of digital Resources : a digital preservation initiative for Germany (2004) 0.08
    0.078781635 = product of:
      0.19695409 = sum of:
        0.15005767 = weight(_text_:objects in 1195) [ClassicSimilarity], result of:
          0.15005767 = score(doc=1195,freq=14.0), product of:
            0.32193914 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.060570993 = queryNorm
            0.4661057 = fieldWeight in 1195, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1195)
        0.04689641 = weight(_text_:books in 1195) [ClassicSimilarity], result of:
          0.04689641 = score(doc=1195,freq=2.0), product of:
            0.29274467 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.060570993 = queryNorm
            0.1601956 = fieldWeight in 1195, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1195)
      0.4 = coord(2/5)
    
    Abstract
    Sponsored by the German Ministry of Education and Research with funding of 800.000 EURO, the German Network of Expertise in long-term storage of digital resources (nestor) began in June 2003 as a cooperative effort of 6 partners representing different players within the field of long-term preservation. The partners include: * The German National Library (Die Deutsche Bibliothek) as the lead institution for the project * The State and University Library of Lower Saxony Göttingen (Staats- und Universitätsbibliothek Göttingen) * The Computer and Media Service and the University Library of Humboldt-University Berlin (Humboldt-Universität zu Berlin) * The Bavarian State Library in Munich (Bayerische Staatsbibliothek) * The Institute for Museum Information in Berlin (Institut für Museumskunde) * General Directorate of the Bavarian State Archives (GDAB) As in other countries, long-term preservation of digital resources has become an important issue in Germany in recent years. Nevertheless, coming to agreement with institutions throughout the country to cooperate on tasks for a long-term preservation effort has taken a great deal of effort. Although there had been considerable attention paid to the preservation of physical media like CD-ROMS, technologies available for the long-term preservation of digital publications like e-books, digital dissertations, websites, etc., are still lacking. Considering the importance of the task within the federal structure of Germany, with the responsibility of each federal state for its science and culture activities, it is obvious that the approach to a successful solution of these issues in Germany must be a cooperative approach. Since 2000, there have been discussions about strategies and techniques for long-term archiving of digital information, particularly within the distributed structure of Germany's library and archival institutions. A key part of all the previous activities was focusing on using existing standards and analyzing the context in which those standards would be applied. One such activity, the Digital Library Forum Planning Project, was done on behalf of the German Ministry of Education and Research in 2002, where the vision of a digital library in 2010 that can meet the changing and increasing needs of users was developed and described in detail, including the infrastructure required and how the digital library would work technically, what it would contain and how it would be organized. The outcome was a strategic plan for certain selected specialist areas, where, amongst other topics, a future call for action for long-term preservation was defined, described and explained against the background of practical experience.
    As a follow-up, in 2002 the nestor long-term archiving working group provided an initial spark towards planning and organising coordinated activities concerning the long-term preservation and long-term availability of digital documents in Germany. This resulted in a workshop, held 29 - 30 October 2002, where major tasks were discussed. Influenced by the demands and progress of the nestor network, the participants reached agreement to start work on application-oriented projects and to address the following topics:
    * Overlapping problems: collection and preservation of digital objects (selection criteria, preservation policy); definition of criteria for trusted repositories; creation of models of cooperation, etc.
    * Digital objects production process: analysis of potential conflicts between production and long-term preservation; documentation of existing document models and recommendations for standard models to be used for long-term preservation; identification systems for digital objects, etc.
    * Transfer of digital objects: object data and metadata; transfer protocols and interoperability; handling of different document types, e.g. dynamic publications, etc.
    * Long-term preservation of digital objects: design and prototype implementation of depot systems for digital objects (OAIS was chosen to be the best functional model); authenticity; functional requirements on the user interfaces of a depot system; identification systems for digital objects, etc.
    At the end of the workshop, participants decided to establish a permanent distributed infrastructure for long-term preservation and long-term accessibility of digital resources in Germany, comparable, e.g., to the Digital Preservation Coalition in the UK. The initial phase, nestor, is now being set up by the above-mentioned 3-year funding project.
  6. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.08
    0.07696223 = product of:
      0.38481116 = sum of:
        0.38481116 = weight(_text_:3a in 230) [ClassicSimilarity], result of:
          0.38481116 = score(doc=230,freq=2.0), product of:
            0.51352155 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.060570993 = queryNorm
            0.7493574 = fieldWeight in 230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=230)
      0.2 = coord(1/5)
    
    Source
    https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
  7. Priss, U.: Faceted knowledge representation (1999) 0.08
    0.07591366 = product of:
      0.18978414 = sum of:
        0.1323384 = weight(_text_:objects in 2654) [ClassicSimilarity], result of:
          0.1323384 = score(doc=2654,freq=2.0), product of:
            0.32193914 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.060570993 = queryNorm
            0.41106653 = fieldWeight in 2654, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2654)
        0.05744573 = weight(_text_:22 in 2654) [ClassicSimilarity], result of:
          0.05744573 = score(doc=2654,freq=2.0), product of:
            0.2121093 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.060570993 = queryNorm
            0.2708308 = fieldWeight in 2654, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2654)
      0.4 = coord(2/5)
    
    Abstract
    Faceted Knowledge Representation provides a formalism for implementing knowledge systems. The basic notions of faceted knowledge representation are "unit", "relation", "facet" and "interpretation". Units are atomic elements and can be abstract elements or refer to external objects in an application. Relations are sequences or matrices of 0 and 1's (binary matrices). Facets are relational structures that combine units and relations. Each facet represents an aspect or viewpoint of a knowledge system. Interpretations are mappings that can be used to translate between different representations. This paper introduces the basic notions of faceted knowledge representation. The formalism is applied here to an abstract modeling of a faceted thesaurus as used in information retrieval.
    Date
    22. 1.2016 17:30:31
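    The formalism sketched in the abstract above (units, binary relation matrices, facets, interpretations) can be pictured in a few lines of Python; the concrete unit names and the "contains" relation are invented for illustration and are not taken from Priss's paper:

```python
# Units are atomic elements; a relation is a binary (0/1) matrix over units;
# a facet combines units with relations; an interpretation maps between
# representations. All names below are illustrative placeholders.
units = ["thesaurus", "descriptor", "document"]

contains = [            # contains[i][j] == 1 means units[i] relates to units[j]
    [0, 1, 0],          # thesaurus -> descriptor
    [0, 0, 1],          # descriptor -> document
    [0, 0, 0],
]

facet = {"name": "indexing aspect", "units": units, "relations": {"contains": contains}}

# An interpretation translating this facet's units into another vocabulary.
interpretation = {"thesaurus": "KOS", "descriptor": "term", "document": "resource"}
print([interpretation[u] for u in facet["units"]])   # ['KOS', 'term', 'resource']
```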
  8. Harnett, K.: Machine learning confronts the elephant in the room : a visual prank exposes an Achilles' heel of computer vision systems: Unlike humans, they can't do a double take (2018) 0.07
    0.06778965 = product of:
      0.16947412 = sum of:
        0.10694559 = weight(_text_:objects in 4449) [ClassicSimilarity], result of:
          0.10694559 = score(doc=4449,freq=4.0), product of:
            0.32193914 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.060570993 = queryNorm
            0.33219194 = fieldWeight in 4449, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.03125 = fieldNorm(doc=4449)
        0.06252854 = weight(_text_:books in 4449) [ClassicSimilarity], result of:
          0.06252854 = score(doc=4449,freq=2.0), product of:
            0.29274467 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.060570993 = queryNorm
            0.21359414 = fieldWeight in 4449, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.03125 = fieldNorm(doc=4449)
      0.4 = coord(2/5)
    
    Abstract
    In a new study, computer scientists found that artificial intelligence systems fail a vision test a child could accomplish with ease. "It's a clever and important study that reminds us that 'deep learning' isn't really that deep," said Gary Marcus, a neuroscientist at New York University who was not affiliated with the work. The result takes place in the field of computer vision, where artificial intelligence systems attempt to detect and categorize objects. They might try to find all the pedestrians in a street scene, or just distinguish a bird from a bicycle (which is a notoriously difficult task). The stakes are high: As computers take over critical tasks like automated surveillance and autonomous driving, we'll want their visual processing to be at least as good as the human eyes they're replacing. It won't be easy. The new work accentuates the sophistication of human vision - and the challenge of building systems that mimic it. In the study, the researchers presented a computer vision system with a living room scene. The system processed it well. It correctly identified a chair, a person, books on a shelf. Then the researchers introduced an anomalous object into the scene - an image of an elephant. The elephant's mere presence caused the system to forget itself: Suddenly it started calling a chair a couch and the elephant a chair, while turning completely blind to other objects it had previously seen. Researchers are still trying to understand exactly why computer vision systems get tripped up so easily, but they have a good guess. It has to do with an ability humans have that AI lacks: the ability to understand when a scene is confusing and thus go back for a second glance.
  9. Books in print plus with Book reviews plus : BIP + REV (1993) 0.05
    0.054151308 = product of:
      0.27075654 = sum of:
        0.27075654 = weight(_text_:books in 1302) [ClassicSimilarity], result of:
          0.27075654 = score(doc=1302,freq=6.0), product of:
            0.29274467 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.060570993 = queryNorm
            0.9248898 = fieldWeight in 1302, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.078125 = fieldNorm(doc=1302)
      0.2 = coord(1/5)
    
    Content
    Published monthly; as a CD-ROM edition it contains: Books in Print, Paperbound Books in Print, Forthcoming BIP, Children's BIP, as well as 200,000 full-text book reviews
  10. Snowhill, L.: E-books and their future in academic libraries (2001) 0.05
    0.053607058 = product of:
      0.2680353 = sum of:
        0.2680353 = weight(_text_:books in 1218) [ClassicSimilarity], result of:
          0.2680353 = score(doc=1218,freq=12.0), product of:
            0.29274467 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.060570993 = queryNorm
            0.9155941 = fieldWeight in 1218, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1218)
      0.2 = coord(1/5)
    
    Abstract
    The University of California's California Digital Library (CDL) formed an Ebook Task Force in August 2000 to evaluate academic libraries' experiences with electronic books (e-books), investigate the e-book market, and develop operating guidelines, principles and potential strategies for further exploration of the use of e-books at the University of California (UC). This article, based on the findings and recommendations of the Task Force Report, briefly summarizes task force findings, and outlines issues and recommendations for making e-books viable over the long term in the academic environment, based on the long-term goals of building strong research collections and providing high level services and collections to its users.
    Object
    E-books
  11. Markey, K.: ¬The online library catalog : paradise lost and paradise regained? (2007) 0.05
    0.04835267 = product of:
      0.12088168 = sum of:
        0.0661692 = weight(_text_:objects in 1172) [ClassicSimilarity], result of:
          0.0661692 = score(doc=1172,freq=2.0), product of:
            0.32193914 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.060570993 = queryNorm
            0.20553327 = fieldWeight in 1172, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1172)
        0.054712474 = weight(_text_:books in 1172) [ClassicSimilarity], result of:
          0.054712474 = score(doc=1172,freq=2.0), product of:
            0.29274467 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.060570993 = queryNorm
            0.18689486 = fieldWeight in 1172, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1172)
      0.4 = coord(2/5)
    
    Abstract
    The impetus for this essay is the library community's uncertainty regarding the present and future direction of the library catalog in the era of Google and mass digitization projects. The uncertainty is evident at the highest levels. Deanna Marcum, Associate Librarian for Library Services at the Library of Congress (LC), is struck by undergraduate students who favor digital resources over the online library catalog because such resources are available at anytime and from anywhere (Marcum, 2006). She suggests that "the detailed attention that we have been paying to descriptive cataloging may no longer be justified ... retooled catalogers could give more time to authority control, subject analysis, [and] resource identification and evaluation" (Marcum, 2006, 8). In an abrupt about-face, LC terminated series added entries in cataloging records, one of the few subject-rich fields in such records (Cataloging Policy and Support Office, 2006). Mann (2006b) and Schniderman (2006) cite evidence of LC's prevailing viewpoint in favor of simplifying cataloging at the expense of subject cataloging. LC commissioned Karen Calhoun (2006) to prepare a report on "revitalizing" the online library catalog. Calhoun's directive is clear: divert resources from cataloging mass-produced formats (e.g., books) to cataloging the unique primary sources (e.g., archives, special collections, teaching objects, research by-products). She sums up her rationale for such a directive, "The existing local catalog's market position has eroded to the point where there is real concern for its ability to weather the competition for information seekers' attention" (p. 10). At the University of California Libraries (2005), a task force's recommendations parallel those in Calhoun report especially regarding the elimination of subject headings in favor of automatically generated metadata. Contemplating these events prompted me to revisit the glorious past of the online library catalog. For a decade and a half beginning in the early 1980s, the online library catalog was the jewel in the crown when people eagerly queued at its terminals to find information written by the world's experts. I despair how eagerly people now embrace Google because of the suspect provenance of the information Google retrieves. Long ago, we could have added more value to the online library catalog but the only thing we changed was the catalog's medium. Our failure to act back then cost the online catalog the crown. Now that the era of mass digitization has begun, we have a second chance at redesigning the online library catalog, getting it right, coaxing back old users, and attracting new ones. Let's revisit the past, reconsidering missed opportunities, reassessing their merits, combining them with new directions, making bold decisions and acting decisively on them.
  12. Shala, E.: ¬Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.05
    0.0481014 = product of:
      0.24050699 = sum of:
        0.24050699 = weight(_text_:3a in 4388) [ClassicSimilarity], result of:
          0.24050699 = score(doc=4388,freq=2.0), product of:
            0.51352155 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.060570993 = queryNorm
            0.46834838 = fieldWeight in 4388, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4388)
      0.2 = coord(1/5)
    
    Footnote
    Cf.: https://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&cad=rja&uact=8&ved=2ahUKEwizweHljdbcAhVS16QKHXcFD9QQFjABegQICRAB&url=https%3A%2F%2Fwww.researchgate.net%2Fpublication%2F271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls&usg=AOvVaw06orrdJmFF2xbCCp_hL26q.
  13. Dietz, K.: en.wikipedia.org > 6 Mio. Artikel (2020) 0.05
    0.0481014 = product of:
      0.24050699 = sum of:
        0.24050699 = weight(_text_:3a in 5669) [ClassicSimilarity], result of:
          0.24050699 = score(doc=5669,freq=2.0), product of:
            0.51352155 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.060570993 = queryNorm
            0.46834838 = fieldWeight in 5669, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5669)
      0.2 = coord(1/5)
    
    Content
    "Die Englischsprachige Wikipedia verfügt jetzt über mehr als 6 Millionen Artikel. An zweiter Stelle kommt die deutschsprachige Wikipedia mit 2.3 Millionen Artikeln, an dritter Stelle steht die französischsprachige Wikipedia mit 2.1 Millionen Artikeln (via Researchbuzz: Firehose <https://rbfirehose.com/2020/01/24/techcrunch-wikipedia-now-has-more-than-6-million-articles-in-english/> und Techcrunch <https://techcrunch.com/2020/01/23/wikipedia-english-six-million-articles/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&guccounter=1&guce_referrer=aHR0cHM6Ly9yYmZpcmVob3NlLmNvbS8yMDIwLzAxLzI0L3RlY2hjcnVuY2gtd2lraXBlZGlhLW5vdy1oYXMtbW9yZS10aGFuLTYtbWlsbGlvbi1hcnRpY2xlcy1pbi1lbmdsaXNoLw&guce_referrer_sig=AQAAAK0zHfjdDZ_spFZBF_z-zDjtL5iWvuKDumFTzm4HvQzkUfE2pLXQzGS6FGB_y-VISdMEsUSvkNsg2U_NWQ4lwWSvOo3jvXo1I3GtgHpP8exukVxYAnn5mJspqX50VHIWFADHhs5AerkRn3hMRtf_R3F1qmEbo8EROZXp328HMC-o>). 250120 via digithek ch = #fineBlog s.a.: Angesichts der Veröffentlichung des 6-millionsten Artikels vergangene Woche in der englischsprachigen Wikipedia hat die Community-Zeitungsseite "Wikipedia Signpost" ein Moratorium bei der Veröffentlichung von Unternehmensartikeln gefordert. Das sei kein Vorwurf gegen die Wikimedia Foundation, aber die derzeitigen Maßnahmen, um die Enzyklopädie gegen missbräuchliches undeklariertes Paid Editing zu schützen, funktionierten ganz klar nicht. *"Da die ehrenamtlichen Autoren derzeit von Werbung in Gestalt von Wikipedia-Artikeln überwältigt werden, und da die WMF nicht in der Lage zu sein scheint, dem irgendetwas entgegenzusetzen, wäre der einzige gangbare Weg für die Autoren, fürs erste die Neuanlage von Artikeln über Unternehmen zu untersagen"*, schreibt der Benutzer Smallbones in seinem Editorial <https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2020-01-27/From_the_editor> zur heutigen Ausgabe."
  14. Danowski, P.: Authority files and Web 2.0 : Wikipedia and the PND. An Example (2007) 0.05
    0.047677346 = product of:
      0.11919336 = sum of:
        0.07816069 = weight(_text_:books in 1291) [ClassicSimilarity], result of:
          0.07816069 = score(doc=1291,freq=2.0), product of:
            0.29274467 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.060570993 = queryNorm
            0.2669927 = fieldWeight in 1291, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1291)
        0.04103267 = weight(_text_:22 in 1291) [ClassicSimilarity], result of:
          0.04103267 = score(doc=1291,freq=2.0), product of:
            0.2121093 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.060570993 = queryNorm
            0.19345059 = fieldWeight in 1291, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1291)
      0.4 = coord(2/5)
    
    Abstract
    More and more users index everything on their own in the Web 2.0. There are services for links, videos, pictures, books, encyclopaedic articles and scientific articles. All these services are library-independent. But must that really be so? Can't libraries help with their experience and tools to make user indexing better? Based on the experience of a project between the German-language Wikipedia and the German person authority file (Personennamendatei - PND) located at the German National Library (Deutsche Nationalbibliothek), I would like to show what is possible: how users can and will use the authority files, if we let them. We will take a look at how the project worked and what we can learn for future projects. Conclusions: authority files can have a role in the Web 2.0; there must be an open interface/service for retrieval; everything that is indexed on the net with authority files can be easily integrated into a federated search; O'Reilly: you have to find ways for your data to become more important the more it is used.
    Content
    Lecture given at the workshop "Extending the multilingual capacity of The European Library in the EDL project", Stockholm, Swedish National Library, 22-23 November 2007.
  15. Rauber, A.: Digital preservation in data-driven science : on the importance of process capture, preservation and validation (2012) 0.05
    0.045373168 = product of:
      0.22686584 = sum of:
        0.22686584 = weight(_text_:objects in 469) [ClassicSimilarity], result of:
          0.22686584 = score(doc=469,freq=8.0), product of:
            0.32193914 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.060570993 = queryNorm
            0.7046855 = fieldWeight in 469, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.046875 = fieldNorm(doc=469)
      0.2 = coord(1/5)
    
    Abstract
    Current digital preservation is strongly biased towards data objects: digital files of document-style objects, or encapsulated and largely self-contained objects. To provide authenticity and provenance information, comprehensive metadata models are deployed to document information on an object's context. Yet, we claim that simply documenting an object's context may not be sufficient to ensure proper provenance and to fulfill the stated preservation goals. Specifically in e-Science and business settings, capturing, documenting and preserving entire processes may be necessary to meet the preservation goals. We thus present an approach for capturing, documenting and preserving processes, and means to assess their authenticity upon re-execution. We will discuss options as well as limitations and open challenges to achieve sound preservation, specifically within scientific processes.
  16. Crane, G.; Jones, A.: Text, information, knowledge and the evolving record of humanity (2006) 0.04
    0.042368535 = product of:
      0.105921336 = sum of:
        0.06684099 = weight(_text_:objects in 1182) [ClassicSimilarity], result of:
          0.06684099 = score(doc=1182,freq=4.0), product of:
            0.32193914 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.060570993 = queryNorm
            0.20761997 = fieldWeight in 1182, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1182)
        0.039080344 = weight(_text_:books in 1182) [ClassicSimilarity], result of:
          0.039080344 = score(doc=1182,freq=2.0), product of:
            0.29274467 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.060570993 = queryNorm
            0.13349634 = fieldWeight in 1182, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1182)
      0.4 = coord(2/5)
    
    Abstract
    Consider a sentence such as "the current price of tea in China is 35 cents per pound." In a library with millions of books we might find many statements of the above form that we could capture today with relatively simple rules: rather than pursuing every variation of a statement, programs can wait, like predators at a water hole, for their informational prey to reappear in a standard linguistic pattern. We can make inferences from sentences such as "NAME1 born at NAME2 in DATE" that NAME1 more likely than not represents a person and NAME2 a place and then convert the statement into a proposition about a person born at a given place and time. The changing price of tea in China, pedestrian birth and death dates, or other basic statements may not be truth and beauty in the Phaedrus, but a digital library that could plot the prices of various commodities in different markets over time, plot the various lifetimes of individuals, or extract and classify many events would be very useful. Services such as the Syllabus Finder and H-Bot (which Dan Cohen describes elsewhere in this issue of D-Lib) represent examples of information extraction already in use. H-Bot, in particular, builds on our evolving ability to extract information from very large corpora such as the billions of web pages available through the Google API. Aside from identifying higher order statements, however, users also want to search and browse named entities: they want to read about "C. P. E. Bach" rather than his father "Johann Sebastian" or about "Cambridge, Maryland", without hearing about "Cambridge, Massachusetts", Cambridge in the UK or any of the other Cambridges scattered around the world. Named entity identification is a well-established area with an ongoing literature. The Natural Language Processing Research Group at the University of Sheffield has developed its open source Generalized Architecture for Text Engineering (GATE) for years, while IBM's Unstructured Information Analysis and Search (UIMA) is "available as open source software to provide a common foundation for industry and academia." Powerful tools are thus freely available and more demanding users can draw upon published literature to develop their own systems. Major search engines such as Google and Yahoo also integrate increasingly sophisticated tools to categorize and identify places. The software resources are rich and expanding. The reference works on which these systems depend, however, are ill-suited for historical analysis. First, simple gazetteers and similar authority lists quickly grow too big for useful information extraction. They provide us with potential entities against which to match textual references, but existing electronic reference works assume that human readers can use their knowledge of geography and of the immediate context to pick the right Boston from the Bostons in the Getty Thesaurus of Geographic Names (TGN), but, with the crucial exception of geographic location, the TGN records do not provide any machine readable clues: we cannot tell which Bostons are large or small. If we are analyzing a document published in 1818, we cannot filter out those places that did not yet exist or that had different names: "Jefferson Davis" is not the name of a parish in Louisiana (tgn,2000880) or a county in Mississippi (tgn,2001118) until after the Civil War.
    Although the Alexandria Digital Library provides far richer data than the TGN (5.9 vs. 1.3 million names), its added size lowers, rather than increases, the accuracy of most geographic name identification systems for historical documents: most of the extra 4.6 million names cover low frequency entities that rarely occur in any particular corpus. The TGN is sufficiently comprehensive to provide quite enough noise: we find place names that are used over and over (there are almost one hundred Washingtons) and semantically ambiguous (e.g., is Washington a person or a place?). Comprehensive knowledge sources emphasize recall but lower precision. We need data with which to determine which "Tribune" or "John Brown" a particular passage denotes. Secondly and paradoxically, our reference works may not be comprehensive enough. Human actors come and go over time. Organizations appear and vanish. Even places can change their names or vanish. The TGN does associate the obsolete name Siam with the nation of Thailand (tgn,1000142) - but also with towns named Siam in Iowa (tgn,2035651), Tennessee (tgn,2101519), and Ohio (tgn,2662003). Prussia appears but as a general region (tgn,7016786), with no indication when or if it was a sovereign nation. And if places do point to the same object over time, that object may have very different significance over time: in the foundational works of Western historiography, Herodotus reminds us that the great cities of the past may be small today, and the small cities of today great tomorrow (Hdt. 1.5), while Thucydides stresses that we cannot estimate the past significance of a place by its appearance today (Thuc. 1.10). In other words, we need to know the population figures for the various Washingtons in 1870 if we are analyzing documents from 1870. The foundations have been laid for reference works that provide machine actionable information about entities at particular times in history. The Alexandria Digital Library Gazetteer Content Standard8 represents a sophisticated framework with which to create such resources: places can be associated with temporal information about their foundation (e.g., Washington, DC, founded on 16 July 1790), changes in names for the same location (e.g., Saint Petersburg to Leningrad and back again), population figures at various times and similar historically contingent data. But if we have the software and the data structures, we do not yet have substantial amounts of historical content such as plentiful digital gazetteers, encyclopedias, lexica, grammars and other reference works to illustrate many periods and, even if we do, those resources may not be in a useful form: raw OCR output of a complex lexicon or gazetteer may have so many errors and have captured so little of the underlying structure that the digital resource is useless as a knowledge base. Put another way, human beings are still much better at reading and interpreting the contents of page images than machines. While people, places, and dates are probably the most important core entities, we will find a growing set of objects that we need to identify and track across collections, and each of these categories of objects will require its own knowledge sources. The following section enumerates and briefly describes some existing categories of documents that we need to mine for knowledge. This brief survey focuses on the format of print sources (e.g., highly structured textual "database" vs. 
unstructured text) to illustrate some of the challenges involved in converting our published knowledge into semantically annotated, machine actionable form.
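    The rule-based extraction described at the start of the abstract ("NAME1 born at NAME2 in DATE" becomes a proposition about a person, place, and time) can be illustrated with a toy pattern matcher; this is a sketch of the idea only, not the GATE or UIMA machinery the authors mention, and it assumes the statement already follows the fixed phrasing:

```python
import re

# Toy pattern for statements of the form "<person> born at <place> in <year>".
# Real systems use trained named-entity recognizers rather than this
# capitalization heuristic; the example sentence is constructed for illustration.
PATTERN = re.compile(
    r"(?P<person>[A-Z][\w. ]+?) born at (?P<place>[A-Z][\w ]+?) in (?P<date>\d{3,4})"
)

def extract_birth_facts(text):
    """Convert matching sentences into (person, place, date) propositions."""
    return [(m["person"], m["place"], m["date"]) for m in PATTERN.finditer(text)]

print(extract_birth_facts("C. P. E. Bach born at Weimar in 1714."))
# -> [('C. P. E. Bach', 'Weimar', '1714')]
```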
  17. Lavoie, B.; Connaway, L.S.; Dempsey, L.: Anatomy of aggregate collections : the example of Google print for libraries (2005) 0.04
    0.04233863 = product of:
      0.10584657 = sum of:
        0.08122697 = weight(_text_:books in 1184) [ClassicSimilarity], result of:
          0.08122697 = score(doc=1184,freq=6.0), product of:
            0.29274467 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.060570993 = queryNorm
            0.27746695 = fieldWeight in 1184, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1184)
        0.0246196 = weight(_text_:22 in 1184) [ClassicSimilarity], result of:
          0.0246196 = score(doc=1184,freq=2.0), product of:
            0.2121093 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.060570993 = queryNorm
            0.116070345 = fieldWeight in 1184, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1184)
      0.4 = coord(2/5)
    
    Abstract
    Google's December 2004 announcement of its intention to collaborate with five major research libraries - Harvard University, the University of Michigan, Stanford University, the University of Oxford, and the New York Public Library - to digitize and surface their print book collections in the Google searching universe has, predictably, stirred conflicting opinion, with some viewing the project as a welcome opportunity to enhance the visibility of library collections in new environments, and others wary of Google's prospective role as gateway to these collections. The project has been vigorously debated on discussion lists and blogs, with the participating libraries commonly referred to as "the Google 5". One point most observers seem to concede is that the questions raised by this initiative are both timely and significant. The Google Print Library Project (GPLP) has galvanized a long overdue, multi-faceted discussion about library print book collections. The print book is core to library identity and practice, but in an era of zero-sum budgeting, it is almost inevitable that print book budgets will decline as budgets for serials, digital resources, and other materials expand. As libraries re-allocate resources to accommodate changing patterns of user needs, print book budgets may be adversely impacted. Of course, the degree of impact will depend on a library's perceived mission. A public library may expect books to justify their shelf-space, with de-accession the consequence of minimal use. A national library, on the other hand, has a responsibility to the scholarly and cultural record and may seek to collect comprehensively within particular areas, with the attendant obligation to secure the long-term retention of its print book collections. The combination of limited budgets, changing user needs, and differences in library collection strategies underscores the need to think about a collective, or system-wide, print book collection - in particular, how can an inter-institutional system be organized to achieve goals that would be difficult, and/or prohibitively expensive, for any one library to undertake individually [4]? Mass digitization programs like GPLP cast new light on these and other issues surrounding the future of library print book collections, but at this early stage, it is light that illuminates only dimly. It will be some time before GPLP's implications for libraries and library print book collections can be fully appreciated and evaluated. But the strong interest and lively debate generated by this initiative suggest that some preliminary analysis - premature though it may be - would be useful, if only to undertake a rough mapping of the terrain over which GPLP potentially will extend. At the least, some early perspective helps shape interesting questions for the future, when the boundaries of GPLP become settled, workflows for producing and managing the digitized materials become systematized, and usage patterns within the GPLP framework begin to emerge.
    This article offers some perspectives on GPLP in light of what is known about library print book collections in general, and those of the Google 5 in particular, from information in OCLC's WorldCat bibliographic database and holdings file. Questions addressed include: * Coverage: What proportion of the system-wide print book collection will GPLP potentially cover? What is the degree of holdings overlap across the print book collections of the five participating libraries? * Language: What is the distribution of languages associated with the print books held by the GPLP libraries? Which languages are predominant? * Copyright: What proportion of the GPLP libraries' print book holdings are out of copyright? * Works: How many distinct works are represented in the holdings of the GPLP libraries? How does a focus on works impact coverage and holdings overlap? * Convergence: What are the effects on coverage of using a different set of five libraries? What are the effects of adding the holdings of additional libraries to those of the GPLP libraries, and how do these effects vary by library type? These questions certainly do not exhaust the analytical possibilities presented by GPLP. More in-depth analysis might look at Google 5 coverage in particular subject areas; it also would be interesting to see how many books covered by the GPLP have already been digitized in other contexts. However, these questions are left to future studies. The purpose here is to explore a few basic questions raised by GPLP, and in doing so, provide an empirical context for the debate that is sure to continue for some time to come. A secondary objective is to lay some groundwork for a general set of questions that could be used to explore the implications of any mass digitization initiative. A suggested list of questions is provided in the conclusion of the article.
    Date
    26.12.2011 14:08:22
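    The coverage and overlap questions listed in the abstract reduce to set operations once each library's print-book holdings are represented as sets of work identifiers; a toy sketch with invented identifiers, not data from OCLC's WorldCat:

```python
from collections import Counter

# Invented holdings: work identifiers per "Google 5" library.
holdings = {
    "Harvard":  {"w1", "w2", "w3", "w5"},
    "Michigan": {"w2", "w3", "w4"},
    "Stanford": {"w1", "w4", "w6"},
    "Oxford":   {"w2", "w6"},
    "NYPL":     {"w3", "w5", "w7"},
}

# Coverage: distinct works in the combined collection of the five libraries.
combined = set().union(*holdings.values())
print("distinct works:", len(combined))

# Overlap: works held by two or more of the five libraries.
counts = Counter(w for works in holdings.values() for w in works)
print("held by >= 2 libraries:", sorted(w for w, n in counts.items() if n >= 2))
```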
  18. "Google Books" darf weitermachen wie bisher : Entscheidung des Supreme Court in den USA (2016) 0.04
    0.038290758 = product of:
      0.19145378 = sum of:
        0.19145378 = weight(_text_:books in 2923) [ClassicSimilarity], result of:
          0.19145378 = score(doc=2923,freq=12.0), product of:
            0.29274467 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.060570993 = queryNorm
            0.6539958 = fieldWeight in 2923, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2923)
      0.2 = coord(1/5)
    
    Abstract
    The Internet giant may continue its "Google Books" project as before. The US Supreme Court declined the review sought by an authors' association. With its project, Google is testing the limits of fairness but is acting lawfully, the judges said.
    Content
    " Im Streit mit Google um Urheberrechte ist eine Gruppe von Buchautoren am Obersten US-Gericht gescheitert. Der Supreme Court lehnte es ab, die google-freundliche Entscheidung eines niederen Gerichtes zur Revision zuzulassen. In dem Fall geht es um die Online-Bibliothek "Google Books", für die der kalifornische Konzern Gerichtsunterlagen zufolge mehr als 20 Millionen Bücher digitalisiert hat. Durch das Projekt können Internet-Nutzer innerhalb der Bücher nach Stichworten suchen und die entsprechenden Textstellen lesen. Die drei zuständigen Richter entschieden einstimmig, dass in dem Fall zwar die Grenzen der Fairness ausgetestet würden, aber das Vorgehen von Google letztlich rechtens sei. Entschädigungen in Milliardenhöhe gefürchtet Die von dem Interessensverband Authors Guild angeführten Kläger sahen ihre Urheberrechte durch "Google Books" verletzt. Dazu gehörten auch prominente Künstler wie die Schriftstellerin und Dichterin Margaret Atwood. Google führte dagegen an, die Internet-Bibliothek kurbele den Bücherverkauf an, weil Leser dadurch zusätzlich auf interessante Werke aufmerksam gemacht würden. Google reagierte "dankbar" auf die Entscheidung des Supreme Court. Der Konzern hatte befürchtet, bei einer juristischen Niederlage Entschädigungen in Milliardenhöhe zahlen zu müssen."
    Object
    Google books
    Source
    https://www.tagesschau.de/wirtschaft/google-books-entscheidung-101.html
  19. Hook, P.A.; Gantchev, A.: Using combined metadata sources to visualize a small library (OBL's English Language Books) (2017) 0.04
    0.038290758 = product of:
      0.19145378 = sum of:
        0.19145378 = weight(_text_:books in 3870) [ClassicSimilarity], result of:
          0.19145378 = score(doc=3870,freq=12.0), product of:
            0.29274467 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.060570993 = queryNorm
            0.6539958 = fieldWeight in 3870, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3870)
      0.2 = coord(1/5)
    
    Abstract
    Data from multiple knowledge organization systems are combined to provide a global overview of the content holdings of a small personal library. Subject headings and classification data are used to effectively map the combined book and topic space of the library. While harvested and manipulated by hand, the work reveals issues and potential solutions when using automated techniques to produce topic maps of much larger libraries. The small library visualized consists of the thirty-nine, digital, English language books found in the Osama Bin Laden (OBL) compound in Abbottabad, Pakistan upon his death. As this list of books has garnered considerable media attention, it is worth providing a visual overview of the subject content of these books - some of which is not readily apparent from the titles. Metadata from subject headings and classification numbers was combined to create book-subject maps. Tree maps of the classification data were also produced. The books contain 328 subject headings. In order to enhance the base map with meaningful thematic overlay, library holding count data was also harvested (and aggregated from duplicates). This additional data revealed the relative scarcity or popularity of individual books.
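    The aggregation step described above (attaching subject headings to each book and weighting subjects by harvested holding counts) looks roughly like the following; titles, headings, and counts are invented placeholders, not the OBL metadata:

```python
# Invented sample records: each book carries subject headings and a
# harvested library-holdings count used as a popularity/scarcity signal.
books = [
    {"title": "Book A", "subjects": ["Intelligence services", "United States"], "holdings": 1200},
    {"title": "Book B", "subjects": ["United States", "Foreign relations"], "holdings": 300},
    {"title": "Book C", "subjects": ["Intelligence services"], "holdings": 45},
]

# Aggregate holdings per subject heading to build the thematic overlay
# for a book-subject map (which subjects are common vs. scarce).
subject_weight = {}
for book in books:
    for subject in book["subjects"]:
        subject_weight[subject] = subject_weight.get(subject, 0) + book["holdings"]

for subject, weight in sorted(subject_weight.items(), key=lambda kv: -kv[1]):
    print(subject, weight)
```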
  20. Faceted classification of information (o.J.) 0.04
    0.037810978 = product of:
      0.18905488 = sum of:
        0.18905488 = weight(_text_:objects in 2653) [ClassicSimilarity], result of:
          0.18905488 = score(doc=2653,freq=2.0), product of:
            0.32193914 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.060570993 = queryNorm
            0.58723795 = fieldWeight in 2653, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.078125 = fieldNorm(doc=2653)
      0.2 = coord(1/5)
    
    Abstract
    An explanation of faceted classification meant for people working in knowledge management. An example given for a high-technology company has the fundamental categories Products, Applications, Organizations, People, Domain objects ("technologies applied in the marketplace in which the organization participates"), Events (i.e. time), and Publications.
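    A compact sketch of how the fundamental categories listed above could be applied to classify and retrieve a single knowledge-management item; the sample values are invented:

```python
# Facets from the example above; each item is classified along every
# applicable facet so it can be retrieved by any combination of them.
item = {
    "Products": "Router X200",
    "Applications": "Network monitoring",
    "Organizations": "Acme Corp",
    "People": "J. Doe",
    "Domain objects": "Packet switching",
    "Events": "2019 product launch",
    "Publications": "X200 white paper",
}

# Faceted retrieval: filter a collection by any subset of facet values.
collection = [item]
query = {"Applications": "Network monitoring", "People": "J. Doe"}
hits = [i for i in collection if all(i.get(f) == v for f, v in query.items())]
print(len(hits))   # 1
```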

Years

Languages

  • e 161
  • d 91
  • el 2
  • a 1
  • f 1
  • nl 1

Types

  • a 134
  • i 11
  • b 6
  • r 6
  • m 5
  • s 5
  • n 1
  • x 1