Search (126 results, page 7 of 7)

  • × theme_ss:"Grundlagen u. Einführungen: Allgemeine Literatur"
  1. Scott, M.L.: Dewey Decimal Classification, 21st edition : a study manual and number building guide (1998) 0.00
    0.002124145 = product of:
      0.00849658 = sum of:
        0.00849658 = weight(_text_:information in 1454) [ClassicSimilarity], result of:
          0.00849658 = score(doc=1454,freq=4.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.0960027 = fieldWeight in 1454, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1454)
      0.25 = coord(1/4)
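    The explain trace above follows Lucene's classic tf-idf scoring. As a minimal sketch (an assumed reconstruction for illustration, not output of this search system), the arithmetic of this first entry can be reproduced as follows; the idf formula and the variable names are assumptions based on Lucene's documented ClassicSimilarity defaults, while the constants are copied from the trace.

        # Hypothetical re-computation of the explain trace above (ClassicSimilarity).
        import math

        freq = 4.0                    # termFreq of "information" in doc 1454
        doc_freq, max_docs = 20772, 44218
        query_norm = 0.050415643      # queryNorm as reported in the trace
        field_norm = 0.02734375       # fieldNorm(doc=1454)
        coord = 1 / 4                 # coord(1/4): one of four query clauses matched

        idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # ~1.7554779
        tf = math.sqrt(freq)                              # 2.0
        query_weight = idf * query_norm                   # ~0.08850355 (queryWeight)
        field_weight = tf * idf * field_norm              # ~0.0960027  (fieldWeight)
        score = coord * query_weight * field_weight       # ~0.002124145

        print(round(score, 9))

    The remaining entries on this page decompose the same way, differing only in term frequency and field norm.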
    
    Content
    This work is a comprehensive guide to Edition 21 of the Dewey Decimal Classification (DDC 21). The previous edition was edited by John Phillip Comaromi, who was also the editor of DDC 20 and thus was able to impart in its pages information about the inner workings of the Decimal Classification Editorial Policy Committee, which guides the Classification's development. The manual begins with a brief history of the development of the Dewey Decimal Classification (DDC) up to this edition and its impact internationally. It continues with a review of the general structure of DDC and of the 21st edition in particular, with emphasis on the framework ("Hierarchical Order," "Centered Entries") that aids the classifier in its use. An extensive part of this manual is an in-depth review of how DDC is updated with each edition, such as reductions and expansions, and detailed lists of such changes in each table and class. Each citation of a change indicates the previous location of the topic, usually in parentheses but also in textual explanations ("moved from 248.463"). A brief discussion of the topic moved or added provides substance to what otherwise would be lists of numbers. Where the changes are so dramatic that a new class or division structure has been developed, Comparative and Equivalence Tables are provided in volume 1 of DDC 21 (such as Life sciences in 560-590); any such list in this manual would only be redundant. In these cases, the only references to changes in this work are those topics that were moved from other classes. Besides these citations of changes, each class is introduced with a brief background discussion about its development or structure or both to familiarize the user with it. A new aspect in this edition of the DDC study manual is that it is combined with Marty Bloomberg and Hans Weber's An Introduction to Classification and Number Building in Dewey (Libraries Unlimited, 1976) to provide a complete reference for the application of DDC. Detailed examples of number building for each class will guide the classifier through the process that results in classifications for particular works within that class. In addition, at the end of each chapter, lists of book summaries are given as exercises in number analysis, with Library of Congress-assigned classifications to provide benchmarks. The last chapter covers book, or author, numbers, which, combined with the classification and often the date, provide unique call numbers for circulation and shelf arrangement. Guidelines on the application of Cutter tables and Library of Congress author numbers complete this comprehensive reference to the use of DDC 21. As with all such works, this was a tremendous undertaking, which coincided with the author's completing a new edition of Conversion Tables: LC-Dewey, Dewey-LC (Libraries Unlimited, forthcoming). Helping hands are always welcome in our human existence, and this book is no exception. Grateful thanks are extended to Jane Riddle, at the NASA Goddard Space Flight Center Library, and to Darryl Hines, at SANAD Support Technologies, Inc., for their kind assistance in the completion of this study manual.
    Footnote
    Rez. in: Managing information 6(1999) no.2, S.49 (J. Bowman)
  2. Antoniou, G.; Harmelen, F. van: ¬A semantic Web primer (2004) 0.00
    0.0018582398 = product of:
      0.007432959 = sum of:
        0.007432959 = weight(_text_:information in 468) [ClassicSimilarity], result of:
          0.007432959 = score(doc=468,freq=6.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.083984874 = fieldWeight in 468, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.01953125 = fieldNorm(doc=468)
      0.25 = coord(1/4)
    
    Footnote
    Rez. in: JASIST 57(2006) no.8, S.1132-1133 (H. Che): "The World Wide Web has been the main source of an important shift in the way people communicate with each other, get information, and conduct business. However, most of the current Web content is only suitable for human consumption. The main obstacle to providing better quality of service is that the meaning of Web content is not machine-accessible. The "Semantic Web" is envisioned by Tim Berners-Lee as a logical extension to the current Web that enables explicit representations of term meaning. It aims to bring the Web to its full potential via the exploration of this machine-processable metadata. To fulfill this, it provides some metalanguages like RDF, OWL, DAML+OIL, and SHOE for expressing knowledge that has clear, unambiguous meanings. The first steps in weaving the Semantic Web into the current Web are successfully underway. In the forthcoming years, these efforts still remain highly focused in the research and development community. In the next phase, the Semantic Web will respond more intelligently to user queries. The first chapter starts with an excellent introduction to the Semantic Web vision. At first, today's Web is introduced, and problems with some current applications like search engines are also covered. Subsequently, knowledge management, business-to-consumer electronic commerce, business-to-business electronic commerce, and personal agents are used as examples to show the potential requirements for the Semantic Web. Next comes a brief description of the underpinning technologies, including metadata, ontology, logic, and agents. The differences between the Semantic Web and Artificial Intelligence are also discussed in a later subsection. In section 1.4, the famous "layer-cake" diagram is given to show a layered view of the Semantic Web. From chapter 2, the book starts addressing some of the most important technologies for constructing the Semantic Web. In chapter 2, the authors discuss XML and its related technologies such as namespaces, XPath, and XSLT. XML is a simple, very flexible text format which is often used for the exchange of a wide variety of data on the Web and elsewhere. The W3C has defined various languages on top of XML, such as RDF. Although this chapter is very well planned and written, many details are not included because of the extensiveness of the XML technologies. Many other books on XML provide more comprehensive coverage.
    The next chapter introduces the Resource Description Framework (RDF) and RDF Schema (RDFS). Unlike XML, RDF provides a foundation for expressing the semantics of data: it is a standard data model for machine-processable semantics. RDF Schema offers a number of modeling primitives for organizing RDF vocabularies in typed hierarchies. In addition to RDF and RDFS, a query language for RDF, i.e. RQL, is introduced. This chapter and the next are two of the most important chapters in the book. Chapter 4 presents another language called Web Ontology Language (OWL). Because RDFS is quite primitive as a modeling language for the Web, more powerful languages are needed. A richer language, DAML+OIL, is thus proposed as a joint endeavor of the United States and Europe. OWL takes DAML+OIL as the starting point, and aims to be the standardized and broadly accepted ontology language. At the beginning of the chapter, the nontrivial relation with RDF/RDFS is discussed. Then the authors describe the various language elements of OWL in some detail. Moreover, Appendix A contains an abstract OWL syntax, which compresses OWL and makes it much easier to read. Chapter 5 covers both monotonic and nonmonotonic rules. Whereas the previous chapters mainly concentrate on specializations of knowledge representation, this chapter depicts the foundation of knowledge representation and inference. Two examples are also given to explain monotonic and nonmonotonic rules, respectively. To get the most out of the chapter, readers had better gain a thorough understanding of predicate logic first. Chapter 6 presents several realistic application scenarios to which Semantic Web technology can be applied, including horizontal information products at Elsevier, data integration at Audi, skill finding at Swiss Life, a think-tank portal at EnerSearch, e-learning, Web services, multimedia collection indexing, online procurement, and device interoperability. These case studies give us a real feel for the Semantic Web.
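    To make the RDF/RDFS data model described in the review concrete, here is a small, hedged sketch in Python using the rdflib library; the library choice, namespace URI and property names are illustrative assumptions, not taken from the book.

        # Illustrative sketch: an RDF graph with a tiny RDFS class hierarchy,
        # roughly in the spirit of the primer's RDF/RDFS chapter. Names are made up.
        from rdflib import Graph, Literal, Namespace, RDF, RDFS

        EX = Namespace("http://example.org/")
        g = Graph()

        # RDFS: organise the vocabulary in a typed hierarchy
        g.add((EX.Primer, RDF.type, EX.Book))
        g.add((EX.Book, RDFS.subClassOf, EX.Document))

        # RDF: plain subject-predicate-object statements about a resource
        g.add((EX.Primer, EX.title, Literal("A Semantic Web primer")))
        g.add((EX.Primer, EX.author, Literal("Antoniou, G.; van Harmelen, F.")))

        print(g.serialize(format="turtle"))

    A query language such as SPARQL (or the RQL mentioned in the review) can then be used to retrieve matching triples from such a graph.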
    Series
    Cooperative information systems
  3. Booth, P.F.: Indexing : the manual of good practice (2001) 0.00
    0.0017165683 = product of:
      0.006866273 = sum of:
        0.006866273 = weight(_text_:information in 1968) [ClassicSimilarity], result of:
          0.006866273 = score(doc=1968,freq=8.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.0775819 = fieldWeight in 1968, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.015625 = fieldNorm(doc=1968)
      0.25 = coord(1/4)
    
    Footnote
    Rez. in: nfd - Information Wissenschaft und Praxis 54(2003) H.7, S.440-442 (R. Fugmann): "The book opens with the chapter "Myths about Indexing" and a list of widespread misconceptions about indexing, above all about the making of back-of-the-book indexes. A single sentence aptly sketches the problem to which the book is devoted: "With the development of electronic documents, it has become possible to store very large amounts of information; but storage is not of much use without the capability to retrieve, to convert, transfer and reuse the information". The author criticises the widely held view that indexing is merely a matter of "picking out words from the text or naming objects in images and using those words as index headings". Such a procedure, however, does not produce indexes but concordances (i.e. alphabetical lists of the places where text words occur) and "... is entirely dependent on the words themselves and is not concerned with the ideas behind them". Collecting information is easy; establishing retrievability has to be learned if more is to be made possible than merely finding texts again that one still remembers in every detail (known-item searches, questions of recall), including the details of the wording used for the concepts sought. Drawing on her extensive practical experience, the author describes the steps that must be taken at the conceptual and at the technical level to achieve this. Among the former she counts setting aside details that should not appear in the index ("unsought terms") because they will certainly never be a search target and, as "false friends", would swamp the searcher with trivia, a decision that can only be made with sound subject knowledge. Everything, on the other hand, that could be a sensible search target now or in the future (!) and is "sufficiently informative" deserves a heading in the index. Instructive examples also show how a text word becomes useless for the index when it appears there as a (poor) heading, torn out of the interpreting context in which it was embedded in the text. The ambiguity that clings to almost every natural-language word must likewise be resolved; otherwise the searcher will all too often be led astray when consulting the index, and the more so the larger such an uncleansed store has already become.
  4. Batley, S.: Classification in theory and practice (2005) 0.00
    0.0017165683 = product of:
      0.006866273 = sum of:
        0.006866273 = weight(_text_:information in 1170) [ClassicSimilarity], result of:
          0.006866273 = score(doc=1170,freq=8.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.0775819 = fieldWeight in 1170, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.015625 = fieldNorm(doc=1170)
      0.25 = coord(1/4)
    
    Abstract
    This book examines a core topic in traditional librarianship: classification. Classification has often been treated as a sub-set of cataloguing and indexing, with relatively few basic textbooks concentrating solely on the theory and practice of classifying resources. This book attempts to redress the balance somewhat. The aim is to demystify a complex subject by providing a sound theoretical underpinning, together with practical advice and promotion of practical skills. The text is arranged into five chapters: Chapter 1: Classification in theory and practice. This chapter explores theories of classification in broad terms and then focuses on the basic principles of library classification, introducing readers to technical terminology and different types of classification scheme. The next two chapters examine individual classification schemes in depth. Each scheme is explained using frequent examples to illustrate basic features. Working through the exercises provided should be enjoyable and will enable readers to gain practical skills in using the three most widely used general library classification schemes: Dewey Decimal Classification, Library of Congress Classification and Universal Decimal Classification. Chapter 2: Classification schemes for general collections. Dewey Decimal and Library of Congress classifications are the most useful and popular schemes for use in general libraries. The background, coverage and structure of each scheme are examined in detail in this chapter. Features of the schemes and their application are illustrated with examples. Chapter 3: Classification schemes for specialist collections. Dewey Decimal and Library of Congress may not provide sufficient depth of classification for specialist collections. In this chapter, classification schemes that cater to specialist needs are examined. Universal Decimal Classification is superficially very much like Dewey Decimal, but possesses features that make it a good choice for specialist libraries or special collections within general libraries. It is recognised that general schemes, no matter how deep their coverage, may not meet the classification needs of some collections. An answer may be to create a special classification scheme, and this process is examined in detail here. Chapter 4: Classifying electronic resources. Classification has been reborn in recent years with an increasing need to organise digital information resources. A lot of work in this area has been conducted within the computer science discipline, but it uses basic principles of classification and thesaurus construction. This chapter takes a broad view of theoretical and practical issues involved in creating classifications for digital resources by examining subject trees, taxonomies and ontologies. Chapter 5: Summary. This chapter provides a brief overview of concepts explored in depth in previous chapters. Development of practical skills is emphasised throughout the text. It is only through using classification schemes that a deep understanding of their structure and unique features can be gained. Although all the major schemes covered in the text are available on the Web, it is recommended that hard-copy versions are used by those wishing to become acquainted with their overall structure. Recommended readings are supplied at the end of each chapter and provide useful sources of additional information and detail.
    Classification demands precision and the application of analytical skills; working carefully through the examples and the practical exercises should help readers to improve these faculties. Anyone who enjoys cryptic crosswords should recognise a parallel: classification often involves taking the meaning of something apart and then reassembling it in a different way.
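    The subject trees mentioned for chapter 4 can be pictured as nothing more than a nested hierarchy of headings with notation attached. A rough, purely illustrative sketch (the headings and notations are loosely DDC-flavoured but invented for this example, not taken from the book):

        # Toy subject tree: each node holds an (invented) notation and child topics.
        # It only illustrates the idea of a browsable hierarchy for digital resources.
        subject_tree = {
            "Generalities": {
                "notation": "000",
                "children": {
                    "Library & information sciences": {
                        "notation": "020",
                        "children": {
                            "Classification": {"notation": "025.4", "children": {}},
                        },
                    },
                },
            },
        }

        def path_to(topic, tree, trail=()):
            """Return the chain of headings leading to a topic, or None if absent."""
            for heading, node in tree.items():
                here = trail + (heading,)
                if heading == topic:
                    return here
                found = path_to(topic, node["children"], here)
                if found:
                    return found
            return None

        print(path_to("Classification", subject_tree))
        # ('Generalities', 'Library & information sciences', 'Classification')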
    Footnote
    - Similarly, there is very little space devoted to the thorny issue of subject analysis, which is at the conceptual core of classification work of any kind. The author's recommendations are practical, and do not address the subjective nature of this activity, nor the fundamental issues of how the classification schemes are interpreted and applied in diverse contexts, especially with respect to what a work "is about." - Finally, there is very little about practical problem solving - stories from the trenches as it were. How does a classifier choose one option over another when both seem plausible, even given that he or she has done a user and task analysis? How do classifiers respond to rapid or seemingly impulsive change? How do we evaluate the products of our work? How do we know what is the "correct" solution, even if we work, as most of us do, assuming that this is an elusive goal, but we try our best anyway? The least satisfying section of the book is the last, where the author proposes some approaches to organizing electronic resources. The suggestions seem to be to more or less transpose and adapt skills and procedures from the world of organizing books on shelves to the virtual hyperlinked world of the Web. For example, the author states (p. 153-54): Precise classification of documents is perhaps not as crucial in the electronic environment as it is in the traditional library environment. A single document can be linked to and retrieved via several different categories to allow for individual needs and expertise. However, it is not good practice to overload the system with links because that will affect its use. Effort must be made to ensure that inappropriate or redundant links are not included. The point is well taken: too much irrelevant information is not helpful. At the same time an important point concerning the electronic environment has been overlooked as well: redundancy is what relieves the user from making precise queries or knowing the "right" place for launching a search, and redundancy is what is so natural on the Web. These are small objections, however. Overall the book is a carefully crafted primer that gives the student a strong foundation on which to build further understanding. There are well-chosen and accessible references for further reading. I would recommend it to any instructor as an excellent starting place for deeper analysis in the classroom and to any student as an accompanying text to the schedules themselves."
    Series
    Information professional series
  5. Bertram, J.: Einführung in die inhaltliche Erschließung : Grundlagen - Methoden - Instrumente (2005) 0.00
    0.001213797 = product of:
      0.004855188 = sum of:
        0.004855188 = weight(_text_:information in 210) [ClassicSimilarity], result of:
          0.004855188 = score(doc=210,freq=4.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.054858685 = fieldWeight in 210, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.015625 = fieldNorm(doc=210)
      0.25 = coord(1/4)
    
    Abstract
    Subject indexing starts from the question of how one can gain targeted, quick access to the contents of documents and an overview of them. A document here can be anything that may serve as a carrier of content data: a patent specification or a poster, a television programme or a piece of music, a museum object or an Internet resource, a book, a newspaper article and much more. As a lecturer in subject indexing at the Potsdam Institut für Information und Dokumentation (IID), which trains scientific documentalists, I dealt with this question for four years. The present book is a distillation of the teaching materials I developed during that time. In preparing those materials I wanted, on the one hand, to counter the lack of up-to-date German-language literature on subject indexing and, on the other, to document the teaching as completely as possible. Combining my lecture notes into a book is a response to the frequently voiced wish of course participants to be able to buy a "proper book" in addition to all the loose-leaf collections. The publication is equally addressed to my current clientele, the students of the Informationsberufe degree programme in Eisenstadt (Austria), and beyond that to anyone enrolled in library, information and documentation study programmes. I also hope that it may serve one or another teacher as a working aid. In preparing the teaching materials my aim was not to reinvent the wheel, but rather to gather the wheels that already exist and unite them in a single coherent framework. The pool of literature on which the publication draws consists first of national and international standards that touch on subject indexing; then of the not exactly numerous monographs and articles in the German-speaking world, together with a few English-language books on the topic; and finally of journal articles from the library and documentation field of the past four years, again mostly from the German-speaking world and occasionally from the Anglo-American one. A central concern of this book is to bring light into the terminological darkness that confronts the interested reader during intensive study of the literature: actual usage frequently deviates from the standardised vocabulary, librarians use different terms than documentalists and information scientists, different authors speak different languages, and even the standards themselves are by no means always unambiguous. I have aimed at terminological consistency by listing alternative German expressions for a term in footnotes; where appropriate, the corresponding English designations are given there as well, for which I have largely followed the standardised English vocabulary.
    Footnote
    Rez. in: Information - Wissenschaft und Praxis 56(2005) H.7, S.395-396 (M. Ockenfeld): "... The book is the second volume in a series published by the International Network for Terminology (TermNet), a group of fourteen terminology specialists from eight countries. One of the author's concerns, as stated in her preface, is indeed to "bring light into the terminological darkness" into which one all too easily strays during intensive study of the literature, because actual usage often deviates from the standardised vocabulary and because, moreover, librarians, documentalists and information scientists use terms differently. ... The didactically well-prepared material is presented very clearly, precisely and with palpable enthusiasm. The book is also a pleasure to read thanks to its careful typography, above all for those who are used to the traditional German orthography. It can be emphatically recommended to its intended audience, students and teachers of higher-education programmes in the library, information and documentation field, as a compact textbook and workbook on the fundamentals of subject indexing."
  6. Oberhauser, O.: Automatisches Klassifizieren : Entwicklungsstand - Methodik - Anwendungsbereiche (2005) 0.00
    0.0010728551 = product of:
      0.0042914203 = sum of:
        0.0042914203 = weight(_text_:information in 38) [ClassicSimilarity], result of:
          0.0042914203 = score(doc=38,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.048488684 = fieldWeight in 38, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.01953125 = fieldNorm(doc=38)
      0.25 = coord(1/4)
    
    Footnote
    The question posed at the beginning of the work, whether "the techniques of automatic classification are already advanced enough today to index large quantities of electronic documents satisfactorily" (p. 13), is answered by the author with an unequivocal "no", which strongly qualifies Salton and McGill's 1983 claim "that simple automatic indexing methods work quickly and cheaply, and that they achieve recall and precision values at least as good as those of manual indexing with a controlled vocabulary" (Gerard Salton and Michael J. McGill: Information Retrieval. Hamburg etc. 1987, p. 64 f.). Oberhauser does not want to speculate about the reasons why three of the large projects are no longer being pursued, but names lack of success, a shift of work within the institutions involved, and funding problems as possible causes. The author sees the greatest development potential for the automatic indexing of large document collections today in the areas of patent and media documentation; libraries should follow developments there closely, since these are "certainly aiming, in the medium term, at a fully automated solution of satisfactory quality" (p. 146). Oberhauser's account is a thoroughly successful work that belongs in the reference collection of everyone interested in automatic indexing."

Languages

  • e 89
  • d 37

Types

  • m 109
  • s 12
  • a 9
  • el 2
  • ? 1
