Search (899 results, page 1 of 45)

  • × type_ss:"m"
  • × year_i:[2000 TO 2010}
  1. Browne, G.; Jermey, J.: Website indexing : enhancing access to information within websites (2001) 0.08
    0.08014128 = product of:
      0.2003532 = sum of:
        0.19088061 = weight(_text_:91 in 3914) [ClassicSimilarity], result of:
          0.19088061 = score(doc=3914,freq=2.0), product of:
            0.25837386 = queryWeight, product of:
              5.5722036 = idf(docFreq=456, maxDocs=44218)
              0.046368346 = queryNorm
            0.7387768 = fieldWeight in 3914, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5722036 = idf(docFreq=456, maxDocs=44218)
              0.09375 = fieldNorm(doc=3914)
        0.009472587 = product of:
          0.018945174 = sum of:
            0.018945174 = weight(_text_:information in 3914) [ClassicSimilarity], result of:
              0.018945174 = score(doc=3914,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.23274569 = fieldWeight in 3914, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3914)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Footnote
    Rez. in: Online 25(2001) no.6, S.94 (D.L. Wiley); Internet Reference Services Quarterly 6(2001) no.1, S.91-93 (D.J. Bertuca)
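    The nested score breakdowns above are Lucene ClassicSimilarity (tf-idf) explain trees: each term weight is queryWeight (idf x queryNorm) times fieldWeight (sqrt(freq) x idf x fieldNorm), and the document score scales the sum by a coord factor for the fraction of query clauses matched. A minimal sketch reproducing the first result's numbers (constants copied from the tree; function names are illustrative, not Lucene API):

    ```python
    import math

    def classic_idf(doc_freq: int, max_docs: int) -> float:
        # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    def term_weight(freq: float, doc_freq: int, max_docs: int,
                    query_norm: float, field_norm: float) -> float:
        # queryWeight = idf * queryNorm; fieldWeight = sqrt(freq) * idf * fieldNorm
        idf = classic_idf(doc_freq, max_docs)
        return (idf * query_norm) * (math.sqrt(freq) * idf * field_norm)

    # Constants from the first explain tree (doc 3914):
    QN, FN, MAX_DOCS = 0.046368346, 0.09375, 44218

    w_91 = term_weight(freq=2.0, doc_freq=456, max_docs=MAX_DOCS,
                       query_norm=QN, field_norm=FN)      # weight(_text_:91)
    w_info = term_weight(freq=2.0, doc_freq=20772, max_docs=MAX_DOCS,
                         query_norm=QN, field_norm=FN)    # weight(_text_:information)

    # The "information" clause sits in a subquery scaled by coord(1/2);
    # the outer sum is then scaled by coord(2/5): 2 of 5 clauses matched.
    doc_score = (w_91 + w_info * 0.5) * (2 / 5)
    ```

    Run as-is, `w_91` reproduces 0.19088061 and `doc_score` reproduces 0.08014128 to within rounding of the printed constants.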
  2. McCrank, L.J.: Historical information science : an emerging unidiscipline (2001) 0.06
    0.05929558 = product of:
      0.09882596 = sum of:
        0.0068111527 = weight(_text_:a in 1242) [ClassicSimilarity], result of:
          0.0068111527 = score(doc=1242,freq=8.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.12739488 = fieldWeight in 1242, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1242)
        0.079533584 = weight(_text_:91 in 1242) [ClassicSimilarity], result of:
          0.079533584 = score(doc=1242,freq=2.0), product of:
            0.25837386 = queryWeight, product of:
              5.5722036 = idf(docFreq=456, maxDocs=44218)
              0.046368346 = queryNorm
            0.30782366 = fieldWeight in 1242, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5722036 = idf(docFreq=456, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1242)
        0.01248123 = product of:
          0.02496246 = sum of:
            0.02496246 = weight(_text_:information in 1242) [ClassicSimilarity], result of:
              0.02496246 = score(doc=1242,freq=20.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.30666938 = fieldWeight in 1242, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1242)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Footnote
    Rez. in: JASIST 54(2003) no.1, S.91-92 (L.A. Ennis): "Historical Information Science: An Emerging Unidiscipline, the culmination of research and experience begun in the early 1970s, is a massive work in which Lawrence McCrank, Professor of Library and Information Science and Dean of Library and Information Service at Chicago State University, examines, explains, and discusses the interdisciplinary merging of history and information science. Spanning 1,192 pages, McCrank argues for a new field of study called Historical Information Science to mesh "equally the subject matter of a historical field of investigation, quantified Social Science and linguistic research methodologies, computer science and technology, and information science . . . " (p. 1). Throughout this bibliographic essay, containing more than 6,000 citations, McCrank demonstrates how history and information science have the potential to complement each other. The primary focus of the book is on the access, preservation, and interpretation of historical resources and how information technology affects research methodology in various information settings such as libraries, museums, and archives. The book, however, is highly scholarly and highly theoretical, even philosophical, and not easy to read. Chapters one through five make up the 578 pages of the bibliographic essay portion of the book. Each chapter is practically a monograph on its own. Although the individual chapters are divided and subdivided into sections, the length and complexity of each chapter, combined with the author's verbosity, often obscure the chapters' main focus and argument."
    Imprint
    Medford, NJ : Information Today
  3. Kuhlthau, C.C.: Seeking meaning : a process approach to library and information services (2003) 0.05
    0.045495473 = product of:
      0.11373868 = sum of:
        0.01155891 = weight(_text_:a in 4585) [ClassicSimilarity], result of:
          0.01155891 = score(doc=4585,freq=4.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.2161963 = fieldWeight in 4585, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.09375 = fieldNorm(doc=4585)
        0.102179766 = sum of:
          0.026792523 = weight(_text_:information in 4585) [ClassicSimilarity], result of:
            0.026792523 = score(doc=4585,freq=4.0), product of:
              0.08139861 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046368346 = queryNorm
              0.3291521 = fieldWeight in 4585, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.09375 = fieldNorm(doc=4585)
          0.07538725 = weight(_text_:22 in 4585) [ClassicSimilarity], result of:
            0.07538725 = score(doc=4585,freq=2.0), product of:
              0.16237405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046368346 = queryNorm
              0.46428138 = fieldWeight in 4585, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.09375 = fieldNorm(doc=4585)
      0.4 = coord(2/5)
    
    Abstract
    First published in 1993, this book presents a new process approach to library and information services.
    Date
    25.11.2005 18:58:22
  4. Pinker, S.: ¬The blank slate : the modern denial of human nature (2002) 0.04
    0.044538807 = product of:
      0.22269404 = sum of:
        0.22269404 = weight(_text_:91 in 4370) [ClassicSimilarity], result of:
          0.22269404 = score(doc=4370,freq=2.0), product of:
            0.25837386 = queryWeight, product of:
              5.5722036 = idf(docFreq=456, maxDocs=44218)
              0.046368346 = queryNorm
            0.86190623 = fieldWeight in 4370, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5722036 = idf(docFreq=456, maxDocs=44218)
              0.109375 = fieldNorm(doc=4370)
      0.2 = coord(1/5)
    
    Footnote
    Rez. in: Spektrum der Wissenschaft 2003, H.2, S.91-95 (H. Breuer)
  5. Taylor, A.: Engaging with knowledge : emerging concepts in knowledge management (2003) 0.04
    0.041002322 = product of:
      0.1025058 = sum of:
        0.008173384 = weight(_text_:a in 60) [ClassicSimilarity], result of:
          0.008173384 = score(doc=60,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.15287387 = fieldWeight in 60, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.09375 = fieldNorm(doc=60)
        0.09433242 = sum of:
          0.018945174 = weight(_text_:information in 60) [ClassicSimilarity], result of:
            0.018945174 = score(doc=60,freq=2.0), product of:
              0.08139861 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046368346 = queryNorm
              0.23274569 = fieldWeight in 60, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.09375 = fieldNorm(doc=60)
          0.07538725 = weight(_text_:22 in 60) [ClassicSimilarity], result of:
            0.07538725 = score(doc=60,freq=2.0), product of:
              0.16237405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046368346 = queryNorm
              0.46428138 = fieldWeight in 60, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.09375 = fieldNorm(doc=60)
      0.4 = coord(2/5)
    
    Date
    2. 2.2003 18:31:22
    Theme
    Information Resources Management
  6. Nohr, H.: Grundlagen der automatischen Indexierung : ein Lehrbuch (2003) 0.04
    0.039877582 = product of:
      0.099693954 = sum of:
        0.06362687 = weight(_text_:91 in 1767) [ClassicSimilarity], result of:
          0.06362687 = score(doc=1767,freq=2.0), product of:
            0.25837386 = queryWeight, product of:
              5.5722036 = idf(docFreq=456, maxDocs=44218)
              0.046368346 = queryNorm
            0.24625893 = fieldWeight in 1767, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5722036 = idf(docFreq=456, maxDocs=44218)
              0.03125 = fieldNorm(doc=1767)
        0.036067087 = sum of:
          0.010938003 = weight(_text_:information in 1767) [ClassicSimilarity], result of:
            0.010938003 = score(doc=1767,freq=6.0), product of:
              0.08139861 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046368346 = queryNorm
              0.1343758 = fieldWeight in 1767, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.03125 = fieldNorm(doc=1767)
          0.025129084 = weight(_text_:22 in 1767) [ClassicSimilarity], result of:
            0.025129084 = score(doc=1767,freq=2.0), product of:
              0.16237405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046368346 = queryNorm
              0.15476047 = fieldWeight in 1767, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1767)
      0.4 = coord(2/5)
    
    Date
    22. 6.2009 12:46:51
    Footnote
    Rez. in: nfd 54(2003) H.5, S.314 (W. Ratzek): "To extract decision-relevant data from the steadily growing flood of more or less relevant documents, companies, public administrations, and specialist information institutions must develop, deploy, and maintain effective and efficient filtering systems. Holger Nohr's textbook offers the first fundamental introduction to the topic of automatic indexing. For, as the introduction notes, "How you gather, manage, and use information will determine whether you belong to the winners or the losers" (Bill Gates). The first chapter, "Einleitung" (Introduction), concentrates on the fundamentals, describing the connections between document management systems, information retrieval, and indexing for planning, decision-making, and innovation processes in both profit and non-profit organizations. At the end of the introductory chapter Nohr takes up the debate over intellectual versus automatic indexing, leading into the second chapter on automatic indexing. Here the author gives an overview of, among other things, problems of automatic language processing and indexing, and various automatic indexing methods, e.g. simple keyword extraction / full-text inversion, statistical methods, and pattern-matching methods. Nohr then treats the methods of automatic indexing in depth, with many examples, in the extensive third chapter. The fourth chapter, "Keyphrase Extraction", plays a bridging role: "An intermediate stage on the way from automatic indexing to the automatic generation of textual summaries (automatic text summarization) is represented by approaches that extract key phrases from documents (keyphrase extraction). The boundaries between automatic indexing methods and those of text summarization are fluid." (S. 91). Using NCR's Extractor / Copernic Summarizer as an example, Nohr describes how this works.
    In the fifth chapter, "Information Extraction", Nohr takes up a problem that deserves even stronger attention in the field: "The steadily rising number of electronic documents makes it desirable not only to index them automatically but also to extract the relevant information from them automatically, e.g. so that it can be transferred into business information systems for further processing or analysis." (S. 103) "Indexing and retrieval methods", as mutually dependent procedures, are treated in the sixth chapter, which centers on relevance ranking and relevance feedback as well as the use of computational-linguistic methods in searching. The "evaluation of automatic indexing" forms the thematic conclusion, dealing above all with the quality of an indexing, common retrieval measures in retrieval tests, and their use. It is also worth noting that each chapter opens with stated learning objectives and that review questions for the individual chapters are provided in the back of the book. The very numerous examples from practice, a list of abbreviations, and a subject index increase the book's usefulness. Reading it furthered this reviewer's understanding of the interplay of the LIS toolkit, business informatics (especially data warehousing), and artificial intelligence. "Grundlagen der automatischen Indexierung" should be required reading in library science programs as well. Holger Nohr's textbook is also suitable for the information professional who wants to refresh a more or less well-founded knowledge of automatic indexing quickly, accessibly, and informatively."
  7. Towards the Semantic Web : ontology-driven knowledge management (2004) 0.04
    0.03776508 = product of:
      0.0629418 = sum of:
        0.007367388 = weight(_text_:a in 4401) [ClassicSimilarity], result of:
          0.007367388 = score(doc=4401,freq=26.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.13779864 = fieldWeight in 4401, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0234375 = fieldNorm(doc=4401)
        0.047720153 = weight(_text_:91 in 4401) [ClassicSimilarity], result of:
          0.047720153 = score(doc=4401,freq=2.0), product of:
            0.25837386 = queryWeight, product of:
              5.5722036 = idf(docFreq=456, maxDocs=44218)
              0.046368346 = queryNorm
            0.1846942 = fieldWeight in 4401, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5722036 = idf(docFreq=456, maxDocs=44218)
              0.0234375 = fieldNorm(doc=4401)
        0.007854254 = product of:
          0.015708508 = sum of:
            0.015708508 = weight(_text_:information in 4401) [ClassicSimilarity], result of:
              0.015708508 = score(doc=4401,freq=22.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.19298252 = fieldWeight in 4401, product of:
                  4.690416 = tf(freq=22.0), with freq of:
                    22.0 = termFreq=22.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=4401)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    With the current changes driven by the expansion of the World Wide Web, this book uses a different approach from other books on the market: it applies ontologies to electronically available information to improve the quality of knowledge management in large and distributed organizations. Ontologies are formal theories supporting knowledge sharing and reuse. They can be used to explicitly represent semantics of semi-structured information. These enable sophisticated automatic support for acquiring, maintaining and accessing information. Methodology and tools are developed for intelligent access to large volumes of semi-structured and textual information sources in intranet-, extranet-, and internet-based environments to employ the full power of ontologies in supporting knowledge management from the information client perspective and the information provider. The aim of the book is to support efficient and effective knowledge management and focuses on weakly-structured online information sources. It is aimed primarily at researchers in the area of knowledge management and information retrieval and will also be a useful reference for students in computer science at the postgraduate level and for business managers who are aiming to increase the corporations' information infrastructure. The Semantic Web is a very important initiative affecting the future of the WWW that is currently generating huge interest. The book covers several highly significant contributions to the semantic web research effort, including a new language for defining ontologies, several novel software tools and a coherent methodology for the application of the tools for business advantage. It also provides 3 case studies which give examples of the real benefits to be derived from the adoption of semantic-web based ontologies in "real world" situations. As such, the book is an excellent mixture of theory, tools and applications in an important area of WWW research.
    * Provides guidelines for introducing knowledge management concepts and tools into enterprises, to help knowledge providers present their knowledge efficiently and effectively.
    * Introduces an intelligent search tool that supports users in accessing information and a tool environment for maintenance, conversion and acquisition of information sources.
    * Discusses three large case studies which will help to develop the technology according to the actual needs of large and/or virtual organisations and will provide a testbed for evaluating tools and methods.
    The book is aimed at people with at least a good understanding of existing WWW technology and some level of technical understanding of the underpinning technologies (XML/RDF). It will be of interest to graduate students, academic and industrial researchers in the field, and the many industrial personnel who are tracking WWW technology developments in order to understand the business implications. It could also be used to support undergraduate courses in the area but is not itself an introductory text.
    Content
    Inhalt: OIL and DAML + OIL: Ontology Languages for the Semantic Web (pages 11-31) / Dieter Fensel, Frank van Harmelen and Ian Horrocks A Methodology for Ontology-Based Knowledge Management (pages 33-46) / York Sure and Rudi Studer Ontology Management: Storing, Aligning and Maintaining Ontologies (pages 47-69) / Michel Klein, Ying Ding, Dieter Fensel and Borys Omelayenko Sesame: A Generic Architecture for Storing and Querying RDF and RDF Schema (pages 71-89) / Jeen Broekstra, Arjohn Kampman and Frank van Harmelen Generating Ontologies for the Semantic Web: OntoBuilder (pages 91-115) / R. H. P. Engels and T. Ch. Lech OntoEdit: Collaborative Engineering of Ontologies (pages 117-132) / York Sure, Michael Erdmann and Rudi Studer QuizRDF: Search Technology for the Semantic Web (pages 133-144) / John Davies, Richard Weeks and Uwe Krohn Spectacle (pages 145-159) / Christiaan Fluit, Herko ter Horst, Jos van der Meer, Marta Sabou and Peter Mika OntoShare: Evolving Ontologies in a Knowledge Sharing System (pages 161-177) / John Davies, Alistair Duke and Audrius Stonkus Ontology Middleware and Reasoning (pages 179-196) / Atanas Kiryakov, Kiril Simov and Damyan Ognyanov Ontology-Based Knowledge Management at Work: The Swiss Life Case Studies (pages 197-218) / Ulrich Reimer, Peter Brockhausen, Thorsten Lau and Jacqueline R. Reich Field Experimenting with Semantic Web Tools in a Virtual Organization (pages 219-244) / Victor Iosif, Peter Mika, Rikard Larsson and Hans Akkermans A Future Perspective: Exploiting Peer-To-Peer and the Semantic Web for Knowledge Management (pages 245-264) / Dieter Fensel, Steffen Staab, Rudi Studer, Frank van Harmelen and John Davies Conclusions: Ontology-driven Knowledge Management - Towards the Semantic Web? (pages 265-266) / John Davies, Dieter Fensel and Frank van Harmelen
  8. Sennett, R.: Autorität (2008) 0.04
    0.035992794 = product of:
      0.17996396 = sum of:
        0.17996396 = weight(_text_:91 in 3774) [ClassicSimilarity], result of:
          0.17996396 = score(doc=3774,freq=4.0), product of:
            0.25837386 = queryWeight, product of:
              5.5722036 = idf(docFreq=456, maxDocs=44218)
              0.046368346 = queryNorm
            0.69652545 = fieldWeight in 3774, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.5722036 = idf(docFreq=456, maxDocs=44218)
              0.0625 = fieldNorm(doc=3774)
      0.2 = coord(1/5)
    
    Classification
    Soz B 91 / Autorität
    SBB
    Soz B 91 / Autorität
  9. Kageura, K.: ¬The dynamics of terminology : a descriptive theory of term formation and terminological growth (2002) 0.03
    0.033977963 = product of:
      0.056629937 = sum of:
        0.009010308 = weight(_text_:a in 1787) [ClassicSimilarity], result of:
          0.009010308 = score(doc=1787,freq=56.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.1685276 = fieldWeight in 1787, product of:
              7.483315 = tf(freq=56.0), with freq of:
                56.0 = termFreq=56.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1787)
        0.039766792 = weight(_text_:91 in 1787) [ClassicSimilarity], result of:
          0.039766792 = score(doc=1787,freq=2.0), product of:
            0.25837386 = queryWeight, product of:
              5.5722036 = idf(docFreq=456, maxDocs=44218)
              0.046368346 = queryNorm
            0.15391183 = fieldWeight in 1787, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5722036 = idf(docFreq=456, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1787)
        0.007852838 = product of:
          0.015705677 = sum of:
            0.015705677 = weight(_text_:22 in 1787) [ClassicSimilarity], result of:
              0.015705677 = score(doc=1787,freq=2.0), product of:
                0.16237405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046368346 = queryNorm
                0.09672529 = fieldWeight in 1787, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1787)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    The discovery of rules for the systematicity and dynamics of terminology creation is essential for a sound basis of a theory of terminology. This quest provides the driving force for The Dynamics of Terminology, in which Dr. Kageura demonstrates the interaction of these two factors on a specific corpus of Japanese terminology which, beyond the necessary linguistic circumstances, also has a model character for similar studies. His detailed examination of the relationships between terms and their constituent elements, the relationships among the constituent elements and the type of conceptual combinations used in the construction of the terminology permits deep insights into the systematic thought processes underlying term creation. To compensate for the inherent limitation of a purely descriptive analysis of conceptual patterns, Dr. Kageura offers a quantitative analysis of the patterns of the growth of terminology.
    Content
    PART I: Theoretical Background 7 Chapter 1. Terminology: Basic Observations 9 Chapter 2. The Theoretical Framework for the Study of the Dynamics of Terminology 25 PART II: Conceptual Patterns of Term Formation 43 Chapter 3. Conceptual Patterns of Term Formation: The Basic Descriptive Framework 45 Chapter 4. Conceptual Categories for the Description of Formation Patterns of Documentation Terms 61 Chapter 5. Intra-Term Relations and Conceptual Specification Patterns 91 Chapter 6. Conceptual Patterns of the Formation of Documentation Terms 115 PART III: Quantitative Patterns of Terminological Growth 163 Chapter 7. Quantitative Analysis of the Dynamics of Terminology: A Basic Framework 165 Chapter 8. Growth Patterns of Morphemes in the Terminology of Documentation 183 Chapter 9. Quantitative Dynamics in Term Formation 201 PART IV: Conclusions 247 Chapter 10. Towards Modelling Term Formation and Terminological Growth 249 Appendices 273 Appendix A. List of Conceptual Categories 275 Appendix B. Lists of Intra-Term Relations and Conceptual Specification Patterns 279 Appendix C. List of Terms by Conceptual Categories 281 Appendix D. List of Morphemes by Conceptual Categories 295.
    Date
    22. 3.2008 18:18:53
    Footnote
    Rez. in: Knowledge organization 30(2003) no.2, S.112-113 (L. Bowker): "Terminology is generally understood to be the activity that is concerned with the identification, collection and processing of terms; terms are the lexical items used to describe concepts in specialized subject fields. Terminology is not always acknowledged as a discipline in its own right; it is sometimes considered to be a subfield of related disciplines such as lexicography or translation. However, a growing number of researchers are beginning to argue that terminology should be recognized as an autonomous discipline with its own theoretical underpinnings. Kageura's book is a valuable contribution to the formulation of a theory of terminology and will help to establish this discipline as an independent field of research. The general aim of this text is to present a theory of term formation and terminological growth by identifying conceptual regularities in term creation and by laying the foundations for the analysis of terminological growth patterns. The approach used is a descriptive one, which means that it is based on observations taken from a corpus. It is also synchronic in nature and therefore does not attempt to account for the evolution of terms over a given period of time (though it does endeavour to provide a means for predicting possible formation patterns of new terms). The descriptive, corpus-based approach is becoming very popular in terminology circles; however, it does pose certain limitations. To compensate for this, Kageura complements his descriptive analysis of conceptual patterns with a quantitative analysis of the patterns of the growth of terminology. Many existing investigations treat only a limited number of terms, using these for exemplification purposes. Kageura argues strongly (p. 31) that any theory of terms or terminology must be based on the examination of the terminology of a domain (i.e., a specialized subject field) in its entirety since it is only with respect to an individual domain that the concept of "term" can be established. To demonstrate the viability of his theoretical approach, Kageura has chosen to investigate and describe the domain of documentation, using Japanese terminological data. The data in the corpus are derived from a glossary (Wersig and Neveling 1984), and although this glossary is somewhat outdated (a fact acknowledged by the author), the data provided are nonetheless sufficient for demonstrating the viability of the approach, which can later be extended and applied to other languages and domains.
    Unlike some terminology researchers, Kageura has been careful not to overgeneralize the applicability of his work, and he points out the limitations of his study, a number of which are summarized on pages 254-257. For example, Kageura acknowledges that his contribution should properly be viewed as a theory of term formation and terminological growth in the field of documentation. Moreover, Kageura notes that this study does not distinguish the general part and the domain-dependent part of the conceptual system, nor does it fully explore the multidimensionality of the viewpoints of conceptual categorization. Kageura's honesty with regard to the complexity of terminological issues and the challenges associated with the formation of a theory of terminology is refreshing since too often in the past, the results of terminology research have been somewhat naively presented as being absolutely clearcut and applicable in all situations."
  10. Bewußtsein : Beiträge aus der Gegenwartsphilosophie (2005) 0.03
    Classification
    Phi C 91 / Bewußtsein
    Content
    Comments: Metzinger's blue anthology [...] currently offers the most versatile and up-to-date German-language introduction to the topic. Although conceived as a study tool, the volume can also win the interest of wider circles. M. Lenzen, Frankfurter Rundschau. No publication in recent years has introduced the current debate on consciousness so knowledgeably and informatively. H. Breuer, Frankfurter Allgemeine Zeitung. This monumental volume is not merely another anthology on the growing market of books on consciousness research, but an interdisciplinary stocktaking of the philosophical problems connected with current cognition and consciousness research, edited by one of the leading representatives of these efforts [...]; an outstanding book, exciting to read, well founded, and without false promises of being able to resolve the "riddle of consciousness" satisfactorily soon (or ever?). M. von Brück in Dialog der Religionen. All in all, this work is one of the most important books of recent years on the subject of human consciousness. Mind Management. Anyone who wants to say something today on the question of consciousness, and not only from a neurological point of view, will not be able to pass this book by. A. Resch, Grenzgebiete der Wissenschaft. The volume presents the leading authors in this field in a concentration scarcely seen before. This extremely high-calibre collection of texts should have a great impact not only in current philosophy of mind but also in empirical research. R. Schatta in Bundeswehr-Verwaltung. Accompanied by the editor's highly knowledgeable general introduction and several specific introductions to the nine parts of the book, the reader is guided through the landscape of the debate, made familiar with its conceptual foundations, and sent on the tightrope walk between physical and phenomenal realities. A. Ziemke, Psychologie Heute
    SBB
    Phi C 91 / Bewußtsein
  11. Kaushik, S.K.: DDC 22 : a practical approach (2004) 0.03
    Abstract
    A system of library classification that flashed across the inquiring mind of young Melvil Louis Kossuth Dewey (known as Melvil Dewey) in 1873 is still the most popular classification scheme. Modern library classification begins with the Dewey Decimal Classification (DDC), which Melvil Dewey devised in 1876. DDC has to its credit 128 years of boundless success. DDC is taught as a practical subject throughout the world and is used in the majority of libraries in about 150 countries. It is as a result of continuous revision that the 22nd edition of DDC was published in July 2003; no other classification scheme has published so many editions. Some welcome changes have been made in DDC 22. To reduce the Christian bias in 200 Religion, the numbers 201 to 209 have been devoted to specific aspects of religion; in the previous editions these numbers were devoted to Christianity. To enhance the classifier's efficiency, Table 7 has been removed from DDC 22, and the provision for adding groups of persons is made by direct use of notation already available in the schedules and of notation -08 from Table 1 (Standard Subdivisions). The present book is an attempt to explain, with suitable examples, the salient provisions of DDC 22. The book is written in simple language so that students may not face any difficulty in understanding what is being explained. The examples in the book are explained in a step-by-step procedure. It is hoped that this book will prove of great help and use to library professionals in general and library and information science students in particular.
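The table-based number building that the abstract describes (a base number from the schedules plus notation from a table) can be sketched in a few lines. This is an illustrative toy, not DDC software: the base number 020 and the Table 1 notation -092 are assumed examples, and real DDC editing rules (such as dropping trailing zeros before adding a standard subdivision) are not modeled.

```python
# Toy sketch of DDC-style number synthesis (illustration only, not official
# DDC practice; real number building involves additional editing rules).

def build_ddc(base: str, *notations: str) -> str:
    """Append table notations (e.g. -092 from Table 1) to a base number,
    placing the decimal point after the third digit."""
    digits = base + "".join(n.lstrip("-") for n in notations)
    return digits if len(digits) <= 3 else f"{digits[:3]}.{digits[3:]}"

# Assumed example: base 020 plus standard subdivision -092
print(build_ddc("020", "-092"))  # -> 020.092
print(build_ddc("500"))          # a bare base number is returned unchanged
```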
    Content
    1. Introduction to DDC 22 2. Major changes in DDC 22 3. Introduction to the schedules 4. Use of Table 1 : Standard Subdivisions 5. Use of Table 2 : Areas 6. Use of Table 3 : Subdivisions for the arts, for individual literatures, for specific literary forms 7. Use of Table 4 : Subdivisions of individual languages and language families 8. Use of Table 5 : Ethnic and National Groups 9. Use of Table 6 : Languages 10. Treatment of Groups of Persons
    Object
    DDC-22
  12. XML data management : native XML and XML-enabled database systems (2003) 0.03
    Footnote
    Rez. in: JASIST 55(2004) no.1, S.90-91 (N. Rhodes): "The recent near-exponential increase in XML-based technologies has exposed a gap between these technologies and those that are concerned with more fundamental data management issues. This very comprehensive and well-organized book has quite neatly filled the gap, thus achieving most of its stated intentions. The target audiences are database and XML professionals wishing to combine XML with modern database technologies, and such is the breadth of scope of this book that few would not find it useful in some way. The editors have assembled a collection of chapters from a wide selection of industry heavyweights and, as with most books of this type, it exhibits many disparate styles, but thanks to careful editing it reads well as a cohesive whole. Certain sections have already appeared in print elsewhere and there is a deal of corporate flag-waving, but nowhere does it become over-intrusive. The preface provides only the very briefest of introductions to XML but instead sets the tone for the remainder of the book. The twin terms of data- and document-centric XML (Bourret, 2003) that have achieved so much recent currency are reiterated before XML data management issues are considered. It is here that the book's aims are stated, mostly concerned with the approaches and features of the various available XML data management solutions. Not surprisingly, in a specialized book such as this one an introduction to XML consists of a single chapter. For issues such as syntax, DTDs and XML Schemas the reader is referred elsewhere; here, Chris Brandin provides a practical guide to achieving good grammar and style and argues convincingly for the use of XML as an information-modeling tool. Using a well-chosen and simple example, a practical guide to modeling information is developed, replete with examples of the pitfalls.
This brief but illuminating chapter (incidentally available as a "taster" from the publisher's web site) notes that one of the most promising aspects of XML is that applications can be built to use a single mutable information model, obviating the need to change the application code, but that good XML design is the basis of such mutability.
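Bourret's data-centric/document-centric distinction invoked above can be illustrated with two minimal fragments (both invented for this sketch, parsed here with Python's standard library rather than any of the book's database systems):

```python
import xml.etree.ElementTree as ET

# Data-centric XML: regular, fine-grained structure intended for machine
# consumption; fields map naturally onto relational columns.
data_centric = ET.fromstring(
    "<order><id>42</id><item sku='X1' qty='2'/></order>")

# Document-centric XML: mixed content where element order matters, intended
# for human consumption; it resists a flat relational mapping.
doc_centric = ET.fromstring(
    "<para>The <em>cerebral</em> code is discussed in chapter 3.</para>")

print(data_centric.find("item").get("sku"))  # a regular field, easy to shred
print("".join(doc_centric.itertext()))       # mixed content, order-sensitive
```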
    There is some debate over what exactly constitutes a native XML database. Bourret (2003) favors the wider definition; other authors such as the Butler Group (2002) restrict the use of the term to database systems designed and built solely for the storage and manipulation of XML. Two examples of the latter (Tamino and eXist) are covered in detailed chapters here, but also included in this section is the embedded XML database system Berkeley DB XML, considered by makers Sleepycat Software to be "native" in that it is capable of storing XML natively but built on top of the Berkeley DB engine. To the uninitiated, the revelation that schemas and DTDs are not required by either Tamino or eXist might seem a little strange. Tamino implements "loose coupling," where the validation behavior can be set to "strict," "lax" (i.e., apply only to parts of a document) or "skip" (no checking); in eXist, schemas are simply optional. Many DTDs and schemas evolve as the XML documents are acquired, and so these may adhere to slightly different schemas; thus the database should support queries on similar documents that do not share the same structure. In fact, because of the difficulties in mapping between XML and database (especially relational) schemas, native XML databases are very useful for the storage of semi-structured data, a point not made in either chapter. The chapter on embedded databases represents a "third way," being neither native nor of the XML-enabled relational type. These databases run inside purpose-written applications and are accessed via an API or similar, meaning that the application developer does not need to access database files at the operating system level but can rely on supplied routines to, for example, fetch and update database records. Thus, end-users do not use the databases directly; the applications do not usually include ad hoc end-user query tools.
This property renders embedded databases unsuitable for a large number of situations and they have become very much a niche market, but this market is growing rapidly. Embedded databases share an address space with the application, so the overhead of calls to the server is reduced; they also confer advantages in that they are easier to deploy, manage and administer compared to a conventional client-server solution. This chapter is a very good introduction to the subject; primers on generic embedded databases and embedded XML databases are helpfully provided before the author moves to an overview of the Open Source Berkeley system. Building an embedded database application makes far greater demands on the software developer, and the remainder of the chapter is devoted to consideration of these programming issues.
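The embedded-database pattern described above can be sketched with sqlite3, the embedded relational engine in the Python standard library, standing in for Berkeley DB XML (the table layout and document are invented for illustration): the database runs inside the application's address space, with no server process, and the application reaches it only through API calls.

```python
import sqlite3

# Embedded-database sketch: sqlite3 as a stand-in for Berkeley DB XML.
# The engine runs in-process; there is no client-server round trip.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (name TEXT PRIMARY KEY, xml TEXT)")
conn.execute("INSERT INTO docs VALUES (?, ?)",
             ("rec1", "<record><title>XML data management</title></record>"))

# The application fetches records through supplied routines, never by
# touching database files at the operating-system level.
row = conn.execute("SELECT xml FROM docs WHERE name = ?", ("rec1",)).fetchone()
print(row[0])
conn.close()
```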
    Relational database management systems have been one of the great success stories of recent times and, sensitive to the market, most major vendors have responded by extending their products to handle XML data while still exploiting the range of facilities that a modern RDBMS affords. No book of this type would be complete without consideration of the "big three" (Oracle 9i, DB2, and SQL Server 2000, which each get a dedicated chapter), and though occasionally overtly piecemeal and descriptive, the authors all note the shortcomings as well as the strengths of the respective systems. This part of the book is somewhat dichotomous, these chapters being followed by two that propose detailed solutions to somewhat theoretical problems: a generic architecture for storing XML in an RDBMS and using an object-relational approach to building an XML repository. The biography of the author of the latter (Paul Brown) contains the curious but strangely reassuring admission that "he remains puzzled by XML." The next five chapters are in-depth case studies of XML-database applications. Necessarily diverse, few will be interested in all the topics presented, but I was particularly interested in the first case study, on bioinformatics. One of the twentieth century's greatest scientific undertakings was the Human Genome Project, the quest to list the information encoded by the sequence of DNA that makes up our genes, which has been referred to as "a paradigm for information management in the life sciences" (Pearson & Soll, 1991). After a brief introduction to molecular biology to give the background to the information management problems, the authors turn to the use of XML in bioinformatics. Some of the data are hierarchical (e.g., the Linnaean classification of a human as a primate, primates as mammals, mammals are all vertebrates, etc.) but others are far more difficult to model.
The Human Genome Project is virtually complete as far as the data acquisition phase is concerned, and the immense volume of genome sequence data is no longer a very significant information management issue per se. However, bioinformaticians now need to interpret this information. Some data are relatively straightforward, e.g., the positioning of genes and sequence elements (e.g., promoters) within the sequences, but there is often little or no knowledge available on the direct and indirect interactions between them. There are vast numbers of such interrelationships; many complex data types and novel ones are constantly emerging, necessitating an extensible approach and the ability to manage semi-structured data. In the past, object databases such as AceDB (Durbin & Mieg, 1991) have gone some way to meeting these aims, but it is the combination of XML and databases that more completely addresses the knowledge management requirements of bioinformatics. XML is being enthusiastically adopted, with a plethora of XML markup standards being developed; as authors Direen and Jones note, "The unprecedented degree and flexibility of XML in terms of its ability to capture information is what makes it ideal for knowledge management and for use in bioinformatics."
    After several detailed examples of XML, Direen and Jones discuss sequence comparisons. The ability to create scored comparisons by such techniques as sequence alignment is fundamental to bioinformatics. For example, the function of a gene product may be inferred from similarity with a gene of known function but originating from a different organism, and any information modeling method must facilitate such comparisons. One such comparison tool, BLAST, which utilizes a heuristic method, has been the tool of choice for many years and is integrated into the NeoCore XMS (XML Management System) described herein. Any set of sequences that can be identified using an XPath query may thus become the targets of an embedded search. Again examples are given, though a BLASTp (protein) search is labeled as being BLASTn (nucleotide sequence) in one of them. Some variants of BLAST are computationally intensive, e.g., tBLASTx, where a nucleotide sequence is dynamically translated in all six reading frames and compared against similarly translated database sequences. Though these variants are implemented in NeoCore XMS, it would be interesting to see runtimes for such comparisons. Obviously the utility of this and the other four quite specific examples will depend on your interest in the application area, but two that are more research-oriented and general follow them. These chapters (on using XML with inductive databases and on XML warehouses) are both readable critical reviews of their respective subject areas. For those involved in the implementation of performance-critical applications an examination of benchmark results is mandatory; however, very few would examine the benchmark tests themselves. The picture that emerges from this section is that no single set is comprehensive and that some functionalities are not addressed by any available benchmark. As always, there is no substitute for an intimate knowledge of your data and how it is used.
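The XPath-plus-search embedding described above can be sketched generically: a path expression identifies the target sequences, which are then scored against a query sequence. Everything here is an assumption for illustration: the element names are invented, and a naive position-by-position identity score stands in for BLAST's fast heuristic alignment; this is not NeoCore XMS or real BLAST.

```python
import xml.etree.ElementTree as ET

def identity_score(a: str, b: str) -> float:
    """Toy score: fraction of matching positions over the shorter sequence
    (a crude stand-in for a real alignment score)."""
    n = min(len(a), len(b))
    return sum(x == y for x, y in zip(a, b)) / n if n else 0.0

# Invented mini-database of sequences stored as XML.
db = ET.fromstring(
    "<genes>"
    "<gene name='g1'><seq>GATTACA</seq></gene>"
    "<gene name='g2'><seq>CCCCCCC</seq></gene>"
    "</genes>")

query = "GATTTCA"
# The XPath expression selects the set of sequences targeted by the search;
# each hit is then scored against the query.
hits = {g.get("name"): identity_score(query, g.findtext("seq"))
        for g in db.findall(".//gene")}
print(hits)  # g1 scores far higher than g2 against this query
```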
In a direct comparison of an XML-enabled and a native XML database system (unfortunately neither is named), the authors conclude that though the native system has the edge in handling large documents, this comes at the expense of increasing index and data file size. The need to use legacy data and software will certainly favor the all-pervasive XML-enabled RDBMSs such as Oracle 9i and IBM's DB2. Of more general utility is the chapter by Schmauch and Fellhauer comparing the approaches used by database systems for the storing of XML documents. Many of the limitations of current XML-handling systems may be traced to problems caused by the semi-structured nature of the documents, and while the authors have no panacea, the chapter forms a useful discussion of the issues and even raises the ugly prospect that a return to the drawing board may be unavoidable. The book concludes with an appraisal of the current status of XML by the editors that perhaps focuses a little too little on the database side, but overall I believe this book to be very useful indeed. Some of the indexing is a little idiosyncratic; for example, some tags used in the examples are indexed (perhaps a separate examples index would be better) and Ron Bourret's excellent web site might be better placed under "Bourret" rather than under "Ron," but this doesn't really detract from the book's qualities. The broad spectrum and careful balance of theory and practice is a combination that both database and XML professionals will find valuable."
  13. Calvin, W.H.: ¬Die Sprache des Gehirns : Wie in unserem Bewußtsein Gedanken entstehen (2002) 0.03
    Date
    11.11.2002 14:30:22
    Footnote
    Title of the original edition: The cerebral code: thinking a thought in the mosaics of the mind. German edition published by Hanser in 2000
    Theme
    Information
  14. Henrich, A.: Information Retrieval : Grundlagen, Modelle und Anwendungen (2008) 0.03
    Date
    22. 8.2015 21:23:08
  15. Virtuelle Welten im Internet : Tagungsband ; [Vorträge und Diskussionen der Fachkonferenz des Münchner Kreises am 21. November 2007] / [Münchner Kreis] (2008) 0.03
    Classification
    QR 760 Wirtschaftswissenschaften / Gewerbepolitik. Einzelne Wirtschaftszweige / Industrie, Bergbau, Handel, Dienstleistungen, Handwerk / Öffentliche Versorgungseinrichtungen. Elektrizität. Gas. Wasser / Informationsgewerbe (Massenmedien). Post / Neue Medien. Online-Dienste (Internet u. a.)
    Footnote
    Rez. in: Mitt VÖB 62(2009) H.1, S.91-92 (M. Buzinkay)
    RVK
    QR 760 Wirtschaftswissenschaften / Gewerbepolitik. Einzelne Wirtschaftszweige / Industrie, Bergbau, Handel, Dienstleistungen, Handwerk / Öffentliche Versorgungseinrichtungen. Elektrizität. Gas. Wasser / Informationsgewerbe (Massenmedien). Post / Neue Medien. Online-Dienste (Internet u. a.)
  16. Kleinwächter, W.: Macht und Geld im Cyberspace : wie der Weltgipfel zur Informationsgesellschaft (WSIS) die Weichen für die Zukunft stellt (2004) 0.03
    Abstract
    In December 2003 the first phase of the UN World Summit on the Information Society (WSIS) took place in Geneva. The summit, attended by more than 11,000 representatives of governments, the private sector and civil society, negotiated topics such as bridging the digital divide, human rights in the information age, intellectual property rights, cybercrime and Internet governance. This book places the WSIS summit in the historical context of 200 years of international negotiations on the regulation of cross-border communication, from the Carlsbad Decrees of 1819 to the emergence of the Internet. It describes the exciting and controversial debates over how the Internet should be regulated, how human rights can be guaranteed in the information age, how security in cyberspace can be ensured, how intellectual property should be protected and how the digital divide is to be bridged. Kleinwächter leaves no doubt that the Geneva WSIS compromise is no more than the beginning of a long process of shaping the global information society of the future. The second phase of the summit takes place in Tunis in November 2005. The appendix contains the declaration and action plan adopted by the summit as well as the civil society declaration on the future of the information society.
    Date
    20.12.2006 18:22:32
    Isbn
    3-936931-22-4
    LCSH
    World Summit on the Information Society ; Information society ; Digital divide
    Information society
    Subject
    World Summit on the Information Society ; Information society ; Digital divide
    Information society
  17. Net effects : how librarians can manage the unintended consequenees of the Internet (2003) 0.03
    Abstract
    In this collection of nearly 50 articles written by librarians, computer specialists, and other information professionals, the reader finds 10 chapters, each devoted to a problem or a side effect that has emerged since the introduction of the Internet: control over selection, survival of the book, training users, adapting to users' expectations, access issues, cost of technology, continuous retraining, legal issues, disappearing data, and how to avoid becoming blindsided. After stating a problem, each chapter offers solutions that are subsequently supported by articles. The editor's comments, which appear throughout the text, are an added bonus, as are the sections concluding the book, among them a listing of useful URLs, a works-cited section, and a comprehensive index. This book has much to recommend it, especially the articles, which are not only informative, thought-provoking, and interesting but highly readable and accessible as well. An indispensable tool for all librarians.
    Footnote
    Rez. in: JASIST 55(2004) no.11, S.1025-1026 (D.E. Agosto): "Did you ever feel as though the Internet has caused you to lose control of your library?" So begins the introduction to this volume of over 50 articles, essays, library policies, and other documents from a variety of sources, most of which are library journals aimed at practitioners. Volume editor Block has a long history of library service as well as an active career as an online journalist. From 1977 to 1999 she was the Associate Director of Public Services at the St. Ambrose University library in Davenport, Iowa. She was also a Fox News Online weekly columnist from 1998 to 2000. She currently writes for and publishes the weekly ezine Exlibris, which focuses on the use of computers, the Internet, and digital databases to improve library services. Despite the promising premise of this book, the final product is largely a disappointment because of the superficial coverage of its issues. A listing of the most frequently represented sources serves to express the general level and style of the entries: nine articles are reprinted from Computers in Libraries, five from Library Journal, four from Library Journal NetConnect, four from ExLibris, four from American Libraries, three from College & Research Libraries News, two from Online, and two from The Chronicle of Higher Education. Most of the authors included contributed only one item, although Roy Tennant (manager of the California Digital Library) authored three of the pieces, and Janet L. Balas (library information systems specialist at the Monroeville Public Library in Pennsylvania) and Karen G. Schneider (coordinator of lii.org, the Librarians' Index to the Internet) each wrote two. Volume editor Block herself wrote six of the entries, most of which have been reprinted from ExLibris. Reading the volume is much like reading an issue of one of these journals: a pleasant experience that discusses issues in the field without presenting much research.
Net Effects doesn't offer much in the way of theory or research, but then again it doesn't claim to. Instead, it claims to be an "idea book" (p. 5) with practical solutions to Internet-generated library problems. While the idea is a good one, little of the material is revolutionary or surprising (or even very creative), and most of the solutions offered will already be familiar to most of the book's intended audience.
    Unlike much of the professional library literature, Net Effects is not an open-armed embrace of technology. Block even suggests that it is helpful to have a Luddite or two on each library staff to identify the setbacks associated with technological advances in the library. Each of the book's 10 chapters deals with one Internet-related problem, such as "Chapter 4 - The Shifted Librarian: Adapting to the Changing Expectations of Our Wired (and Wireless) Users," or "Chapter 8 - Up to Our Ears in Lawyers: Legal Issues Posed by the Net." For each of these 10 problems, multiple solutions are offered. For example, for "Chapter 9 - Disappearing Data," four solutions are offered. These include "Link-checking," "Have a technological disaster plan," "Advise legislators on the impact proposed laws will have," and "Standards for preservation of digital information." One article is given to explicate each of these four solutions. A short bibliography of recommended further reading is also included for each chapter. Block provides a short introduction to each chapter, and she comments on many of the entries. Some of these comments seem to be intended to provide a research basis for the proposed solutions, but they tend to be vague generalizations without citations, such as, "We know from research that students would rather ask each other for help than go to adults. We can use that" (p. 91). The original publication dates of the entries range from 1997 to 2002, with the bulk falling into the 2000-2002 range. At up to 6 years old, some of the articles seem outdated, such as a 2000 news brief announcing the creation of the first "customizable" public library Web site (www.brarydog.net). These critiques are not intended to dismiss the volume entirely. Some of the entries are likely to find receptive audiences, such as a nuts-and-bolts instructive article for making Web sites accessible to people with disabilities. "Providing Equitable Access," by Cheryl H.
Kirkpatrick and Catherine Buck Morgan, offers very specific instructions, such as how to renovate OPAL workstations to suit users with "a wide range of functional impairments." It also includes a useful list of 15 things to do to make a Web site readable to most people with disabilities, such as, "You can use empty (alt) tags (alt="") for images that serve a purely decorative function. Screen readers will skip empty (alt) tags" (p. 157). Information at this level of specificity can be helpful to those who are faced with creating a technological solution for which they lack sufficient technical knowledge or training.
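The empty-alt rule quoted above can be shown in two lines of markup (a hypothetical sketch; the image names and alt text are invented, not taken from the article):

```
<!-- Decorative image: the empty alt attribute tells screen readers to skip it. -->
<img src="divider.gif" alt="">
<!-- Meaningful image: descriptive alt text is read aloud in place of the image. -->
<img src="floorplan.gif" alt="Floor plan of the first-floor reading room">
```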
    Some of the pieces are more captivating than others and less "how-to" in nature, providing contextual discussions as well as pragmatic advice. For example, Darlene Fichter's "Blogging Your Life Away" is an interesting discussion about creating and maintaining blogs. (For those unfamiliar with the term, blogs are frequently updated Web pages that list thematically tied annotated links or lists, such as a blog of "Great Websites of the Week" or of "Fun Things to Do This Month in Patterson, New Jersey.") Fichter's article includes descriptions of sample blogs and a comparison of commercially available blog creation software. Another article of note is Kelly Broughton's detailed account of her library's experiences in initiating Web-based reference in an academic library. "Our Experiment in Online Real-Time Reference" details the decisions and issues that the Jerome Library staff at Bowling Green State University faced in setting up a chat reference service. It might be useful to those finding themselves in the same situation. This volume is at its best when it eschews pragmatic information and delves into the deeper, less ephemeral library-related issues created by the rise of the Internet and of the Web. One of the most thought-provoking topics covered is the issue of "the serials pricing crisis," or the increase in subscription prices to journals that publish scholarly work. The pros and cons of moving toward a more free-access Web-based system for the dissemination of peer-reviewed material and of using university Web sites to house scholars' other works are discussed. However, deeper discussions such as these are few, leaving the volume subject to rapid aging, and leaving it with an audience limited to librarians looking for fast technological fixes."
    Imprint
    Medford, NJ : Information Today
  18. Knowledge management in practice : connections and context. (2008) 0.03
    BK
    85.20 / Betriebliche Information und Kommunikation
    Classification
    658.4/038 22
    85.20 / Betriebliche Information und Kommunikation
    Date
    22. 3.2009 18:43:51
    DDC
    658.4/038 22
    Footnote
    Rez. in: JASIST 60(2009) no.3, S.642 (A.E. Prentice): "What is knowledge management (KM)? How do we define it? How do we use it and what are the benefits? KM is still an operational discipline that has yet to have an academic foundation. Its core has yet to solidify and concepts and practices remain fluid, making it difficult to discuss or even to identify the range of relevant elements. Being aware of this lack of a well-structured retrievable disciplinary literature, the editors made a practice of attending trade shows and conferences attended by KM professionals to look for presentations that would in some way expand knowledge of the field. They asked presenters to turn their paper into a book chapter, which is the major source of the material in this book. Although this is a somewhat chancy method of identifying authors and topics, several of the papers are excellent and a number add to an understanding of KM. Because of the fluidity of the area of study, the editors devised a three-dimensional topic expansion approach to the content so that the reader can follow themes in the papers that would not have been easy to do if one relied solely on the table of contents. The table of contents organizes the presentations into eight subject sections, each section with a foreword that introduces the topic and indicates briefly the contribution of each chapter to the overall section title. Following this, the Roadmap lists 18 topics or themes that appear in the book and relevant chapters where information on the theme can be found. Readers have the choice of following themes using the roadmap or of reading the book section by section. ..."
    Imprint
    Medford, NJ : Information Today
  19. XML in libraries (2002) 0.03
    Footnote
    Rez. in: JASIST 55(2004) no.14, S.1304-1305 (Z. Holbrooks): "The eXtensible Markup Language (XML) and its family of enabling technologies (XPath, XPointer, XLink, XSLT, et al.) were the new "new thing" only a couple of years ago. Happily, XML is now a W3C standard, and its enabling technologies are rapidly proliferating and maturing. Together, they are changing the way data is handled on the Web, how legacy data is accessed and leveraged in corporate archives, and offering the Semantic Web community a powerful toolset. Library and information professionals need a basic understanding of what XML is, and what its impacts will be on the library community as content vendors and publishers convert to the new standards. Norman Desmarais aims to provide librarians with an overview of XML and some potential library applications. The ABCs of XML contains the useful basic information that most general XML works cover. It is addressed to librarians, as evidenced by the occasional reference to periodical vendors, MARC, and OPACs. However, librarians without SGML, HTML, database, or programming experience may find the work daunting. The snippets of code, most incomplete and unaccompanied by screenshots to illustrate the result of the code's execution, obscure more often than they enlighten. A single code sample (p. 91, a book purchase order) is immediately recognizable and sensible. There are no figures, illustrations, or screenshots. Subsection headings are used conservatively. Readers are confronted with page after page of unbroken technical text, and occasionally oddly formatted text (in some of the code samples). The author concentrates on commercial products and projects. Library and agency initiatives, for example the National Institutes of Health HL-7 and the U.S. Department of Education's GEM project, are notable for their absence.
The Library of Congress USMARC to SGML effort is discussed in chapter 1, which covers the relationship of XML to its parent SGML, the XML processor, and document type definitions, using MARC as its illustrative example. Chapter 3 addresses the stylesheet options for XML, including DSSSL, CSS, and XSL. The Document Style Semantics and Specification Language (DSSSL) was created for use with SGML, and pruned into DSSSL-Lite and further (DSSSL-online). Cascading Style Sheets (CSS) were created for use with HTML. Extensible Style Language (XSL) is a further revision (and extension) of DSSSL-online specifically for use with XML. Discussion of aural stylesheets and Synchronized Multimedia Integration Language (SMIL) rounds out the chapter.
    Chapter 4 introduces XML internal and external pointing and linking technologies. XML Link Language (XLL, now XLink) provides unidirectional, multi-ended, and typed linking. XPointer, used with XLink, provides addressing into the interior of XML documents. XPath operates on the logical structure of an XML document, creating a tree of nodes. Used with both XPointer and XSLT, it permits operations on strings, numbers, and Boolean expressions in the document. The final chapter, "Getting Started," argues for the adoption of a tool for XML production. The features and functionality of various tools for content development, application development, databases, and schema development provide an introduction to some of the available options. Roy Tennant is well known in the library community as an author (his column "Digital Libraries" has appeared in Library Journal since 1997 and he has published Current Cites each month for more than a decade), an electronic discussion list manager (Web4Lib and XML4Lib), and as the creator and manager of UC/Berkeley's Digital Library SunSITE. Librarians have wondered what use they might make of XML since its beginnings. Tennant suggests one answer: "The Extensible Markup Language (XML) has the potential to exceed the impact of MARC on librarianship. While MARC is limited to bibliographic description - and arguably a subset at that, as any archivist will tell you - XML provides a highly effective framework for encoding anything from a bibliographic record for a book to the book itself." (Tennant, p. vii) This slim paperback volume offers librarians and library managers concerned with automation projects "show and tell" examples of XML technologies used as solutions to everyday tasks and challenges. What distinguishes this work is the editor and contributors' commitment to providing messy details. This book's target audience is technically savvy.
While not a "cookbook" per se, the information provided on each project serves as a draft blueprint complete with acronyms and jargon. The inclusion of "lessons learned" (including failures as well as successes) is refreshing and commendable. Experienced IT and automation project veterans will appreciate the technical specifics more fully than the general reader.
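The review's sketch of XPath, a tree of nodes addressed by path expressions, can be illustrated with Python's standard library. This is a minimal sketch: the catalog document below is invented for illustration, and ElementTree implements only a small subset of XPath.

```python
# XPath treats an XML document as a tree of nodes and selects parts
# of it with path expressions. xml.etree.ElementTree supports a
# limited XPath subset, enough to show the idea.
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<catalog>"
    "<book id='b1'><title>XML in Libraries</title><year>2002</year></book>"
    "<book id='b2'><title>Learning XML</title><year>2003</year></book>"
    "</catalog>"
)

# Select every <title> element anywhere under the root.
titles = [t.text for t in doc.findall(".//title")]

# Select the <book> node whose id attribute is 'b2'.
book = doc.find(".//book[@id='b2']")

print(titles)                 # ['XML in Libraries', 'Learning XML']
print(book.findtext("year"))  # 2003
```

A full XPath 1.0 processor adds string, number, and Boolean functions on top of this node addressing, which is what XSLT relies on.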
    Tennant's collection covers a variety of well- and lesser-known XML-based pilot and prototype projects undertaken by libraries around the world. Some of the projects included are: Stanford's XMLMARC conversion, Oregon State's use of XML in interlibrary loaning, e-books (California Digital Library) and electronic scholarly publishing (University of Michigan), the Washington Research Library Consortium's XML-based Web Services, and using TEI Lite to support indexing (Halton Hills Public Library). Of the 13 projects presented, nine are sited in academe, three are state library endeavors, and one is an American public library initiative. The projects are gathered into sections grouped by seven library applications: the use of XML in library catalog records, interlibrary loan, cataloging and indexing, collection building, databases, data migration, and systems interoperability. Each project is introduced with a few paragraphs of background information. The project reports-averaging about 13 pages each-include project goals and justification, project description, challenges and lessons learned (successes and failures), future plans, implications of the work, contact information for individual(s) responsible for the project, and relevant Web links and resources. The clear strengths of this collection are in the details and the consistency of presentation. The concise project write-ups flow well and encourage interested readers to follow-up via personal contacts and URLs. The sole weakness is the price. XML in Libraries will excite and inspire institutions and organizations with technically adept staff resources and visionary leaders. Erik Ray has written a how-to book. Unlike most, Learning XML is not aimed at the professional programming community. The intended audience is readers familiar with a structured markup (HTML, TEX, etc.) and Web concepts (hypertext links, data representation). 
In the first six chapters, Ray introduces XML's main concepts and tools for writing, viewing, testing, and transforming XML (chapter 1), describes basic syntax (chapter 2), discusses linking with XLink and XPointer (chapter 3), introduces Cascading Style Sheets for use with XML (chapter 4), explains document type definitions (DTDs) and schemas (chapter 5), and covers XSLT stylesheets and XPath (chapter 6). Chapter 7 introduces Unicode, internationalization, and language support, including CSS and XSLT encoding. Chapter 8 is an overview of writing software for processing XML, and includes the Perl code for an XML syntax checker. This work is written very accessibly for nonprogrammers. Writers, designers, and students just starting to acquire Web technology skills will find Ray's style approachable. Concepts are introduced in a logical flow, and explained clearly. Code samples (130+), illustrations and screen shots (50+), and numerous tables are distributed throughout the text. Ray uses a modified DocBook DTD and a checkbook example throughout, introducing concepts in early chapters and adding new concepts to them. Readers become familiar with the code and its evolution through repeated exposure. The code for converting the "barebones DocBook" DTD (10 pages of code) to HTML via an XSLT stylesheet occupies 19 pages. Both code examples allow the learner to see an accumulation of snippets incorporated into a sensible whole. While experienced programmers might not need this type of support, nonprogrammers certainly do. Using the checkbook example is an inspired choice: Most of us are familiar with personal checking, even if few of us would build an XML application for it. Learning XML is an excellent textbook. I've used it for several years as a recommended text for adult continuing education courses and workshops."
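A checkbook register of the kind the review describes can be approximated along these lines (a hypothetical sketch: the element names and data are invented, not taken from Ray's book):

```python
# A toy checkbook register as XML, parsed with the standard library.
import xml.etree.ElementTree as ET

register = ET.fromstring(
    "<checkbook>"
    "<check number='101'><payee>Library Fund</payee><amount>25.00</amount></check>"
    "<check number='102'><payee>Bookstore</payee><amount>14.50</amount></check>"
    "</checkbook>"
)

# Walk the direct <check> children and collect payees and the total.
payees = [c.findtext("payee") for c in register.findall("check")]
total = sum(float(c.findtext("amount")) for c in register.findall("check"))

print(payees)  # ['Library Fund', 'Bookstore']
print(total)   # 39.5
```

The appeal of such a running example is exactly what the reviewer notes: the domain is familiar, so the reader's attention stays on the markup and its processing.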
  20. Learning XML (2003) 0.03
    Footnote
    Rez. in: JASIST 55(2004) no.14, S.1304-1305 (Z. Holbrooks).
