Search (215 results, page 1 of 11)

  • × type_ss:"m"
  • × year_i:[2000 TO 2010}
  1. Survey of text mining : clustering, classification, and retrieval (2004) 0.08
    0.07873234 = product of:
      0.11809851 = sum of:
        0.08966068 = weight(_text_:systematic in 804) [ClassicSimilarity], result of:
          0.08966068 = score(doc=804,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.31573826 = fieldWeight in 804, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=804)
        0.028437834 = product of:
          0.05687567 = sum of:
            0.05687567 = weight(_text_:indexing in 804) [ClassicSimilarity], result of:
              0.05687567 = score(doc=804,freq=4.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.29905218 = fieldWeight in 804, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=804)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
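The score breakdowns shown for each result are Lucene "explain" trees for ClassicSimilarity (TF-IDF). A minimal Python sketch, assuming the standard ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1))), reproduces the numbers of the first result:

```python
import math

def classic_similarity_weight(freq, doc_freq, max_docs, field_norm, query_norm):
    """ClassicSimilarity clause weight: queryWeight * fieldWeight."""
    tf = math.sqrt(freq)                             # tf(freq) = sqrt(termFreq)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # idf(docFreq, maxDocs)
    query_weight = idf * query_norm                  # queryWeight = idf * queryNorm
    field_weight = tf * idf * field_norm             # fieldWeight = tf * idf * fieldNorm
    return query_weight * field_weight

# Values copied from the explain tree for result 1 (doc 804):
w_systematic = classic_similarity_weight(2.0, 395, 44218, 0.0390625, 0.049684696)
w_indexing = classic_similarity_weight(4.0, 2614, 44218, 0.0390625, 0.049684696) * 0.5  # coord(1/2)

score = (w_systematic + w_indexing) * (2.0 / 3.0)  # coord(2/3): 2 of 3 query clauses matched
print(f"{score:.6f}")  # → 0.078732
```

The coord factors (2/3, 1/2) down-weight documents that match only some of the query's clauses; here two of three top-level clauses matched.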
    
    Abstract
    Extracting content from text continues to be an important research problem for information processing and management. Approaches to capture the semantics of text-based document collections may be based on Bayesian models, probability theory, vector space models, statistical models, or even graph theory. As the volume of digitized textual media continues to grow, so does the need for designing robust, scalable indexing and search strategies (software) to meet a variety of user needs. Knowledge extraction or creation from text requires systematic yet reliable processing that can be codified and adapted for changing needs and environments. This book will draw upon experts in both academia and industry to recommend practical approaches to the purification, indexing, and mining of textual information. It will address document identification, clustering and categorizing documents, cleaning text, and visualizing semantic models of text.
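The "vector space models" the abstract mentions reduce, in the simplest case, to representing documents as term-frequency vectors and comparing them by cosine similarity. A toy sketch (hypothetical five-term vocabulary and invented snippets, not data from the book):

```python
import math
from collections import Counter

def tf_vector(text, vocabulary):
    """Raw term-frequency vector over a fixed toy vocabulary."""
    counts = Counter(text.lower().split())
    return [counts[term] for term in vocabulary]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Hypothetical vocabulary and document snippets, purely for illustration:
vocab = ["text", "mining", "clustering", "retrieval", "graph"]
d1 = tf_vector("text mining text clustering", vocab)
d2 = tf_vector("text retrieval clustering", vocab)
print(round(cosine(d1, d2), 3))  # → 0.707
```

Real systems weight these vectors (e.g., by TF-IDF) and normalize them, but the clustering and retrieval techniques the book surveys build on this basic representation.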
  2. Haravu, L.J.: Lectures on knowledge management : paradigms, challenges and opportunities (2002) 0.05
    0.047876112 = product of:
      0.071814165 = sum of:
        0.06339968 = weight(_text_:systematic in 2048) [ClassicSimilarity], result of:
          0.06339968 = score(doc=2048,freq=4.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.22326067 = fieldWeight in 2048, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2048)
        0.008414488 = product of:
          0.016828977 = sum of:
            0.016828977 = weight(_text_:22 in 2048) [ClassicSimilarity], result of:
              0.016828977 = score(doc=2048,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.09672529 = fieldWeight in 2048, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=2048)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Footnote
Rez. in: Knowledge organization 30(2003) no.1, S.42-44 (D. Mercier): "This work is a collection of lecture notes following the 22nd Sarada Ranganathan Endowment Lectures, which took place in Bangalore, India, from 4-6 December 2000. This compilation has been divided into four sections: a historical introduction, a compilation of several definitions of knowledge and its management, the impacts of knowledge management (KM) on information professionals, and a review of information technologies as tools for knowledge management. The aim of this book is to provide "a succinct overview of various aspects of knowledge management, particularly in companies" (p. v). Each chapter focuses on a dominant text in a specific area. Most of the quoted authors are well-known KM consultants. Each chapter is handled similarly: a review of a dominant book, some subject matter from a few other consultants and, last but not least, comments on a few broadly cited cases. The chapters are uneven with regard to the level of detail provided, and ending summaries, which would have been useful, are missing. The book is structured in two parts of five chapters each. The first part is theoretical; the second deals with knowledge workers and technologies. Haravu begins the first chapter with a historical overview of information and knowledge management (IKM), essentially based on the review previously made by Drucker (1999). Haravu emphasises the major facts and events of the discipline from the industrial revolution up to the advent of the knowledge economy. On the whole, this book is largely technology-oriented. The lecturer presents micro-economic factors contributing to the economic perspective of knowledge management, focusing on existing explicit knowledge. This is Haravu's prevailing perspective. He then offers a compilation of definitions from Allee (1997) and Sveiby (1997), both known for their contributions in the area of knowledge evaluation. 
Like many others, Haravu confirms his assumption regarding the distinction between information and knowledge, and the knowledge categories: explicit and tacit, both action-oriented and supported by rules (p. 43). The SECI model (Nonaka & Takeuchi, 1995), also known as the "knowledge conversion spiral", is described briefly, and the theoretical, relational dimension between individuals and collectivities is explained. Three SECI-linked concepts appear to be missing: contexts in movement, intellectual assets and leadership.
Haravu makes a rather original analogy with Ranganathan's theory of the "spiral of subjects development". This will be of particular interest for those working in knowledge organisation. The last third of this chapter covers Allee's "Knowledge Complexity Framework", defining the knowledge archetype, the learning and performance framework, and twelve principles of knowledge management (p. 55-66). In the third chapter, Haravu first describes extensively KM's interdisciplinary features and its contributing disciplines (and technologies): cognitive science, expert systems, artificial intelligence, knowledge-based systems, computer-supported collaborative work, library and information science, technical writing, document management, decision support systems, semantic networks, relational and object databases, simulation and organisational science. This combination of disciplines and technologies is aligned with the systematic approach chosen in the first chapter. After a combined definition of knowledge management (Malhotra, 1998; Sveiby, 1997), Haravu surveys three specific approaches within the economic perspective of knowledge: core competency (Godbout, 1998), leveraging and managing intangible assets (Sveiby, 1997), and expanding an organisation's capacity to learn and share knowledge (Allee, 1997). He then describes again Sveiby's and Allee's frameworks, borrowing largely from Sveiby's "six KM strategies" (p. 101). For each approach, he summarizes a case study from the reviewed authors. The final section is a summary of broadly cited case studies (Buckman Laboratories and Hoffmann-La Roche). On a practical basis, Haravu underlines the impacts of KM practices on knowledge workers, particularly information professionals. The major activity of information professionals is adding value to information: filtering, validating, analysing, synthesising, presenting and providing facilities for access and use. 
Leadership in knowledge management processes is treated only briefly. At the end of this chapter, the author describes the core competencies information professionals require in organisational knowledge management and refers to the Andersen Consulting and Chevron cases. From this perspective, new collaborative roles in KM for information professionals are omitted.
On the other hand, from the economic perspective of knowledge management, the role of technology is dominant. The last chapter presents, in detail, tools and technologies used by, or potentially useful to, KM practitioners. This chapter discusses the Tiwana (2000) framework and cases. This framework has several meta-component categories: knowledge flow, information mapping, information sources, information and knowledge exchange, and intelligent agent and network mining. In summarizing the Tiwana (2000) study, Haravu gives generic characteristics of the most prevalent tools. To downplay the predominance of technologies, Haravu concludes his book with a discussion of three KM technology myths. This compilation of notes is a real patchwork with some sewing mistakes. To read and understand it better, one would have to rewrite a detailed table of contents, since numbering errors and inconsistencies appear in all the chapters. The level of detail differs in each chapter. As one reads along, many details are repeated. Bibliographic references are incomplete and there are no citations for figures or tables. This book looks like a draft companion for those who attended the lectures, and it is not clear why it became available as late as two years after the event. KM is a new discipline in constant evolution; in contrast, the book seems to be a demonstration of a mature and stable discipline. In this publication, Haravu fails to display the plurality of paradigmatic KM dimensions, challenges and opportunities. The compilation is not original and reflects the very traditional style of the first generation of KM specialists. Following thousands of books and articles written about KM, this compilation still shows a systematic or economic perspective of KM, in which the systemic approach is omitted and KM duality ignored. Annotated bibliographies are to be preferred to Haravu's patchwork."
  3. Culture and identity in knowledge organization : Proceedings of the Tenth International ISKO Conference 5-8 August 2008, Montreal, Canada (2008) 0.04
    0.04149659 = product of:
      0.062244885 = sum of:
        0.04483034 = weight(_text_:systematic in 2494) [ClassicSimilarity], result of:
          0.04483034 = score(doc=2494,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.15786913 = fieldWeight in 2494, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2494)
        0.017414546 = product of:
          0.03482909 = sum of:
            0.03482909 = weight(_text_:indexing in 2494) [ClassicSimilarity], result of:
              0.03482909 = score(doc=2494,freq=6.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.1831313 = fieldWeight in 2494, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=2494)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Content
MULTILINGUAL AND MULTICULTURAL ENVIRONMENTS K. S. Raghavan and A. Neelameghan. Design and Development of a Bilingual Thesaurus for Classical Tamil Studies: Experiences and Issues. - Elaine Menard. Indexing and Retrieving Images in a Multilingual World. - Maria Odaisa Espinheiro de Oliveira. Knowledge Representation Focusing Amazonian Culture. - Agnes Hajdu Barát. Knowledge Organization in the Cross-cultural and Multicultural Society. - Joan S. Mitchell, Ingebjorg Rype and Magdalena Svanberg. Mixed Translation Models for the Dewey Decimal Classification (DDC) System. - Kathryn La Barre. Discovery and Access Systems for Websites and Cultural Heritage Sites: Reconsidering the Practical Application of Facets. - Mats Dahlström and Joacim Hansson. On the Relation Between Qualitative Digitization and Library Institutional Identity. - Amelia Abreu. "Every Bit Informs Another": Framework Analysis for Descriptive Practice and Linked Information. - Jenn Riley. Moving from a Locally-developed Data Model to a Standard Conceptual Model. - Jan Pisanski and Maja Zumer. How Do Non-librarians See the Bibliographic Universe?
KNOWLEDGE ORGANIZATION FOR INFORMATION MANAGEMENT AND RETRIEVAL Sabine Mas, L'Hedi Zaher and Manuel Zacklad. Design and Evaluation of a Multi-viewed Knowledge System for Administrative Electronic Document Organization. - Xu Chen. The Influence of Existing Consistency Measures on the Relationship Between Indexing Consistency and Exhaustivity. - Michael Buckland and Ryan Shaw. 4W Vocabulary Mapping Across Diverse Reference Genres. - Abdus Sattar Chaudhry and Christopher S. G. Khoo. A Survey of the Top-level Categories in the Structure of Corporate Websites. - Nicolas L. George, Elin K. Jacob, Lijiang Guo, Lala Hajibayova and M Yasser Chuttur. A Case Study of Tagging Patterns in del.icio.us. - Kwan Yi and Lois Mai Chan. A Visualization Software Tool for Library of Congress Subject Headings. - Gercina Angela Borem Oliveira Lima. Hypertext Model - HTXM: A Model for Hypertext Organization of Documents. - Ali Shiri and Thane Chambers. Information Retrieval from Digital Libraries: Assessing the Potential Utility of Thesauri in Supporting Users' Search Behaviour in an Interdisciplinary Domain. - Verónica Vargas and Catalina Naumis. Water-related Language Analysis: The Need for a Thesaurus of Mexican Terminology. - Amanda Hill. What's in a Name?: Prototyping a Name Authority Service for UK Repositories. - Rick Szostak and Claudio Gnoli. Classifying by Phenomena, Theories and Methods: Examples with Focused Social Science Theories.
EPISTEMOLOGICAL FOUNDATIONS OF KNOWLEDGE ORGANIZATION H. Peter Ohly. Knowledge Organization Pro and Retrospective. - Judith Simon. Knowledge and Trust in Epistemology and Social Software/Knowledge Technologies. - D. Grant Campbell. Derrida, Logocentrism, and the Concept of Warrant on the Semantic Web. - Jian Qin. Controlled Semantics Versus Social Semantics: An Epistemological Analysis. - Hope A. Olson. Wind and Rain and Dark of Night: Classification in Scientific Discourse Communities. - Thomas M. Dousa. Empirical Observation, Rational Structures, and Pragmatist Aims: Epistemology and Method in Julius Otto Kaiser's Theory of Systematic Indexing. - Richard P. Smiraglia. Noesis: Perception and Every Day Classification. - Birger Hjørland. Deliberate Bias in Knowledge Organization? - Joseph T. Tennis and Elin K. Jacob. Toward a Theory of Structure in Information Organization Frameworks. - Jack Andersen. Knowledge Organization as a Cultural Form: From Knowledge Organization to Knowledge Design. - Hur-Li Lee. Origins of the Main Classes in the First Chinese Bibliographic Classification. NON-TEXTUAL MATERIALS Abby Goodrum, Ellen Hibbard, Deborah Fels and Kathryn Woodcock. The Creation of Keysigns American Sign Language Metadata. - Ulrika Kjellman. Visual Knowledge Organization: Towards an International Standard or a Local Institutional Practice?
  4. Nuovo soggettario : guida al sistema italiano di indicizzazione per soggetto, prototipo del thesaurus (2007) 0.04
    0.039076358 = product of:
      0.058614537 = sum of:
        0.03586427 = weight(_text_:systematic in 664) [ClassicSimilarity], result of:
          0.03586427 = score(doc=664,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.1262953 = fieldWeight in 664, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.015625 = fieldNorm(doc=664)
        0.022750268 = product of:
          0.045500536 = sum of:
            0.045500536 = weight(_text_:indexing in 664) [ClassicSimilarity], result of:
              0.045500536 = score(doc=664,freq=16.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.23924173 = fieldWeight in 664, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.015625 = fieldNorm(doc=664)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Footnote
Rez. in: Knowledge organization 34(2007) no.1, S.58-60 (P. Buizza): "This Nuovo soggettario is the first sign of subject indexing renewal in Italy. Italian subject indexing has until now been based on the Soggettario per i cataloghi delle biblioteche italiane (Firenze, 1956), a list of preferred terms and see references, with suitable hierarchical subdivisions and cross references, derived from the subject catalogue of the National Library in Florence (BNCF). New headings later used in the Bibliografia nazionale italiana (BNI) were added without references and indeed without any real maintenance. Systematic instructions on how to combine the terms are lacking: the indexer using this instrument is obliged to infer the order of terms absent from the lists by consulting analogous entries. Italian libraries are suffering from the limits of this subject catalogue: the vocabulary is inadequate, obsolete and inconsistent, the syndetic structure incomplete and inaccurate, and the syntax ill-defined, poorly explained and unable to reflect complex subjects. In the nineties, the Subject Indexing Research Group (Gruppo di ricerca sull'indicizzazione per soggetto, GRIS) of the AIB (Italian Library Association) developed indexing theory and some principles of PRECIS, and drew up guidelines based on consistent principles for vocabulary, semantic relationships and subject string construction, the latter according to role syntax (Guida 1997). In overhauling the Soggettario, the National Library in Florence aimed at a comprehensive indexing system. A report on the method and evolution of the work has been published in Knowledge Organization (Lucarelli 2005), while the feasibility study is available in Italian (Per un nuovo Soggettario 2002). Any usable terms from the old Soggettario will be transferred to the new system, while taking into consideration international norms and interlinguistic compatibility, as well as applications outside the immediate library context. 
The terms will be accessible via a suitable OPAC operating on the most advanced software.
An entry is structured so as to present all the essential elements of the indexing system. Each term is given its category, facet, related terms, Dewey interdisciplinary class number and, if necessary, a definition or scope notes. Sources used are referenced (an appendix in the book lists those used in the current work). Historical notes indicate whenever a change of term has occurred, thus smoothing the transition from the old lists. In chapter 5, the longest one, detailed instructions with practical examples show how to create entries and how to relate terms; upper relationships must always be complete, right up to the top term, whereas hierarchies of related terms not yet fully developed may remain unfinished. Subject string construction consists of a double operation: analysis and synthesis. The former is the analysis of the logical functions performed by single concepts in the definition of the subject (e.g., transitive actions, object, agent, etc.) or in syntactic relationships (transitive relationships and the belonging relationship), so that each term for those concepts is assigned its role (e.g., key concept, transitive element, agent, instrument, etc.) in the subject string, where the core is distinct from the complementary roles (e.g., place, time, form, etc.). Synthesis is based on a scheme of nuclear and complementary roles, and citation order follows agreed-upon principles of one-to-one relationships and logical dependence. There is no standard citation order based on facets, in a categorial logic, but a flexible, yet thorough, one. For example, it is possible for a time term (subdivision) to precede an action term, when the former is related to the latter as the object of the action: "Arazzi - Sec. 16.-17. - Restauro" [Tapestry - 16th-17th century - Restoration] (p. 126). 
So, even with more complex subjects, it is possible to produce perfectly readable strings covering the whole of the subject matter without splitting it into two incomplete and complementary headings. To this end, some unusual connectives are adopted, giving the strings a more discursive style.
The thesaurus software is based on AgroVoc (http://www.fao.org/aims/ag_intro.htm), provided by the FAO, but in modified form. Many searching options and contextualization within the full hierarchies are possible, so that the choice of morphology and syntax of terms and strings is made easier by the complete overview of semantic relationships. New controlled terms will be available soon, thanks to the work in progress - there are now 13,000 terms, of which 40 percent are non-preferred. In three months, free Internet access by CD-ROM will cease and a subscription will be needed. The digital version of the old Soggettario and the corresponding unstructured lists of headings adopted in 1956-1985 are accessible together with the thesaurus, so that the whole vocabulary, old and new, will be at the fingertips of the indexer, who is forced to work with both tools during this transition period. In the future, it will be possible to integrate the thesaurus into library OPACs. The two parts form a very consistent and detailed resource. The guide is filled with examples; the accurate, clearly expressed and consistent instructions are further enhanced by good use of fonts and type sizes, facilitating reading. The thesaurus is simple and quick to use and very rich, albeit only a prototype; see, for instance, a list of DDC numbers and related terms with their category and facet, and then entries, hierarchies and so on, and the capacity of the structure to show organized knowledge. The excellent outcome of demanding experimentation, this guide ushers in a new era of subject indexing in Italy and is highly recommended. The new method has been designed to be easily teachable to new and experienced indexers.
Now BNI is beginning to use the new language, pointing the way for the adoption of the Nuovo soggettario in Italian libraries: a difficult challenge whose success is not assured. To name only one issue: covering all fields of study requires particular care in treating terms with different specialized meanings; the cooperation of other libraries and institutions is foreseen. At the same time, efforts are being made to assure the system's interoperability outside the library world. It is clear that a great commitment is required. "Too complex a system!" say the naysayers. "Only at the beginning," the proponents reply. The new system goes against the mainstream, both compared with the easy searching offered by search engines - though we know that they must enrich their devices to improve quality, repeating precisely the work on semantic and syntactic relationships that leads formal expressions to the meanings they are intended to communicate - and compared with research into automated devices supporting human work, driven by the need to simplify cataloguing. Here AI is not involved, but automation is widely used to facilitate and support the conscious work of indexers, guided by rules as clear as possible. The advantage of the Nuovo soggettario is its combination of a thesaurus (a much-appreciated tool used across the world) with the equally widespread technique of subject-string construction, that is, the rational and predictable combination of the terms used. The appearance of this original, unparalleled working model may well be a great occasion in the international development of indexing, as, on the one hand, the Nuovo soggettario uses a recognized tool (the thesaurus) and, on the other, by permitting both pre-coordination and post-coordination, it attempts to overcome the fragmentation of increasingly complex and specialized subjects into isolated, single-term descriptors. 
This is a serious proposition that merits consideration from both theoretical and practical points of view - and outside Italy, too."
  5. Dynamism and stability in knowledge organization : Proceedings of the 6th International ISKO-Conference, 10-13 July 2000, Toronto, Canada (2000) 0.04
    0.036589757 = product of:
      0.054884635 = sum of:
        0.04483034 = weight(_text_:systematic in 5892) [ClassicSimilarity], result of:
          0.04483034 = score(doc=5892,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.15786913 = fieldWeight in 5892, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.01953125 = fieldNorm(doc=5892)
        0.010054292 = product of:
          0.020108584 = sum of:
            0.020108584 = weight(_text_:indexing in 5892) [ClassicSimilarity], result of:
              0.020108584 = score(doc=5892,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.105730906 = fieldWeight in 5892, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=5892)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Content
Contains the contributions: MITCHELL, J.S., D. Vizine-Goetz: DDC taxonomy server. ALBRECHTSEN, H.: The dynamism and stability of classification in information ecologies: problems and possibilities. OLSON, H.O.: Reading "Primitive Classification" and misreading cultures: the metaphysics of social and logical classification. JACOB, E.K.: The legacy of pragmatism: implications for knowledge organization in a pluralistic universe. MAI, J.E.: Likeness: a pragmatic approach. SOLOMON, P.: Exploring structuration in knowledge organization: implications for managing the tension between stability and dynamism. CARDOSO, A.M.P., J.C. BEMFICA u. M.N. BORGES: Information and organizational knowledge faced with contemporary knowledge theories: unveiling the strength of the myth. JURISICA, I.: Knowledge organization by systematic knowledge management and discovery. BREITENSTEIN, M.: Classification, culture studies, and the experience of the individual: three methods for knowledge discovery. CHRISTENSEN, F.S.: Power and the production of truth in the sciences. LABARRE, K.: Bliss and Ranganathan: synthesis, synchronicity or sour grapes? NEELAMEGHAN, A.: Dynamism and stability in knowledge organization tools: S.R. Ranganathan's contributions. BROUGHTON, V.: Structural, linguistic and mathematical elements in indexing languages and search engines: implications for the use of index languages in electronic and non-LIS environments. BEGHTOL, C.: A whole, its kinds, and its parts. FALLIS, D., K. MATHIESEN: Consistency rules for classification schemes (or how to organize your beanie babies). CAMPBELL, G.: The relevance of traditional classification principles in the development and use of semantic markup languages for electronic text. KENT, R.E.: The information flow foundation for conceptual knowledge organization.
  6. Antoniou, G.; Harmelen, F. van: ¬A semantic Web primer (2004) 0.04
    0.036589757 = product of:
      0.054884635 = sum of:
        0.04483034 = weight(_text_:systematic in 468) [ClassicSimilarity], result of:
          0.04483034 = score(doc=468,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.15786913 = fieldWeight in 468, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.01953125 = fieldNorm(doc=468)
        0.010054292 = product of:
          0.020108584 = sum of:
            0.020108584 = weight(_text_:indexing in 468) [ClassicSimilarity], result of:
              0.020108584 = score(doc=468,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.105730906 = fieldWeight in 468, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=468)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
The development of the Semantic Web, with machine-readable content, has the potential to revolutionise the World Wide Web and its use. A Semantic Web Primer provides an introduction and guide to this emerging field, describing its key ideas, languages and technologies. Suitable for use as a textbook or for self-study by professionals, it concentrates on undergraduate-level fundamental concepts and techniques that will enable readers to proceed with building applications on their own. It includes exercises, project descriptions and annotated references to relevant online materials. A Semantic Web Primer is the only available book on the Semantic Web to include a systematic treatment of the different languages (XML, RDF, OWL and rules) and technologies (explicit metadata, ontologies, and logic and inference) that are central to Semantic Web development. The book also examines such crucial related topics as ontology engineering and application scenarios. After an introductory chapter, topics covered in succeeding chapters include XML and related technologies that support semantic interoperability; RDF and RDF Schema, the standard data model for machine-processable semantics; and OWL, the W3C-approved standard for a Web ontology language more extensive than RDF Schema; rules, both monotonic and nonmonotonic, in the framework of the Semantic Web; selected application domains and how the Semantic Web would benefit them; the development of ontology-based systems; and current debates on key issues and predictions for the future.
    Footnote
The next chapter introduces the Resource Description Framework (RDF) and RDF Schema (RDFS). Unlike XML, RDF provides a foundation for expressing the semantics of data: it is a standard data model for machine-processable semantics. RDF Schema offers a number of modeling primitives for organizing RDF vocabularies in typed hierarchies. In addition to RDF and RDFS, a query language for RDF, i.e. RQL, is introduced. This chapter and the next are two of the most important in the book. Chapter 4 presents another language called the Web Ontology Language (OWL). Because RDFS is quite primitive as a modeling language for the Web, more powerful languages are needed. A richer language, DAML+OIL, was thus proposed as a joint endeavor of the United States and Europe. OWL takes DAML+OIL as its starting point and aims to be the standardized and broadly accepted ontology language. At the beginning of the chapter, the nontrivial relation with RDF/RDFS is discussed. Then the authors describe the various language elements of OWL in some detail. Moreover, Appendix A contains an abstract OWL syntax, which compresses OWL and makes it much easier to read. Chapter 5 covers both monotonic and nonmonotonic rules. Whereas the previous chapters mainly concentrate on specializations of knowledge representation, this chapter depicts the foundation of knowledge representation and inference. Two examples are also given to explain monotonic and non-monotonic rules, respectively. To get the most out of the chapter, readers should first gain a thorough understanding of predicate logic. Chapter 6 presents several realistic application scenarios to which Semantic Web technology can be applied, including horizontal information products at Elsevier, data integration at Audi, skill finding at Swiss Life, a think tank portal at EnerSearch, e-learning, Web services, multimedia collection indexing, online procurement, and device interoperability. 
These case studies give us some real feelings about the Semantic Web.
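The RDF/RDFS and rules chapters summarized in this review lend themselves to a small illustration. The sketch below is purely hypothetical (toy triples, Python sets standing in for an RDF store, and made-up `ex:` identifiers): it forward-chains two standard RDFS entailment rules, showing the kind of monotonic, machine-processable inference the review describes.

```python
# Toy, monotonic RDFS inference sketch. The vocabulary (ex:, rdf:, rdfs:)
# and the triples are hypothetical illustrations, not examples from the book.
triples = {
    ("ex:Camera", "rdfs:subClassOf", "ex:Device"),
    ("ex:SLR", "rdfs:subClassOf", "ex:Camera"),
    ("ex:d42", "rdf:type", "ex:SLR"),
}

def rdfs_closure(facts):
    """Forward-chain two RDFS entailment rules to a fixed point:
    rdfs11 (subClassOf is transitive) and rdfs9 (type inheritance)."""
    inferred = set(facts)
    while True:
        new = set()
        for s, p, o in inferred:
            for s2, p2, o2 in inferred:
                if p2 == "rdfs:subClassOf" and o == s2:
                    if p == "rdfs:subClassOf":        # rdfs11
                        new.add((s, "rdfs:subClassOf", o2))
                    elif p == "rdf:type":             # rdfs9
                        new.add((s, "rdf:type", o2))
        if new <= inferred:
            return inferred
        inferred |= new

kb = rdfs_closure(triples)
# Derives, among others, ("ex:d42", "rdf:type", "ex:Device")
```

Because the rules only ever add triples, the inference is monotonic in exactly the sense of Chapter 5; nonmonotonic rules (which can retract conclusions) would need a different engine.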
  7. Relational data mining (2001) 0.04
    0.03586427 = product of:
      0.10759281 = sum of:
        0.10759281 = weight(_text_:systematic in 1303) [ClassicSimilarity], result of:
          0.10759281 = score(doc=1303,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.3788859 = fieldWeight in 1303, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.046875 = fieldNorm(doc=1303)
      0.33333334 = coord(1/3)
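The score breakdown above is Lucene's ClassicSimilarity explain output. As a minimal sketch (assuming Lucene's standard formulas tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), and score = queryWeight x fieldWeight x coord), the 0.03586427 shown for this entry can be reproduced:

```python
import math

def classic_similarity(freq, doc_freq, max_docs, field_norm, query_norm, coord):
    """Recompute a single-clause Lucene ClassicSimilarity score,
    mirroring the labeled lines of the explain tree."""
    tf = math.sqrt(freq)                             # tf(freq=2.0) = 1.4142135
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # idf(395, 44218) = 5.715473
    query_weight = idf * query_norm                  # queryWeight = 0.28397155
    field_weight = tf * idf * field_norm             # fieldWeight = 0.3788859
    return query_weight * field_weight * coord

# Entry 7: term "systematic" in doc 1303, fieldNorm 0.046875, coord(1/3)
score = classic_similarity(freq=2.0, doc_freq=395, max_docs=44218,
                           field_norm=0.046875, query_norm=0.049684696,
                           coord=1.0 / 3.0)
# score matches the 0.03586427 shown above, up to float32 rounding
```

Each intermediate value in the comments corresponds to a labeled line in the tree, so the same function can be used to check any single-clause entry on this page.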
    
    Abstract
    As the first book devoted to relational data mining, this coherently written multi-author monograph provides a thorough introduction and systematic overview of the area. The first part introduces the reader to the basics and principles of classical knowledge discovery in databases and inductive logic programming; subsequent chapters by leading experts assess the techniques in relational data mining in a principled and comprehensive way; finally, three chapters deal with advanced applications in various fields and refer the reader to resources for relational data mining. This book will become a valuable source of reference for R&D professionals active in relational data mining. Students as well as IT professionals and ambitious practitioners interested in learning about relational data mining will appreciate the book as a useful text and gentle introduction to this exciting new field.
  8. Kageura, K.: ¬The dynamics of terminology : a descriptive theory of term formation and terminological growth (2002) 0.04
    0.035496555 = product of:
      0.05324483 = sum of:
        0.04483034 = weight(_text_:systematic in 1787) [ClassicSimilarity], result of:
          0.04483034 = score(doc=1787,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.15786913 = fieldWeight in 1787, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1787)
        0.008414488 = product of:
          0.016828977 = sum of:
            0.016828977 = weight(_text_:22 in 1787) [ClassicSimilarity], result of:
              0.016828977 = score(doc=1787,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.09672529 = fieldWeight in 1787, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1787)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
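The two-clause tree above adds Lucene's coordination factor on top of the per-clause tf-idf weights. A sketch of just the aggregation step (the per-clause weights are copied verbatim from the tree; only the combination is recomputed):

```python
def coord(overlap, max_overlap):
    """Lucene's coordination factor: fraction of query clauses matched."""
    return overlap / max_overlap

# Entry 8 (doc 1787): per-clause weights taken from the explain tree above.
w_systematic = 0.04483034               # weight(_text_:systematic ...)
w_22_inner = 0.016828977                # weight(_text_:22 ...), inside a
w_22 = w_22_inner * coord(1, 2)         # nested boolean query, coord(1/2)
score = (w_systematic + w_22) * coord(2, 3)   # 2 of 3 top-level clauses match
```

This is why entries matching more of the three query clauses rank higher even when their individual term weights are smaller: the outer coord(2/3) versus coord(1/3) multiplier dominates.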
    
    Abstract
    The discovery of rules for the systematicity and dynamics of terminology creation is essential for a sound basis of a theory of terminology. This quest provides the driving force for The Dynamics of Terminology, in which Dr. Kageura demonstrates the interaction of these two factors on a specific corpus of Japanese terminology which, beyond the necessary linguistic circumstances, also has a model character for similar studies. His detailed examination of the relationships between terms and their constituent elements, the relationships among the constituent elements, and the types of conceptual combination used in the construction of the terminology permits deep insights into the systematic thought processes underlying term creation. To compensate for the inherent limitations of a purely descriptive analysis of conceptual patterns, Dr. Kageura offers a quantitative analysis of the patterns of growth of the terminology.
    Date
    22. 3.2008 18:18:53
  9. Research and advanced technology for digital libraries : 7th European conference, ECDL2003 Trondheim, Norway, August 17-22, 2003. Proceedings (2003) 0.03
    0.032956343 = product of:
      0.098869026 = sum of:
        0.098869026 = sum of:
          0.071942665 = weight(_text_:indexing in 2426) [ClassicSimilarity], result of:
            0.071942665 = score(doc=2426,freq=10.0), product of:
              0.19018644 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.049684696 = queryNorm
              0.3782744 = fieldWeight in 2426, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.03125 = fieldNorm(doc=2426)
          0.026926363 = weight(_text_:22 in 2426) [ClassicSimilarity], result of:
            0.026926363 = score(doc=2426,freq=2.0), product of:
              0.17398734 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049684696 = queryNorm
              0.15476047 = fieldWeight in 2426, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=2426)
      0.33333334 = coord(1/3)
    
    Abstract
    This book constitutes the refereed proceedings of the 7th European Conference on Research and Advanced Technology for Digital Libraries, ECDL 2003, held in Trondheim, Norway in August 2003. The 39 revised full papers and 8 revised short papers presented were carefully reviewed and selected from 161 submissions. The papers are organized in topical sections on uses, users, and user interfaces; metadata applications; annotation and recommendation; automatic classification and indexing; Web technologies; topical crawling and subject gateways; architectures and systems; knowledge organization; collection building and management; information retrieval; digital preservation; and indexing and searching of special documents and collection information.
    Content
    Inhalt: Uses, Users, and User Interaction Metadata Applications - Semantic Browsing / Alexander Faaborg, Carl Lagoze Annotation and Recommendation Automatic Classification and Indexing - Cross-Lingual Text Categorization / Nuria Bel, Cornelis H.A. Koster, Marta Villegas - Automatic Multi-label Subject Indexing in a Multilingual Environment / Boris Lauser, Andreas Hotho Web Technologies Topical Crawling, Subject Gateways - VASCODA: A German Scientific Portal for Cross-Searching Distributed Digital Resource Collections / Heike Neuroth, Tamara Pianos Architectures and Systems Knowledge Organization: Concepts - The ADEPT Concept-Based Digital Learning Environment / T.R. Smith, D. Ancona, O. Buchel, M. Freeston, W. Heller, R. Nottrott, T. Tierney, A. Ushakov - A User Evaluation of Hierarchical Phrase Browsing / Katrina D. Edgar, David M. Nichols, Gordon W. Paynter, Kirsten Thomson, Ian H. Witten - Visual Semantic Modeling of Digital Libraries / Qinwei Zhu, Marcos André Gonçalves, Rao Shen, Lillian Cassell, Edward A. Fox Collection Building and Management Knowledge Organization: Authorities and Works - Automatic Conversion from MARC to FRBR / Christian Mönch, Trond Aalberg Information Retrieval in Different Application Areas Digital Preservation Indexing and Searching of Special Document and Collection Information
  10. Batley, S.: Classification in theory and practice (2005) 0.03
    0.031492937 = product of:
      0.047239404 = sum of:
        0.03586427 = weight(_text_:systematic in 1170) [ClassicSimilarity], result of:
          0.03586427 = score(doc=1170,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.1262953 = fieldWeight in 1170, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.015625 = fieldNorm(doc=1170)
        0.011375134 = product of:
          0.022750268 = sum of:
            0.022750268 = weight(_text_:indexing in 1170) [ClassicSimilarity], result of:
              0.022750268 = score(doc=1170,freq=4.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.11962087 = fieldWeight in 1170, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1170)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This book examines a core topic in traditional librarianship: classification. Classification has often been treated as a sub-set of cataloguing and indexing, with relatively few basic textbooks concentrating solely on the theory and practice of classifying resources. This book attempts to redress the balance somewhat. The aim is to demystify a complex subject by providing a sound theoretical underpinning, together with practical advice and promotion of practical skills. The text is arranged into five chapters: Chapter 1: Classification in theory and practice. This chapter explores theories of classification in broad terms and then focuses on the basic principles of library classification, introducing readers to technical terminology and different types of classification scheme. The next two chapters examine individual classification schemes in depth. Each scheme is explained using frequent examples to illustrate basic features. Working through the exercises provided should be enjoyable and will enable readers to gain practical skills in using the three most widely used general library classification schemes: Dewey Decimal Classification, Library of Congress Classification and Universal Decimal Classification. Chapter 2: Classification schemes for general collections. Dewey Decimal and Library of Congress classifications are the most useful and popular schemes for use in general libraries. The background, coverage and structure of each scheme are examined in detail in this chapter. Features of the schemes and their application are illustrated with examples. Chapter 3: Classification schemes for specialist collections. Dewey Decimal and Library of Congress may not provide sufficient depth of classification for specialist collections. In this chapter, classification schemes that cater to specialist needs are examined.
Universal Decimal Classification is superficially very much like Dewey Decimal, but possesses features that make it a good choice for specialist libraries or special collections within general libraries. It is recognised that general schemes, no matter how deep their coverage, may not meet the classification needs of some collections. An answer may be to create a special classification scheme, and this process is examined in detail here. Chapter 4: Classifying electronic resources. Classification has been reborn in recent years with an increasing need to organise digital information resources. A lot of work in this area has been conducted within the computer science discipline, but it uses the basic principles of classification and thesaurus construction. This chapter takes a broad view of the theoretical and practical issues involved in creating classifications for digital resources by examining subject trees, taxonomies and ontologies. Chapter 5: Summary. This chapter provides a brief overview of concepts explored in depth in previous chapters. Development of practical skills is emphasised throughout the text. It is only through using classification schemes that a deep understanding of their structure and unique features can be gained. Although all the major schemes covered in the text are available on the Web, it is recommended that hard-copy versions are used by those wishing to become acquainted with their overall structure. Recommended readings are supplied at the end of each chapter and provide useful sources of additional information and detail. Classification demands precision and the application of analytical skills; working carefully through the examples and the practical exercises should help readers to improve these faculties. Anyone who enjoys cryptic crosswords should recognise a parallel: classification often involves taking the meaning of something apart and then reassembling it in a different way.
    Footnote
    Rez. in: KO 31(2005), no.4, S.257-258 (B.H. Kwasnik): "According to the author, there have been many books that address the general topic of cataloging and indexing, but relatively few that focus solely on classification. This compact and clearly written book promises to "redress the balance," and it does. From the outset the author identifies this as a textbook - one that provides theoretical underpinnings, but has as its main goal the provision of "practical advice and the promotion of practical skills" (p. vii). This is a book for the student, or for the practitioner who would like to learn about other applied bibliographic classification systems, and it considers classification as a pragmatic solution to a pragmatic problem: that of organizing materials in a collection. It is not aimed at classification researchers who study the nature of classification per se, nor at those whose primary interest is in classification as a manifestation of human cultural, social, and political values. Having said that, the author's systematic descriptions provide an exceptionally lucid and conceptually grounded description of the prevalent bibliographic classification schemes as they exist, and thus the book could serve as a baseline for further comparative analyses or discussions by anyone pursuing such investigations. What makes this book so appealing, even to someone who has immersed herself in this area for many years as a practicing librarian, a teacher, and a researcher? I especially liked the conceptual framework that supported the detailed descriptions. The author defines and provides examples of the fundamental concepts of notation and the types of classifications, and then develops the notions of conveying order, brevity and simplicity, being memorable, expressiveness, flexibility and hospitality.
These basic terms are then used throughout to analyze and comment on the classifications described in the various chapters: DDC, LCC, UDC, and some well-chosen examples of faceted schemes (Colon, Bliss, London Classification of Business Studies, and a hypothetical library of photographs).
  11. Kling, R.; Rosenbaum, H.; Sawyer, S.: Understanding and communicating social informatics : a framework for studying and teaching the human contexts of information and communication technologies (2005) 0.03
    0.030256119 = product of:
      0.045384176 = sum of:
        0.03586427 = weight(_text_:systematic in 3312) [ClassicSimilarity], result of:
          0.03586427 = score(doc=3312,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.1262953 = fieldWeight in 3312, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.015625 = fieldNorm(doc=3312)
        0.009519907 = product of:
          0.019039813 = sum of:
            0.019039813 = weight(_text_:22 in 3312) [ClassicSimilarity], result of:
              0.019039813 = score(doc=3312,freq=4.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.109432176 = fieldWeight in 3312, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=3312)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Classification
    303.48/33 22
    DDC
    303.48/33 22
    Footnote
    Throughout the book, the authors portray social informatics research as being underutilized and misunderstood outside the field, and they should be commended for acknowledging and addressing these problems head-on. Yes, there is resistance from ICT professionals and faculty and students in technical disciplines, most of whom have not been trained to consider social and institutional issues as part of their work. However, this stance sometimes results in a defensive tone. Social informatics research is repeatedly described as "systematic," "rigorous," and "empirically anchored," as if in preemptive response to doubts about the seriousness of social informatics scholarship. Chapter titles such as "Perceptions of the Relevance of Social Informatics Research" and "Raising the Profile of Social Informatics Research" contribute to this impression. Nonscholarly observers are dismissed as "pundits," and students who lack a social informatics perspective have "typically naïve" conceptualizations (p. 100). The concluding chapter ends not with a powerful and memorable synthesis, but with a final plea: "Taking Social Informatics Seriously." The content of the book is strong enough to stand on its own, but the manner in which it is presented sometimes detracts from the message. The book's few weaknesses can be viewed simply as the price of attempting both to survey social informatics research findings and to articulate their importance for such a diverse set of audiences, in such a brief volume. The central tension of the book, and the field of social informatics as a whole, is that on the one hand the particular-use context of an ICT is of critical importance, but furthering a social informatics agenda requires that some context-independent findings and tools be made evident to those outside the field. 
Understanding and Communicating Social Informatics is an important and worthwhile contribution toward reconciling this tension, and translating social informatics research findings into better real-world systems."
  12. Cleveland, D.B.; Cleveland, A.D.: Introduction to abstracting and indexing (2001) 0.03
    0.02997611 = product of:
      0.08992833 = sum of:
        0.08992833 = product of:
          0.17985666 = sum of:
            0.17985666 = weight(_text_:indexing in 316) [ClassicSimilarity], result of:
              0.17985666 = score(doc=316,freq=10.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.94568604 = fieldWeight in 316, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.078125 = fieldNorm(doc=316)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    LCSH
    Indexing
    RSWK
    Indexing / Abstracting (GBV)
    Subject
    Indexing / Abstracting (GBV)
    Indexing
  13. Dhyani, P.: Classifying with Dewey Decimal Classification (Ed. 19th and 20th) (2002) 0.03
    0.029886894 = product of:
      0.08966068 = sum of:
        0.08966068 = weight(_text_:systematic in 1473) [ClassicSimilarity], result of:
          0.08966068 = score(doc=1473,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.31573826 = fieldWeight in 1473, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1473)
      0.33333334 = coord(1/3)
    
    Abstract
    Classifying with Dewey Decimal Classification (Editions 19 and 20) is a pathfinder for those who have problems in the practical application of DDC and for those who are eager to reclassify their documents from Edition 19 to Edition 20. It highlights the differences between Editions 19 and 20 and the new features of Edition 20. The book simplifies practical application and the number-building procedure by providing systematic, stepwise guidance. It vividly explains the way the add device is applied, the workings of the seven tables and special tables, and the classes where the classifier has the autonomy to assign class numbers to specific documents as per the local needs of the library. The book is so planned that it helps in easily understanding the nuances of both editions of DDC. Accordingly, it is useful for the library community in general and students, teachers and classifiers in particular.
  14. Raju, A.A.N.: Colon Classification: theory and practice : a self instructional manual (2001) 0.03
    0.029886894 = product of:
      0.08966068 = sum of:
        0.08966068 = weight(_text_:systematic in 1482) [ClassicSimilarity], result of:
          0.08966068 = score(doc=1482,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.31573826 = fieldWeight in 1482, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1482)
      0.33333334 = coord(1/3)
    
    Abstract
    Colon Classification (CC) is truly the first freely faceted scheme for library classification, devised and propagated by Dr. S.R. Ranganathan. The scheme is taught in theory and practice to students in most of the LIS schools in India and abroad. Many manuals, guide books and introductory works have been published on CC in the past, but the present work treads a new path in presenting CC to the student, teaching and professional community. The present work, Colon Classification: Theory and Practice; A Self Instructional Manual, is the result of the author's twenty-five years of experience teaching the theory and practice of CC to students of LIS. For the first time a concerted and systematic attempt has been made to present the theory and practice of CC in a self-instructional mode, keeping in view the requirements of student learners of open universities and distance education institutions in particular. The other significant and novel features introduced in this manual are: presenting the scope of each block, consisting of certain units followed by objectives, introduction, sections, sub-sections, self-check exercises, a glossary and an assignment for each unit. It is hoped that all these features will help the users/readers of this manual to understand and grasp quickly the intricacies involved in the theory and practice of CC (6th Edition). The manual is presented in three blocks and twelve units.
  15. Lancaster, F.W.: Indexing and abstracting in theory and practice (2003) 0.03
    0.027863272 = product of:
      0.083589815 = sum of:
        0.083589815 = product of:
          0.16717963 = sum of:
            0.16717963 = weight(_text_:indexing in 4913) [ClassicSimilarity], result of:
              0.16717963 = score(doc=4913,freq=24.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.8790302 = fieldWeight in 4913, product of:
                  4.8989797 = tf(freq=24.0), with freq of:
                    24.0 = termFreq=24.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4913)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Content
    Covers: indexing principles and practice; precoordinate indexes; consistency and quality of indexing; types and functions of abstracts; writing an abstract; evaluation theory and practice; approaches used in indexing and abstracting services; indexing enhancement; natural language in information retrieval; indexing and abstracting of imaginative works; databases of images and sound; automatic indexing and abstracting; the future of indexing and abstracting services
    LCSH
    Indexing
    Indexing / Problems, exercises, etc.
    Subject
    Indexing
    Indexing / Problems, exercises, etc.
  16. Multimedia content and the Semantic Web : methods, standards, and tools (2005) 0.02
    0.02470427 = product of:
      0.07411281 = sum of:
        0.07411281 = sum of:
          0.044964164 = weight(_text_:indexing in 150) [ClassicSimilarity], result of:
            0.044964164 = score(doc=150,freq=10.0), product of:
              0.19018644 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.049684696 = queryNorm
              0.23642151 = fieldWeight in 150, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.01953125 = fieldNorm(doc=150)
          0.029148644 = weight(_text_:22 in 150) [ClassicSimilarity], result of:
            0.029148644 = score(doc=150,freq=6.0), product of:
              0.17398734 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049684696 = queryNorm
              0.16753313 = fieldWeight in 150, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.01953125 = fieldNorm(doc=150)
      0.33333334 = coord(1/3)
    
    Classification
    006.7 22
    Date
    7. 3.2007 19:30:22
    DDC
    006.7 22
    Footnote
    Rez. in: JASIST 58(2007) no.3, S.457-458 (A.M.A. Ahmad): "The concept of the semantic web has emerged because search engines and text-based searching are no longer adequate, as these approaches involve an extensive information retrieval process. The deployed searching and retrieving descriptors are naturally subjective, and their deployment is often restricted to the specific application domain for which the descriptors were configured. The new era of information technology imposes different kinds of requirements and challenges. Automatically extracted audiovisual features are required, as these features are more objective, domain-independent, and more native to audiovisual content. This book is a useful guide for researchers, experts, students, and practitioners; it is a very valuable reference and can lead them through their exploration and research in multimedia content and the semantic web. The book is well organized and introduces the concepts of the semantic web and multimedia content analysis to the reader through a logical sequence, from standards and hypotheses through system examples, presenting relevant tools and methods. But in some chapters readers will need a good technical background to understand some of the details. Readers may attain sufficient knowledge here to start projects or research related to the book's theme; recent results and articles related to the active research area of integrating multimedia with semantic web technologies are included. This book includes full descriptions of approaches to specific problem domains such as content search, indexing, and retrieval. It will be very useful to researchers in the multimedia content analysis field who wish to explore the benefits of emerging semantic web technologies in applying multimedia content approaches. The first part of the book covers the definitions of the two basic terms, multimedia content and semantic web. The Moving Picture Experts Group standards MPEG7 and MPEG21 are quoted extensively. In addition, the means of multimedia content description are elaborated upon and schematically drawn. This extensive description is introduced by authors who are actively involved in those standards and have been participating in the work of the International Organization for Standardization (ISO)/MPEG for many years. On the other hand, this results in a bias against ad hoc or nonstandard tools for multimedia description in favor of the standard approaches. This is a general book on multimedia content; more emphasis on general multimedia description and extraction could have been provided.
    Semantic web technologies are explained, and ontology representation is emphasized. There is an excellent summary of the fundamental theory behind applying a knowledge-engineering approach to vision problems. This summary represents the concept of the semantic web and multimedia content analysis. A definition of the fuzzy knowledge representation that can be used for realization in multimedia content applications is provided, with a comprehensive analysis. The second part of the book introduces multimedia content analysis approaches and applications. In addition, some examples of methods applicable to multimedia content analysis are presented. Multimedia content analysis is a very diverse field that concerns many other research fields at the same time; this creates strong diversity issues, as everything from low-level features (e.g., colors, DCT coefficients, motion vectors, etc.) up to the very high, semantic level (e.g., objects, events, tracks, etc.) is involved. The second part includes topics on structure identification (e.g., shot detection for video sequences) and object-based video indexing. These conventional analysis methods are supplemented by results on semantic multimedia analysis, including three detailed chapters on the development and use of knowledge models for automatic multimedia analysis. Starting from object-based indexing and continuing with machine learning, these three chapters are very logically organized. Because of the diversity of this research field, including several chapters of recent research results is not sufficient to cover the state of the art of multimedia. The editors of the book should write an introductory chapter about multimedia content analysis approaches, basic problems, and technical issues and challenges, and try to survey the state of the art of the field and thus introduce it to the reader.
    The final part of the book discusses research in multimedia content management systems and the semantic web, and presents examples and applications for semantic multimedia analysis in search and retrieval systems. These chapters describe example systems in which current projects have been implemented, and include extensive results and real demonstrations. For example, real case scenarios such as e-commerce, medical applications, and Web services have been introduced. Topics in natural language, speech, and image processing techniques and their application to multimedia indexing and content-based retrieval have been elaborated upon with extensive examples and deployment methods. The editors of the book themselves provide the readers with a chapter about their latest research results on knowledge-based multimedia content indexing and retrieval. Some interesting applications for multimedia content and the semantic web are introduced. Applications that have taken advantage of the metadata provided by MPEG7 in order to realize advanced access services for multimedia content are described. The applications discussed in the third part of the book provide useful guidance to researchers and practitioners planning to implement semantic multimedia analysis techniques in new research and development projects in both academia and industry. A fourth part should be added to this book: performance measurements for integrated approaches of multimedia analysis and the semantic web. Performance of the semantic approach is a very sophisticated issue and requires extensive elaboration and effort. Measuring semantic search is an ongoing research area; several chapters concerning performance measurement and analysis would be required to adequately cover this area and introduce it to readers."
  17. Ranganathan, S.R.: Colon Classification (Sixth Edition) (2007) 0.02
    Abstract
    THE COLON CLASSIFICATION is the latest scheme in the field of classification. It has revolutionised thinking in classification and stimulated research in it. This new method is suited to small and large, general and special libraries, and can be used in classifying whole books as well as individual articles in a periodical or sections in a book. It is being taught in all schools of Library Science all over the world, not only as a means of arranging books on shelves but also as a means of finding the focus of a book in a systematic way and of determining the requirements of a reader while doing reference service. The new methodologies in classification invented as part of the Colon Classification - Facet Analysis, Phase Analysis and Zone Analysis - have lifted practical classification from guesswork to scientific method. They form an important theme in international conferences on information retrieval.
  18. Beyond book indexing : how to get started in Web indexing, embedded indexing and other computer-based media (2000) 0.02
    Abstract
    Are you curious about new indexing technologies? Would you like to develop and create innovative indexes that provide access to online resources, multimedia, or online help? Do you want to learn new skills and expand your marketing possibilities? This book provides an in-depth look at current and emerging computer-based technologies and offers suggestions for obtaining work in these fields. Extensive references and a glossary round out this informative and exciting new book.
    Content
    Contains the contributions: Part 1: Beyond stand-alone indexes: embedded indexing: WRIGHT, J.C.: The world of embedded indexing; MONCRIEF, L.: Indexing computer-related documents - Part 2: Beyond the book: Web indexing: WALKER, D.: Subject-oriented Web indexing; BROCCOLI, K. u. G.V. RAVENSWAAY: Web indexing - anchors away; MAISLIN, S.: Ripping out the pages; ROWLAND, M.J.: Plunging in: Creating a Web site index for an online newsletter - Part 3: Special topics in computer-based indexing: ROWLAND, M.J.: <Meta> tags; WOODS, X.B.: Envisioning the word: Multimedia CD-ROM indexing; HOLBERT, S.: How to index Windows-based online help - Part 4: Beyond traditional marketing - selling yourself in hyperspace: ROWLAND, M.J.: Web site design for indexers; RICE, R.: Putting sample indexes on your Web site; CONNOLLY, D.A.: The many uses of Email discussion lists
  19. Parekh, R.L.: Advanced indexing and abstracting practices (2000) 0.02
    Abstract
    Indexing and abstracting are not activities that should be looked upon as ends in themselves. It is the results of these activities that should be evaluated, and this can only be done within the context of a particular database, whether in printed or machine-readable form. In this context, the indexing can be judged successful if it allows searchers to locate items they want without having to look at many they do not want. This book is intended primarily as a text for teaching indexing and abstracting in library and information science. It is of immense value to all individuals and institutions involved in information retrieval and related activities, including librarians, managers of information centres, and database producers.
    Content
    Contents: 1. Indexing and Abstracting 2. Automatic Indexing and Automatic Abstracting 3. Principles of Indexing 4. Periodicals Listing and Accessioning 5. Online Computer Service 6. Dialog, Searching and Bibliographic Display 7. Books 8. Bibliographic Control 9. Abstracting Functions 10. Acquisition System 11. Future of Indexing and Abstracting Services
  20. Giunti, M.C.: Soggettazione (2002) 0.02
    Footnote
    Translation of the title: Alphabetical subject indexing
