Search (312 results, page 1 of 16)

  • × year_i:[2000 TO 2010}
  • × type_ss:"m"
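  The two active filters above use Lucene/Solr field-query syntax: year_i:[2000 TO 2010} is a range over the integer year field (square bracket = 2000 included, curly brace = 2010 excluded), and type_ss:"m" restricts the type field to the value "m" (presumably monographs). As a sketch only - the endpoint, handler and original query string are not shown on this page, so the parameter names below follow common Solr conventions rather than this system's actual API - the same restriction could be expressed as filter queries:

    from urllib.parse import urlencode

    params = [
        ("q", "..."),                       # the original search terms (not shown above)
        ("fq", "year_i:[2000 TO 2010}"),    # year range: 2000 inclusive, 2010 exclusive
        ("fq", 'type_ss:"m"'),              # restrict to document type "m"
        ("rows", "20"),                     # 20 hits per page (312 results / 16 pages)
    ]
    print(urlencode(params))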
  1. Kageura, K.: The dynamics of terminology : a descriptive theory of term formation and terminological growth (2002) 0.08
    0.08313005 = product of:
      0.11084006 = sum of:
        0.005097042 = product of:
          0.020388167 = sum of:
            0.020388167 = weight(_text_:based in 1787) [ClassicSimilarity], result of:
              0.020388167 = score(doc=1787,freq=6.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.14414644 = fieldWeight in 1787, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1787)
          0.25 = coord(1/4)
        0.09779277 = weight(_text_:term in 1787) [ClassicSimilarity], result of:
          0.09779277 = score(doc=1787,freq=24.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.44646066 = fieldWeight in 1787, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1787)
        0.007950256 = product of:
          0.015900511 = sum of:
            0.015900511 = weight(_text_:22 in 1787) [ClassicSimilarity], result of:
              0.015900511 = score(doc=1787,freq=2.0), product of:
                0.16438834 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04694356 = queryNorm
                0.09672529 = fieldWeight in 1787, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1787)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
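     The tree above is standard Lucene ClassicSimilarity "explain" output for this hit. As a minimal sketch (plain Python, not Lucene code), the displayed 0.08313005 can be re-derived from the factors printed for doc 1787, where each clause score is queryWeight x fieldWeight, with queryWeight = idf x queryNorm, fieldWeight = tf x idf x fieldNorm, tf = sqrt(freq), and coord() scaling for partially matched boolean groups:

       from math import sqrt

       QUERY_NORM = 0.04694356   # queryNorm printed in the tree above
       FIELD_NORM = 0.01953125   # fieldNorm (length norm) stored for doc 1787

       def clause(freq, idf, coord=1.0):
           query_weight = idf * QUERY_NORM                 # idf * queryNorm
           field_weight = sqrt(freq) * idf * FIELD_NORM    # tf(freq) * idf * fieldNorm
           return query_weight * field_weight * coord

       score = 0.75 * (                                    # outer coord(3/4): 3 of 4 clauses matched
           clause(freq=6.0,  idf=3.0129938, coord=0.25)    # _text_:based, coord(1/4)
         + clause(freq=24.0, idf=4.66603)                  # _text_:term
         + clause(freq=2.0,  idf=3.5018296, coord=0.5)     # _text_:22, coord(1/2)
       )
       print(score)   # ~0.08313; tiny deviations come from the rounding of the printed factors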
    
    Abstract
     The discovery of rules for the systematicity and dynamics of terminology creation is essential for a sound basis of a theory of terminology. This quest provides the driving force for The Dynamics of Terminology, in which Dr Kageura demonstrates the interaction of these two factors on a specific corpus of Japanese terminology which, beyond the necessary linguistic circumstances, also has a model character for similar studies. His detailed examination of the relationships between terms and their constituent elements, the relationships among the constituent elements and the type of conceptual combinations used in the construction of the terminology permits deep insights into the systematic thought processes underlying term creation. To compensate for the inherent limitation of a purely descriptive analysis of conceptual patterns, Dr Kageura offers a quantitative analysis of the patterns of the growth of terminology.
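     Part III of the book treats terminological growth quantitatively. Purely as an illustration of what a growth pattern is (this is not Kageura's model, and the sample data below is invented), the growth of a vocabulary can be made concrete by counting how many distinct terms or morphemes have been seen after each successive occurrence in a corpus:

       # Toy growth curve: distinct items observed after each token in a corpus sample.
       def growth_curve(tokens):
           seen, curve = set(), []
           for tok in tokens:
               seen.add(tok)
               curve.append(len(seen))   # vocabulary size so far
           return curve

       sample = ["information", "retrieval", "information", "system",
                 "retrieval", "language", "information", "indexing"]
       print(growth_curve(sample))       # [1, 2, 2, 3, 3, 4, 4, 5]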
    Content
    PART I: Theoretical Background 7 Chapter 1. Terminology: Basic Observations 9 Chapter 2. The Theoretical Framework for the Study of the Dynamics of Terminology 25 PART II: Conceptual Patterns of Term Formation 43 Chapter 3. Conceptual Patterns of Term Formation: The Basic Descriptive Framework 45 Chapter 4. Conceptual Categories for the Description of Formation Patterns of Documentation Terms 61 Chapter 5. Intra-Term Relations and Conceptual Specification Patterns 91 Chapter 6. Conceptual Patterns of the Formation of Documentation Terms 115 PART III: Quantitative Patterns of Terminological Growth 163 Chapter 7. Quantitative Analysis of the Dynamics of Terminology: A Basic Framework 165 Chapter 8. Growth Patterns of Morphemes in the Terminology of Documentation 183 Chapter 9. Quantitative Dynamics in Term Formation 201 PART IV: Conclusions 247 Chapter 10. Towards Modelling Term Formation and Terminological Growth 249 Appendices 273 Appendix A. List of Conceptual Categories 275 Appendix B. Lists of Intra-Term Relations and Conceptual Specification Patterns 279 Appendix C. List of Terms by Conceptual Categories 281 Appendix D. List of Morphemes by Conceptual Categories 295.
    Date
    22. 3.2008 18:18:53
    Footnote
     Rez. in: Knowledge organization 30(2003) no.2, S.112-113 (L. Bowker): "Terminology is generally understood to be the activity that is concerned with the identification, collection and processing of terms; terms are the lexical items used to describe concepts in specialized subject fields. Terminology is not always acknowledged as a discipline in its own right; it is sometimes considered to be a subfield of related disciplines such as lexicography or translation. However, a growing number of researchers are beginning to argue that terminology should be recognized as an autonomous discipline with its own theoretical underpinnings. Kageura's book is a valuable contribution to the formulation of a theory of terminology and will help to establish this discipline as an independent field of research. The general aim of this text is to present a theory of term formation and terminological growth by identifying conceptual regularities in term creation and by laying the foundations for the analysis of terminological growth patterns. The approach used is a descriptive one, which means that it is based on observations taken from a corpus. It is also synchronic in nature and therefore does not attempt to account for the evolution of terms over a given period of time (though it does endeavour to provide a means for predicting possible formation patterns of new terms). The descriptive, corpus-based approach is becoming very popular in terminology circles; however, it does pose certain limitations. To compensate for this, Kageura complements his descriptive analysis of conceptual patterns with a quantitative analysis of the patterns of the growth of terminology. Many existing investigations treat only a limited number of terms, using these for exemplification purposes. Kageura argues strongly (p. 31) that any theory of terms or terminology must be based on the examination of the terminology of a domain (i.e., a specialized subject field) in its entirety since it is only with respect to an individual domain that the concept of "term" can be established. To demonstrate the viability of his theoretical approach, Kageura has chosen to investigate and describe the domain of documentation, using Japanese terminological data. The data in the corpus are derived from a glossary (Wersig and Neveling 1984), and although this glossary is somewhat outdated (a fact acknowledged by the author), the data provided are nonetheless sufficient for demonstrating the viability of the approach, which can later be extended and applied to other languages and domains.
     Unlike some terminology researchers, Kageura has been careful not to overgeneralize the applicability of his work, and he points out the limitations of his study, a number of which are summarized on pages 254-257. For example, Kageura acknowledges that his contribution should properly be viewed as a theory of term formation and terminological growth in the field of documentation. Moreover, Kageura notes that this study does not distinguish the general part and the domain-dependent part of the conceptual system, nor does it fully explore the multidimensionality of the viewpoints of conceptual categorization. Kageura's honesty with regard to the complexity of terminological issues and the challenges associated with the formation of a theory of terminology is refreshing since too often in the past, the results of terminology research have been somewhat naively presented as being absolutely clear-cut and applicable in all situations."
  2. Bruce, H.: The user's view of the Internet (2002) 0.07
    0.06993598 = sum of:
      0.003058225 = product of:
        0.0122329 = sum of:
          0.0122329 = weight(_text_:based in 4344) [ClassicSimilarity], result of:
            0.0122329 = score(doc=4344,freq=6.0), product of:
              0.14144066 = queryWeight, product of:
                3.0129938 = idf(docFreq=5906, maxDocs=44218)
                0.04694356 = queryNorm
              0.08648786 = fieldWeight in 4344, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.0129938 = idf(docFreq=5906, maxDocs=44218)
                0.01171875 = fieldNorm(doc=4344)
        0.25 = coord(1/4)
      0.023954237 = weight(_text_:term in 4344) [ClassicSimilarity], result of:
        0.023954237 = score(doc=4344,freq=4.0), product of:
          0.21904005 = queryWeight, product of:
            4.66603 = idf(docFreq=1130, maxDocs=44218)
            0.04694356 = queryNorm
          0.10936008 = fieldWeight in 4344, product of:
            2.0 = tf(freq=4.0), with freq of:
              4.0 = termFreq=4.0
            4.66603 = idf(docFreq=1130, maxDocs=44218)
            0.01171875 = fieldNorm(doc=4344)
      0.03815336 = weight(_text_:frequency in 4344) [ClassicSimilarity], result of:
        0.03815336 = score(doc=4344,freq=4.0), product of:
          0.27643865 = queryWeight, product of:
            5.888745 = idf(docFreq=332, maxDocs=44218)
            0.04694356 = queryNorm
          0.13801746 = fieldWeight in 4344, product of:
            2.0 = tf(freq=4.0), with freq of:
              4.0 = termFreq=4.0
            5.888745 = idf(docFreq=332, maxDocs=44218)
            0.01171875 = fieldNorm(doc=4344)
      0.0047701527 = product of:
        0.0095403055 = sum of:
          0.0095403055 = weight(_text_:22 in 4344) [ClassicSimilarity], result of:
            0.0095403055 = score(doc=4344,freq=2.0), product of:
              0.16438834 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04694356 = queryNorm
              0.058035173 = fieldWeight in 4344, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.01171875 = fieldNorm(doc=4344)
        0.5 = coord(1/2)
    
    Footnote
     Rez. in: JASIST. 54(2003) no.9, S.906-908 (E.G. Ackermann): "In this book Harry Bruce provides a construct or view of "how and why people are using the Internet," which can be used "to inform the design of new services and to augment our usings of the Internet" (pp. viii-ix; see also pp. 183-184). In the process, he develops an analytical tool that I term the Metatheory of Circulating Usings, and provides an impressive distillation of a vast quantity of research data from previous studies. The book's perspective is explicitly user-centered, as is its theoretical bent. The book is organized into a preface, acknowledgments, and five chapters (Chapter 1, "The Internet Story;" Chapter 2, "Technology and People;" Chapter 3, "A Focus on Usings;" Chapter 4, "Users of the Internet;" Chapter 5, "The User's View of the Internet"), followed by an extensive bibliography and short index. Any notes are found at the end of the relevant chapter. The book is illustrated with figures and tables, which are clearly presented and labeled. The text is clearly written in a conversational style, relatively jargon-free, and contains no quantification. The intellectual structure follows that of the book for the most part, with some exceptions. The definitions of several key concepts or terms are scattered throughout the book, often appearing much later after extensive earlier use. For example, "stakeholders," used repeatedly from p. viii onward, remains undefined until late in the book (pp. 175-176). The study's method is presented in Chapter 3 (p. 34), relatively late in the book. Its metatheoretical basis is developed in two widely separated places (Chapter 3, pp. 56-61, and Chapter 5, pp. 157-159) for no apparent reason. The goal or purpose of presenting the data in Chapter 4 is explained after its presentation (p. 129) rather than earlier with the limits of the data (p. 69). Although none of these problems is crippling to the book, they do introduce an element of unevenness into the flow of the narrative that can confuse the reader and unnecessarily obscure the author's intent. Bruce provides the contextual background of the book in Chapter 1 (The Internet Story) in the form of a brief history of the Internet followed by a brief delineation of the early popular views of the Internet as an information superstructure. His recapitulation of the development of the Internet from its origins as ARPANET in 1957 to 1995 touches on the highlights of this familiar story that will not be retold here. The early popular views or characterizations of the Internet as an "information society" or "information superhighway" revolved primarily around its function as an information infrastructure (p. 13). These views shared three main components (technology, political values, and implied information values) as well as a set of common assumptions. The technology aspect focused on the Internet as a "common ground on which digital information products and services achieve interoperability" (p. 14). The political values provided a "vision of universal access to distributed information resources and the benefits that this will bring to the lives of individual people and to society in general" (p. 14). The implied communication and information values portrayed the Internet as a "medium for human creativity and innovation" (p. 14).
 These popular views also assumed that "good decisions arise from good information," that "good democracy is based on making information available to all sectors of society," and that "wisdom is the by-product of effective use of information" (p. 15). Therefore, because the Internet is an information infrastructure, it must be "good and using the Internet will benefit individuals and society in general" (p. 15).
     Chapter 2 (Technology and People) focuses on several theories of technological acceptance and diffusion. Unfortunately, Bruce's presentation is somewhat confusing as he moves from one theory to the next, never quite connecting them into a logical sequence or coherent whole. Two theories are of particular interest to Bruce: the Theory of Diffusion of Innovations and the Theory of Planned Behavior. The Theory of Diffusion of Innovations is an "information-centric view of technology acceptance" in which technology adopters are placed in the information flows of society from which they learn about innovations and "drive innovation adoption decisions" (p. 20). The Theory of Planned Behavior maintains that the "performance of a behavior is a joint function of intentions and perceived behavioral control" (i.e., how much control a person thinks they have) (pp. 22-23). Bruce combines these two theories to form the basis for the Technology Acceptance Model. This model posits that "an individual's acceptance of information technology is based on beliefs, attitudes, intentions, and behaviors" (p. 24). In all these theories and models echoes a recurring theme: "individual perceptions of the innovation or technology are critical" in terms of both its characteristics and its use (pp. 24-25). From these, in turn, Bruce derives a predictive theory of the role personal perceptions play in technology adoption: Personal Innovativeness of Information Technology Adoption (PIITA). Personal innovativeness is defined as "the willingness of an individual to try out any new information technology" (p. 26). In general, the PIITA theory predicts that information technology will be adopted by individuals that have a greater exposure to mass media, rely less on the evaluation of information technology by others, exhibit a greater ability to cope with uncertainty and take risks, and require a less positive perception of an information technology prior to its adoption. Chapter 3 (A Focus on Usings) introduces the User-Centered Paradigm (UCP). The UCP is characteristic of the shift of emphasis from technology to users as the driving force behind technology and research agendas for Internet development [for a dissenting view, see Andrew Dillon's (2003) challenge to the utility of user-centeredness for design guidance]. It entails the "broad acceptance of the user-oriented perspective across a range of disciplines and professional fields," such as business, education, cognitive engineering, and information science (p. 34).
     The UCP's effect on business practices is focused mainly in the management and marketing areas. Marketing experienced a shift from "product-oriented operations" with its focus on "selling the products' features" and customer contact only at the point of sale toward more service-centered business practice ("customer demand orientation") and the development of one-to-one customer relationships (pp. 35-36). For management, the adoption of the UCP caused a shift from "mechanistic, bureaucratic, top-down organizational structures" to "flatter, inclusive, and participative" ones (p. 37). In education, practice shifted from the teacher-centered model where the "teacher is responsible for and makes all the decisions related to the learning environment" to a learner-centered model where the student is "responsible for his or her own learning" and the teacher focuses on "matching learning events to the individual skills, aptitudes, and interests of the individual learner" (pp. 38-39). Cognitive engineering saw the rise of "user-centered design" and human factors that were concerned with applying "scientific knowledge of humans to the design of man-machine interface systems" (p. 44). The UCP had a great effect on Information Science in the "design of information systems" (p. 47). Prior to the UCP being explicitly proposed by Brenda Dervin and M. Nilan in 1986, systems design was dominated by the "physical or system oriented paradigm" (p. 48). The physical paradigm held a positivistic and materialistic view of technology and (passive) human interaction as exemplified by the 1953 Cranfield tests of information retrieval mechanisms. Instead, the UCP focuses on "users rather than systems" by making the perceptions of individual information users the "centerpiece consideration for information service and system design" (pp. 47-48). Bruce briefly touches on the various schools of thought within the user-oriented paradigm, such as the cognitive/self studies approach with its emphasis on an individual's knowledge structures or model of the world [e.g., Belkin (1990)], the cognitive/context studies approach that focuses on "context in explaining variations in information behavior" [e.g., Savolainen (1995) and Dervin's (1999) sensemaking], and the social constructionism/discourse analytic theory with its focus on language, not mental/knowledge constructs, as the primary shaper of the world as a system of intersubjective meanings [e.g., Talja 1996] (pp. 53-54). Drawing from the rich tradition of user-oriented research, Bruce attempts to gain a metatheoretical understanding of the Internet as a phenomenon by combining Dervin's (1996) "micromoments of human usings" with the French philosopher Bruno Latour's (1999) conception of "circulating reference" to form what I term the Metatheory of Circulating Usings (pp. ix, 56, 60). According to Bruce, Latour's concept is designed to bridge "the gap between mind and object" by engaging in a "succession of finely grained transformations that construct and transfer truth about the object" through a chain of "microtranslations" from "matter to form," thereby connecting mind and object (p. 56). The connection works as long as the chain remains unbroken.
 The nature of this chain of "information producing translations" is such that as one moves away from the object, one experiences a "reduction" of the object's "locality, particularity, materiality, multiplicity and continuity," while simultaneously gaining the "amplification" of its "compatibility, standardization, text, calculation, circulation, and relative universality" (p. 57).
     Bruce points out that Dervin is also concerned about how "we look at the world" in terms of "information needs and seeking" (p. 60). She maintains that information scientists traditionally view information seeking and needs in terms of "contexts, users, and systems." Dervin questions whether or not, from a user's point of view, these three "points of interest" even exist. Rather it is the "micromoments of human usings" [emphasis original], and the "world viewings, seekings, and valuings" that comprise them that are real (p. 60). Using his metatheory, Bruce represents the Internet, the "object" of study, as a "chain of transformations made up of the micromoments of human usings" (p. 60). The Internet then is a "composite of usings" that, through research and study, is continuously reduced in complexity while its "essence" and "explanation" are amplified (p. 60). Bruce plans to use the Metatheory of Circulating Usings as an analytical "lens" to "tease out a characterization of the micromoments of Internet usings" from previous research on the Internet, thereby exposing "the user's view of the Internet" (pp. 60-61). In Chapter 4 (Users of the Internet), Bruce presents the research data for the study. He begins with an explanation of the limits of the data, and to a certain extent, the study itself. The perspective is that of the Internet user, with a focus on use, not nonuse, thereby excluding issues such as the digital divide and universal service. The research is limited to Internet users "in modern economies around the world" (p. 60). The data is a synthesis of research from many disciplines, but mainly from those "associated with the information field" with its traditional focus on users, systems, and context rather than usings (p. 70). Bruce then presents an extensive summary of the research results from a massive literature review of available Internet studies. He examines the research for each study group in order of the amount of data available, starting with the most studied group, professional users ("academics, librarians, and teachers"), followed by "the younger generation" ("college students, youths, and young adults"), users of e-government information and e-business services, and ending with the general public (the least studied group) (p. 70). Bruce does a masterful job of condensing and summarizing a vast amount of research data in 49 pages. Although there is too much to recapitulate here, one can get a sense of the results by looking at the areas of data examined for one of the study groups: academic Internet users. There is data on their frequency of use, reasons for nonuse, length of use, specific types of use (e.g., research, teaching, administration), use of discussion lists, use of e-journals, use of Web browsers and search engines, how academics learn to use web tools and services (mainly by self-instruction), factors affecting use, and information seeking habits. Bruce's goal in presenting all this research data is to provide "the foundation for constructs of the Internet that can inform stakeholders who will play a role in determining how the Internet will develop" (p. 129). These constructs are presented in Chapter 5.
     Bruce begins Chapter 5 (The Users' View of the Internet) by pointing out that the Internet not only exists as a physical entity of hardware, software, and networked connectivity, but also as a mental representation or knowledge structure constructed by users based on their usings. These knowledge structures or constructs "allow people to interpret and make sense of things" by functioning as a link between the new unknown thing and known thing(s) (p. 158). The knowledge structures or using constructs are continually evolving as people use the Internet over time, and represent the user's view of the Internet. To capture the users' view of the Internet from the research literature, Bruce uses his Metatheory of Circulating Usings. He recapitulates the theory, casting it more closely to the study of Internet use than previously. Here the reduction component provides a more detailed "understanding of the individual users involved in the micromoment of Internet using" while simultaneously the amplification component increases our understanding of the "generalized construct of the Internet" (p. 158). From this point on, Bruce presents a relatively detailed users' view of the Internet. He starts by examining Internet usings, which are composed of three parts: using space, using literacies, and Internet space. According to Bruce, using space is a using horizon likened to a "sphere of influence," comfortable and intimate, in which an individual interacts with the Internet successfully (p. 164). It is a "composite of individual (professional nonwork) constructs of Internet utility" (p. 165). Using literacies are the groups of skills or tools that an individual must acquire for successful interaction with the Internet. These literacies serve to link the using space with the Internet space. They are usually self-taught and form individual standards of successful or satisfactory usings that can be (and often are) at odds with the standards of the information profession. Internet space is, according to Bruce, a user construct that perceives the Internet as a physical, tangible place separate from using space. Bruce concludes that the user's view of the Internet explains six "principles" (p. 173): "Internet using is proof of concept" and occurs in contexts; using space is created through using frequency; individuals use literacies to explore and utilize Internet space; Internet space "does not require proof of concept, and is often influenced by the perceptions and usings of others"; and "the user's view of the Internet is upbeat and optimistic" (pp. 173-175). He ends with a section describing who the Internet stakeholders are. Bruce defines them as Internet hardware/software developers, professional users practicing their profession in both familiar and transformational ways, and individuals using the Internet "for the tasks and pleasures of everyday life" (p. 176).
  3. Anderson, J.D.; Perez-Carballo, J.: Information retrieval design : principles and options for information description, organization, display, and access in information retrieval databases, digital libraries, catalogs, and indexes (2005) 0.07
    0.06586234 = product of:
      0.08781646 = sum of:
        0.004161717 = product of:
          0.016646868 = sum of:
            0.016646868 = weight(_text_:based in 1833) [ClassicSimilarity], result of:
              0.016646868 = score(doc=1833,freq=4.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.11769507 = fieldWeight in 1833, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1833)
          0.25 = coord(1/4)
        0.028230337 = weight(_text_:term in 1833) [ClassicSimilarity], result of:
          0.028230337 = score(doc=1833,freq=2.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.12888208 = fieldWeight in 1833, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1833)
        0.055424403 = sum of:
          0.039523892 = weight(_text_:assessment in 1833) [ClassicSimilarity], result of:
            0.039523892 = score(doc=1833,freq=2.0), product of:
              0.25917634 = queryWeight, product of:
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.04694356 = queryNorm
              0.15249807 = fieldWeight in 1833, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.01953125 = fieldNorm(doc=1833)
          0.015900511 = weight(_text_:22 in 1833) [ClassicSimilarity], result of:
            0.015900511 = score(doc=1833,freq=2.0), product of:
              0.16438834 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04694356 = queryNorm
              0.09672529 = fieldWeight in 1833, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.01953125 = fieldNorm(doc=1833)
      0.75 = coord(3/4)
    
    Content
    Inhalt: Chapters 2 to 5: Scopes, Domains, and Display Media (pp. 47-102) Chapters 6 to 8: Documents, Analysis, and Indexing (pp. 103-176) Chapters 9 to 10: Exhaustivity and Specificity (pp. 177-196) Chapters 11 to 13: Displayed/Nondisplayed Indexes, Syntax, and Vocabulary Management (pp. 197-364) Chapters 14 to 16: Surrogation, Locators, and Surrogate Displays (pp. 365-390) Chapters 17 and 18: Arrangement and Size of Displayed Indexes (pp. 391-446) Chapters 19 to 21: Search Interface, Record Format, and Full-Text Display (pp. 447-536) Chapter 22: Implementation and Evaluation (pp. 537-541)
    Footnote
     Rez. in JASIST 57(2006) no.10, S.1412-1413 (R. W. White): "Information Retrieval Design is a textbook that aims to foster the intelligent user-centered design of databases for Information Retrieval (IR). The book outlines a comprehensive set of 20 factors, chosen based on prior research and the authors' experiences, that need to be considered during the design process. The authors provide designers with information on those factors to help optimize decision making. The book does not cover user-needs assessment, implementation of IR databases or retrieval systems, testing, or evaluation. Most textbooks in IR do not offer a substantive walkthrough of the design factors that need to be considered when developing IR databases. Instead, they focus on issues such as the implementation of data structures, the explanation of search algorithms, and the role of human-machine interaction in the search process. The book touches on all three, but its focus is on designing databases that can be searched effectively, not the tools to search them. This is an important distinction: despite its title, this book does not describe how to build retrieval systems. Professor Anderson utilizes his wealth of experience in cataloging and classification to bring a unique perspective on IR database design that may be useful for novices, for developers seeking to make sense of the design process, and for students as a text to supplement classroom tuition. The foreword and preface, by Jessica Milstead and James Anderson, respectively, are engaging and worthwhile reading. It is astounding that it has taken some 20 years for anyone to continue the work of Milstead and write as extensively as Anderson does about such an important issue as IR database design. The remainder of the book is divided into two parts: Introduction and Background Issues and Design Decisions. Part 1 is a reasonable introduction and includes a glossary of the terminology that the authors use in the book. It is very helpful to have these definitions early on, but the subject descriptors in the right margin are distracting and do not serve their purpose as access points to the text. The terminology is useful to have, as the authors' definitions of concepts do not fit exactly with what is traditionally accepted in IR. For example, they use the term 'message' to refer to what would normally be called "document" or "information object," and do not do a good job of distinguishing between "messages" and "documentary units". Part 2 describes components and attributes of IR databases to help designers make design choices. The book provides them with information about the potential ramifications of their decisions and advocates a user-oriented approach to making them. Chapters are arranged in a seemingly sensible order based around these factors, and the authors remind us of the importance of integrating them. The authors are skilled at selecting the important factors in the development of seemingly complex entities, such as IR databases; however, the integration of these factors, or the interaction between them, is not handled as well as perhaps it should be. Factors are presented in the order in which the authors feel they should be addressed, but there is no chapter describing how the factors interact. The authors miss an opportunity at the beginning of Part 2 where they could illustrate, using a figure, the interactions between the 20 factors they list in a way that is not possible with the linear structure of the book.
  4. Spitzer, K.L.; Eisenberg, M.B.; Lowe, C.A.: Information literacy : essential skills for the information age (2004) 0.06
    0.06028825 = product of:
      0.08038434 = sum of:
        0.004161717 = product of:
          0.016646868 = sum of:
            0.016646868 = weight(_text_:based in 3686) [ClassicSimilarity], result of:
              0.016646868 = score(doc=3686,freq=4.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.11769507 = fieldWeight in 3686, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=3686)
          0.25 = coord(1/4)
        0.056460675 = weight(_text_:term in 3686) [ClassicSimilarity], result of:
          0.056460675 = score(doc=3686,freq=8.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.25776416 = fieldWeight in 3686, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.01953125 = fieldNorm(doc=3686)
        0.019761946 = product of:
          0.039523892 = sum of:
            0.039523892 = weight(_text_:assessment in 3686) [ClassicSimilarity], result of:
              0.039523892 = score(doc=3686,freq=2.0), product of:
                0.25917634 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.04694356 = queryNorm
                0.15249807 = fieldWeight in 3686, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=3686)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Footnote
     Rez. in: JASIST 56(2005) no.9, S.1008-1009 (D.E. Agosto): "This second edition of Information Literacy: Essential Skills for the Information Age remains true to the first edition (published in 1998). The main changes involved the updating of educational standards discussed in the text, as well as the updating of the history of the term. Overall, this book serves as a detailed definition of the concept of information literacy and focuses heavily on presenting and discussing related state and national educational standards and policies. It is divided into 10 chapters, many of which contain examples of U.S. and international information literacy programs in a variety of educational settings. Chapter one offers a detailed definition of information literacy, as well as tracing the derivation of the term. The term was first introduced in 1974 by Paul Zurkowski in a proposal to the National Commission on Libraries and Information Science. Fifteen years later a special ALA committee derived the now generally accepted definition: "To be information literate requires a new set of skills. These include how to locate and use information needed for problem-solving and decision-making efficiently and effectively" (American Library Association, 1989, p. 11). Definitions for a number of related concepts are also offered, including definitions for visual literacy, media literacy, computer literacy, digital literacy, and network literacy. Although the authors do define these different subtypes of information literacy, they sidestep the argument over the definition of the more general term literacy, consequently avoiding the controversy over national and world illiteracy rates. Regardless of the actual rate of U.S. literacy (which varies radically with each different definition of "literacy"), basic literacy, i.e., basic reading and writing skills, still presents a formidable educational goal in the U.S. In fact, more than 5 million high-schoolers do not read well enough to understand their textbooks or other material written for their grade level. According to the National Assessment of Educational Progress, 26% of these students cannot read material many of us would deem essential for daily living, such as road signs, newspapers, and bus schedules. (Hock & Deshler, 2003, p. 27)
     Chapter two delves more deeply into the historical evolution of the concept of information literacy, and chapter three summarizes selected information literacy research. Researchers generally agree that information literacy is a process, rather than a set of skills to be learned (despite the unfortunate use of the word "skills" in the ALA definition). Researchers also generally agree that information literacy should be taught across the curriculum, as opposed to limiting it to the library or any other single educational context or discipline. Chapter four discusses economic ties to information literacy, suggesting that countries with information literate populations will better succeed economically in the current and future information-based world economy. A recent report issued by the Basic Education Coalition, an umbrella group of 19 private and nongovernmental development and relief organizations, supports this claim based on a meta-analysis of large bodies of data collected by the World Bank, the United Nations, and other international organizations. Teach a Child, Transform a Nation (Basic Education Coalition, 2004) concluded that no modern nation has achieved sustained economic growth without providing near universal basic education for its citizens. It also concluded that countries that improve their literacy rates by 20 to 30% see subsequent GDP increases of 8 to 16%. In light of the Coalition's finding that one fourth of adults in the world's developing countries are unable to read or write, the goal of worldwide information literacy seems sadly unattainable for the present, a present in which even universal basic literacy is still a pipedream. Chapter five discusses information literacy across the curriculum as an interpretation of national standards. The many examples of school and university information literacy programs, standards, and policies detailed throughout the volume would be very useful to educators and administrators engaging in program planning and review. For example, the authors explain that the economics standards included in the Goals 2000: Educate America Act comprise 20 benchmark content standards. They quote a two-pronged grade 12 benchmark that first entails students being able to discuss how a high school senior's working 20 hours a week while attending school might result in a reduced overall lifetime income, and second requires students to be able to describe how increasing the federal minimum wage might result in reduced income for some workers. The authors tie this benchmark to information literacy as follows: "Economic decision making requires complex thinking skills because the variables involved are interdependent.
  5. Jacquemin, C.: Spotting and discovering terms through natural language processing (2001) 0.04
    0.044085447 = product of:
      0.08817089 = sum of:
        0.008323434 = product of:
          0.033293735 = sum of:
            0.033293735 = weight(_text_:based in 119) [ClassicSimilarity], result of:
              0.033293735 = score(doc=119,freq=4.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.23539014 = fieldWeight in 119, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=119)
          0.25 = coord(1/4)
        0.07984746 = weight(_text_:term in 119) [ClassicSimilarity], result of:
          0.07984746 = score(doc=119,freq=4.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.3645336 = fieldWeight in 119, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.0390625 = fieldNorm(doc=119)
      0.5 = coord(2/4)
    
    Abstract
    In this book Christian Jacquemin shows how the power of natural language processing (NLP) can be used to advance text indexing and information retrieval (IR). Jacquemin's novel tool is FASTR, a parser that normalizes terms and recognizes term variants. Since there are more meanings in a language than there are words, FASTR uses a metagrammar composed of shallow linguistic transformations that describe the morphological, syntactic, semantic, and pragmatic variations of words and terms. The acquired parsed terms can then be applied for precise retrieval and assembly of information. The use of a corpus-based unification grammar to define, recognize, and combine term variants from their base forms allows for intelligent information access to, or "linguistic data tuning" of, heterogeneous texts. FASTR can be used to do automatic controlled indexing, to carry out content-based Web searches through conceptually related alternative query formulations, to abstract scientific and technical extracts, and even to translate and collect terms from multilingual material. Jacquemin provides a comprehensive account of the method and implementation of this innovative retrieval technique for text processing.
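     FASTR itself is built on a unification-based metagrammar, which is far beyond a few lines; the sketch below only illustrates the general idea of conflating term variants to a normalized form (the rule and examples are invented for illustration and are not taken from Jacquemin's system):

       # Illustrative only: treat two phrases as variants of the same term when their
       # bags of lowercased content words match. FASTR's metagrammar also covers
       # morphological, semantic and pragmatic variation, which this toy check does not.
       STOPWORDS = {"of", "the", "a", "an", "for", "in"}

       def content_words(phrase):
           return frozenset(w for w in phrase.lower().split() if w not in STOPWORDS)

       def same_term(candidate, controlled_term):
           return content_words(candidate) == content_words(controlled_term)

       print(same_term("retrieval of information", "information retrieval"))   # True
       print(same_term("term variant recognition", "information retrieval"))   # False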
  6. Stacey, Alison; Stacey, Adrian: Effective information retrieval from the Internet : an advanced user's guide (2004) 0.04
    0.04147133 = product of:
      0.08294266 = sum of:
        0.0047084456 = product of:
          0.018833783 = sum of:
            0.018833783 = weight(_text_:based in 4497) [ClassicSimilarity], result of:
              0.018833783 = score(doc=4497,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.13315678 = fieldWeight in 4497, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4497)
          0.25 = coord(1/4)
        0.07823421 = weight(_text_:term in 4497) [ClassicSimilarity], result of:
          0.07823421 = score(doc=4497,freq=6.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.35716853 = fieldWeight in 4497, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.03125 = fieldNorm(doc=4497)
      0.5 = coord(2/4)
    
    Content
     Key Features - Importantly, the book enables readers to develop strategies which will continue to be useful despite the rapidly-evolving state of the Internet and Internet technologies - it is not about technological `tricks'. - Enables readers to be aware of and compensate for bias and errors which are ubiquitous on the Internet. - Provides contemporary information on the deficiencies in web skills of novice users as well as practical techniques for teaching such users. The Authors Dr Alison Stacey works at the Learning Resource Centre, Cambridge Regional College. Dr Adrian Stacey, formerly based at Cambridge University, is a software programmer. Readership The book is aimed at a wide range of librarians and other information professionals who need to retrieve information from the Internet efficiently, to evaluate their confidence in the information they retrieve and/or to train others to use the Internet. It is primarily aimed at intermediate to advanced users of the Internet. Contents Fundamentals of information retrieval from the Internet - why learn web searching technique; types of information requests; patterns for information retrieval; leveraging the technology: Search term choice: pinpointing information on the web - why choose queries carefully; making search terms work together; how to pick search terms; finding the 'unfindable': Bias on the Internet - importance of bias; sources of bias; user-generated bias: selecting information with which you already agree; assessing and compensating for bias; case studies: Query reformulation and longer term strategies - how to interact with your search engine; foraging for information; long term information retrieval: using the Internet to find trends; automating searches: how to make your machine do your work: Assessing the quality of results - how to assess and ensure quality: The novice user and teaching internet skills - novice users and their problems with the web; case study: research in a college library; interpreting 'second hand' web information.
  7. Rockman, I.F.: Strengthening connections between information literacy, general education, and assessment efforts (2002) 0.04
    0.0385312 = product of:
      0.0770624 = sum of:
        0.009988121 = product of:
          0.039952483 = sum of:
            0.039952483 = weight(_text_:based in 45) [ClassicSimilarity], result of:
              0.039952483 = score(doc=45,freq=4.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.28246817 = fieldWeight in 45, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.046875 = fieldNorm(doc=45)
          0.25 = coord(1/4)
        0.06707428 = product of:
          0.13414855 = sum of:
            0.13414855 = weight(_text_:assessment in 45) [ClassicSimilarity], result of:
              0.13414855 = score(doc=45,freq=4.0), product of:
                0.25917634 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.04694356 = queryNorm
                0.51759565 = fieldWeight in 45, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.046875 = fieldNorm(doc=45)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Academic librarians have a long and rich tradition of collaborating with discipline-based faculty members to advance the mission and goals of the library. Included in this tradition is the area of information literacy, a foundation skill for academic success and a key component of independent, lifelong learning. With the rise of the general education reform movement on many campuses resurfacing in the last decade, libraries have been able to move beyond course-integrated library instruction into a formal planning role for general education programmatic offerings. This article shows the value of 1. strategic alliances, developed over time, to establish information literacy as a foundation for student learning; 2. strong partnerships within a multicampus higher education system to promote and advance information literacy efforts; and 3. assessment as a key component of outcomes-based information literacy activities.
  8. Nuovo soggettario : guida al sistema italiano di indicizzazione per soggetto, prototipo del thesaurus (2007) 0.03
    0.03457108 = product of:
      0.06914216 = sum of:
        0.005264202 = product of:
          0.021056809 = sum of:
            0.021056809 = weight(_text_:based in 664) [ClassicSimilarity], result of:
              0.021056809 = score(doc=664,freq=10.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.1488738 = fieldWeight in 664, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.015625 = fieldNorm(doc=664)
          0.25 = coord(1/4)
        0.06387796 = weight(_text_:term in 664) [ClassicSimilarity], result of:
          0.06387796 = score(doc=664,freq=16.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.29162687 = fieldWeight in 664, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.015625 = fieldNorm(doc=664)
      0.5 = coord(2/4)
    
    Footnote
     Rez. in: Knowledge organization 34(2007) no.1, S.58-60 (P. Buizza): "This Nuovo soggettario is the first sign of subject indexing renewal in Italy. Italian subject indexing has been based until now on Soggettario per i cataloghi delle biblioteche italiane (Firenze, 1956), a list of preferred terms and see references, with suitable hierarchical subdivisions and cross references, derived from the subject catalogue of the National Library in Florence (BNCF). New headings later used in Bibliografia nazionale italiana (BNI) were added without references, nor indeed with any real maintenance. Systematic instructions on how to combine the terms are lacking: the indexer using this instrument is obliged to infer the order of terms absent from the lists by consulting analogous entries. Italian libraries are suffering from the limits of this subject catalogue: vocabulary is inadequate, obsolete and inconsistent, the syndetic structure incomplete and inaccurate, and the syntax ill-defined, poorly explained and unable to reflect complex subjects. In the nineties, the Subject Indexing Research Group (Gruppo di ricerca sull'indicizzazione per soggetto, GRIS) of the AIB (Italian Library Association) developed the indexing theory and some principles of PRECIS and drew up guidelines based on consistent principles for vocabulary, semantic relationships and subject string construction, the latter according to role syntax (Guida 1997). In overhauling the Soggettario, the National Library in Florence aimed at a comprehensive indexing system. A report on the method and evolution of the work has been published in Knowledge Organization (Lucarelli 2005), while the feasibility study is available in Italian (Per un nuovo Soggettario 2002). Any usable terms from the old Soggettario will be transferred to the new system, while taking into consideration international norms and interlinguistic compatibility, as well as applications outside the immediate library context. The terms will be accessible via a suitable OPAC operating on the most advanced software.
     The guide Nuovo soggettario was presented on February 8, 2007 at a one-day seminar in the Palazzo Vecchio, Florence, in front of some 500 spellbound people. The Nuovo soggettario comes in two parts: the guide in book form and an accompanying CD-ROM, by way of which a prototype of the thesaurus may be accessed on the Internet. In the former, rules are stated; the latter contains a pdf version of the guide and the first installment of the controlled vocabulary, which is to be further enriched and refined. Syntactic instructions (general application guidelines, as well as special annotations of particular terms) and the compiled subject strings file have yet to be added. The essentials of the new system are: 1) an analytic-synthetic approach, 2) use of terms (units of controlled vocabulary) and subject strings (which represent subjects by combining terms in linear order to form syntactic relationships), instead of main headings and subdivisions, 3) specificity of terms and strings, with a view to the co-extension of subject string and subject matter and 4) a clear distinction between semantic and syntactic relationships, with full control of them both. Basic features of the vocabulary include the uniformity and univocality of terms and thesaural management of a priori (semantic) relationships. Starting from its definition, each term can be categorially analyzed: four macro-categories are represented (agents, action, things, time), for which there are subcategories called facets (e.g., for actions: activities, disciplines, processes), which in turn have sub-facets. Morphological instructions conform to national and international standards, including BS 8723, ANSI/NISO Z39.19 and the IFLA draft of Guidelines for multilingual thesauri, even for syntactic factorization. Different kinds of semantic relationships are represented thoroughly, and particular attention is paid to poly-hierarchies, which are used only in moderation: both top terms must actually be relevant. Node labels are used to specify the principle of division applied. Instance relationships are also used.
     An entry is structured so as to present all the essential elements of the indexing system. For each term are given: category, facet, related terms, Dewey interdisciplinary class number and, if necessary, definition or scope notes. Sources used are referenced (an appendix in the book lists those used in the current work). Historical notes indicate whenever a change of term has occurred, thus smoothing the transition from the old lists. In chapter 5, the longest one, detailed instructions with practical examples show how to create entries and how to relate terms; upper relationships must always be complete, right up to the top term, whereas hierarchies of related terms not yet fully developed may remain unfinished. Subject string construction consists in a double operation: analysis and synthesis. The former is the analysis of logical functions performed by single concepts in the definition of the subject (e.g., transitive actions, object, agent, etc.) or in syntactic relationships (transitive relationships and belonging relationship), so that each term for those concepts is assigned its role (e.g., key concept, transitive element, agent, instrument, etc.) in the subject string, where the core is distinct from the complementary roles (e.g., place, time, form, etc.). Synthesis is based on a scheme of nuclear and complementary roles, and citation order follows agreed-upon principles of one-to-one relationships and logical dependence. There is no standard citation order based on facets, in a categorial logic, but a flexible one, although thorough. For example, it is possible for a time term (subdivision) to precede an action term, when the former is related to the latter as the object of action: "Arazzi - Sec. 16.-17. - Restauro" [Tapestry - 16th-17th century - Restoration] (p. 126). So, even with more complex subjects, it is possible to produce perfectly readable strings covering the whole of the subject matter without splitting it into two incomplete and complementary headings. To this end, some unusual connectives are adopted, giving the strings a more discursive style.
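     A rough sketch of the entry structure and of subject-string assembly described above, using the reviewer's tapestry example (the field names and role labels are simplified assumptions, not the Nuovo soggettario's actual data model):

       # Illustrative only: a simplified controlled-term record (category, facet, related
       # terms, Dewey class, scope note) and a subject string built by citing role-assigned
       # terms in the agreed citation order.
       from dataclasses import dataclass, field

       @dataclass
       class Term:
           label: str
           category: str                 # one of the four macro-categories
           facet: str
           dewey: str = ""
           related: list = field(default_factory=list)
           scope_note: str = ""

       def subject_string(roles):
           """roles: (role, Term) pairs already placed in citation order."""
           return " - ".join(t.label for _, t in roles)

       arazzi   = Term("Arazzi", category="things", facet="objects")
       periodo  = Term("Sec. 16.-17.", category="time", facet="periods")
       restauro = Term("Restauro", category="actions", facet="activities")

       print(subject_string([("key concept", arazzi),
                             ("time", periodo),
                             ("transitive action", restauro)]))
       # Arazzi - Sec. 16.-17. - Restauro   (cf. the example cited from p. 126)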
     Thesaurus software is based on AgroVoc (http://www.fao.org/aims/ag_intro.htm) provided by the FAO, but in modified form. Many searching options and contextualization within the full hierarchies are possible, so that the choice of morphology and syntax of terms and strings is made easier by the complete overview of semantic relationships. New controlled terms will be available soon, thanks to the work in progress - there are now 13,000 terms, of which 40 percent are non-preferred. In three months, free Internet access by CD-ROM will cease and a subscription will be needed. The digital version of the old Soggettario and the corresponding unstructured lists of headings adopted in 1956-1985 are accessible together with the thesaurus, so that the whole vocabulary, old and new, will be at the fingertips of the indexer, who is forced to work with both tools during this transition period. In the future, it will be possible to integrate the thesaurus into library OPACs. The two parts form a very consistent and detailed resource. The guide is filled with examples; the accurate, clearly-expressed and consistent instructions are further enhanced by good use of fonts and type size, facilitating reading. The thesaurus is simple and quick to use, very rich, albeit only a prototype; see, for instance, a list of DDC numbers and related terms with their category and facet, and then entries, hierarchies and so on, and the capacity of the structure to show organized knowledge. The excellent outcome of a demanding experiment, the guide ushers in a new era of subject indexing in Italy and is highly recommended. The new method has been designed to be easily teachable to new and experienced indexers.
    Now BNI is beginning to use the new language, pointing the way for the adoption of the Nuovo soggettario in Italian libraries: a difficult challenge whose success is not assured. To name only one issue: covering all fields of study requires particular care in treating terms with different specialized meanings; cooperation with other libraries and institutions is foreseen. At the same time, efforts are being made to assure the system's interoperability outside the library world. It is clear that a great commitment is required. "Too complex a system!" say the naysayers. "Only at the beginning," the proponents reply. The new system goes against the mainstream: against the imitation of the easy way offered by search engines - though we know that they, too, must enrich their devices to improve quality, repeating precisely the work on semantic and syntactic relationships that leads formal expressions to the meanings they are intended to communicate - and also against the research aimed at creating automated devices that support human work in order to simplify cataloguing. Here AI is not involved, but automation is widely used to facilitate and support the conscious work of indexers guided by rules that are as clear as possible. The advantage of the Nuovo soggettario is its combination of a thesaurus (a much-appreciated tool used across the world) with the equally widespread technique of subject-string construction, which is to say: the rational and predictable combination of the terms used. The appearance of this original, unparalleled working model may well be a great occasion in the international development of indexing, as, on the one hand, the Nuovo soggettario uses a recognized tool (the thesaurus) and, on the other, by permitting both pre-coordination and post-coordination, it attempts to overcome the fragmentation of increasingly complex and specialized subjects into isolated, single-term descriptors. This is a serious proposition that merits consideration from both theoretical and practical points of view - and outside Italy, too."
  9. Jarke, M.; Lenzerini, M.; Vassiliou, Y.; Vassiliadis, P.: Fundamentals of data warehousing (2003) 0.03
    0.031786613 = product of:
      0.06357323 = sum of:
        0.00823978 = product of:
          0.03295912 = sum of:
            0.03295912 = weight(_text_:based in 1304) [ClassicSimilarity], result of:
              0.03295912 = score(doc=1304,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.23302436 = fieldWeight in 1304, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1304)
          0.25 = coord(1/4)
        0.055333447 = product of:
          0.11066689 = sum of:
            0.11066689 = weight(_text_:assessment in 1304) [ClassicSimilarity], result of:
              0.11066689 = score(doc=1304,freq=2.0), product of:
                0.25917634 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.04694356 = queryNorm
                0.4269946 = fieldWeight in 1304, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1304)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Data warehousing has captured the attention of practitioners and researchers alike. But the design and optimization of data warehouses remains an art rather than a science. This book presents the first comparative review of the state of the art and best current practice in data warehousing. It covers source and data integration, multidimensional aggregation, query optimization, update propagation, metadata management, quality assessment, and design optimization. Also, based on results of the European DWQ project, it offers a conceptual framework by which the architecture and quality of data warehousing efforts can be assessed and improved using enriched metadata management combined with advanced techniques from databases, business modeling, and artificial intelligence.
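    To give a concrete flavour of the multidimensional aggregation the abstract mentions, here is a minimal Python sketch that rolls up a toy star-schema fact table along chosen dimensions; the data and attribute names are invented and have no connection to the book or the DWQ project.

    from collections import defaultdict

    # Toy fact table: dimension attributes plus one measure per row.
    sales = [
        {"product": "laptop", "region": "EU", "year": 2003, "amount": 1200},
        {"product": "laptop", "region": "US", "year": 2003, "amount": 900},
        {"product": "phone",  "region": "EU", "year": 2003, "amount": 300},
    ]

    def rollup(facts, dims, measure="amount"):
        """Sum the measure over the given dimension attributes."""
        totals = defaultdict(float)
        for row in facts:
            totals[tuple(row[d] for d in dims)] += row[measure]
        return dict(totals)

    print(rollup(sales, ["region"]))             # {('EU',): 1500.0, ('US',): 900.0}
    print(rollup(sales, ["region", "product"]))  # finer-grained cube cells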
  10. Sprachtechnologie, mobile Kommunikation und linguistische Ressourcen : Beiträge zur GLDV Tagung 2005 in Bonn (2005) 0.03
    0.02947553 = product of:
      0.05895106 = sum of:
        0.0049940604 = product of:
          0.019976242 = sum of:
            0.019976242 = weight(_text_:based in 3578) [ClassicSimilarity], result of:
              0.019976242 = score(doc=3578,freq=4.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.14123408 = fieldWeight in 3578, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=3578)
          0.25 = coord(1/4)
        0.053957 = weight(_text_:frequency in 3578) [ClassicSimilarity], result of:
          0.053957 = score(doc=3578,freq=2.0), product of:
            0.27643865 = queryWeight, product of:
              5.888745 = idf(docFreq=332, maxDocs=44218)
              0.04694356 = queryNorm
            0.19518617 = fieldWeight in 3578, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.888745 = idf(docFreq=332, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3578)
      0.5 = coord(2/4)
    
    Content
    INHALT: Chris Biemann/Rainer Osswald: Automatische Erweiterung eines semantikbasierten Lexikons durch Bootstrapping auf großen Korpora - Ernesto William De Luca/Andreas Nürnberger: Supporting Mobile Web Search by Ontology-based Categorization - Rüdiger Gleim: HyGraph - Ein Framework zur Extraktion, Repräsentation und Analyse webbasierter Hypertextstrukturen - Felicitas Haas/Bernhard Schröder: Freges Grundgesetze der Arithmetik: Dokumentbaum und Formelwald - Ulrich Held/Andre Blessing/Bettina Säuberlich/Jürgen Sienel/Horst Rößler/Dieter Kopp: A personalized multimodal news service - Jürgen Hermes/Christoph Benden: Fusion von Annotation und Präprozessierung als Vorschlag zur Behebung des Rohtextproblems - Sonja Hüwel/Britta Wrede/Gerhard Sagerer: Semantisches Parsing mit Frames für robuste multimodale Mensch-Maschine-Kommunikation - Brigitte Krenn/Stefan Evert: Separating the wheat from the chaff - Corpus-driven evaluation of statistical association measures for collocation extraction - Jörn Kreutel: An application-centered Perspective on Multimodal Dialogue Systems - Jonas Kuhn: An Architecture for Parallel Corpusbased Grammar Learning - Thomas Mandl/Rene Schneider/Pia Schnetzler/Christa Womser-Hacker: Evaluierung von Systemen für die Eigennamenerkennung im crosslingualen Information Retrieval - Alexander Mehler/Matthias Dehmer/Rüdiger Gleim: Zur Automatischen Klassifikation von Webgenres - Charlotte Merz/Martin Volk: Requirements for a Parallel Treebank Search Tool - Sally Y.K. Mok: Multilingual Text Retrieval on the Web: The Case of a Cantonese-Dagaare-English Trilingual e-Lexicon -
    Darja Mönke: Ein Parser für natürlichsprachlich formulierte mathematische Beweise - Martin Müller: Ontologien für mathematische Beweistexte - Moritz Neugebauer: The status of functional phonological classification in statistical speech recognition - Uwe Quasthoff: Kookkurrenzanalyse und korpusbasierte Sachgruppenlexikographie - Reinhard Rapp: On the Relationship between Word Frequency and Word Familiarity - Ulrich Schade/Miloslaw Frey/Sebastian Becker: Computerlinguistische Anwendungen zur Verbesserung der Kommunikation zwischen militärischen Einheiten und deren Führungsinformationssystemen - David Schlangen/Thomas Hanneforth/Manfred Stede: Weaving the Semantic Web: Extracting and Representing the Content of Pathology Reports - Thomas Schmidt: Modellbildung und Modellierungsparadigmen in der computergestützten Korpuslinguistik - Sabine Schröder/Martina Ziefle: Semantic transparency of cellular phone menus - Thorsten Trippel/Thierry Declerck/Ulrich Held: Standardisierung von Sprachressourcen: Der aktuelle Stand - Charlotte Wollermann: Evaluation der audiovisuellen Kongruenz bei der multimodalen Sprachsynthese - Claudia Kunze/Lothar Lemnitzer: Anwendungen des GermaNet II: Einleitung - Claudia Kunze/Lothar Lemnitzer: Die Zukunft der Wortnetze oder die Wortnetze der Zukunft - ein Roadmap-Beitrag -
    Karel Pala: The Balkanet Experience - Peter M. Kruse/Andre Nauloks/Dietmar Rösner/Manuela Kunze: Clever Search: A WordNet Based Wrapper for Internet Search Engines - Rosmary Stegmann/Wolfgang Woerndl: Using GermaNet to Generate Individual Customer Profiles - Ingo Glöckner/Sven Hartrumpf/Rainer Osswald: From GermaNet Glosses to Formal Meaning Postulates -Aljoscha Burchardt/ Katrin Erk/Anette Frank: A WordNet Detour to FrameNet - Daniel Naber: OpenThesaurus: ein offenes deutsches Wortnetz - Anke Holler/Wolfgang Grund/Heinrich Petith: Maschinelle Generierung assoziativer Termnetze für die Dokumentensuche - Stefan Bordag/Hans Friedrich Witschel/Thomas Wittig: Evaluation of Lexical Acquisition Algorithms - Iryna Gurevych/Hendrik Niederlich: Computing Semantic Relatedness of GermaNet Concepts - Roland Hausser: Turn-taking als kognitive Grundmechanik der Datenbanksemantik - Rodolfo Delmonte: Parsing Overlaps - Melanie Twiggs: Behandlung des Passivs im Rahmen der Datenbanksemantik- Sandra Hohmann: Intention und Interaktion - Anmerkungen zur Relevanz der Benutzerabsicht - Doris Helfenbein: Verwendung von Pronomina im Sprecher- und Hörmodus - Bayan Abu Shawar/Eric Atwell: Modelling turn-taking in a corpus-trained chatbot - Barbara März: Die Koordination in der Datenbanksemantik - Jens Edlund/Mattias Heldner/Joakim Gustafsson: Utterance segmentation and turn-taking in spoken dialogue systems - Ekaterina Buyko: Numerische Repräsentation von Textkorpora für Wissensextraktion - Bernhard Fisseni: ProofML - eine Annotationssprache für natürlichsprachliche mathematische Beweise - Iryna Schenk: Auflösung der Pronomen mit Nicht-NP-Antezedenten in spontansprachlichen Dialogen - Stephan Schwiebert: Entwurf eines agentengestützten Systems zur Paradigmenbildung - Ingmar Steiner: On the analysis of speech rhythm through acoustic parameters - Hans Friedrich Witschel: Text, Wörter, Morpheme - Möglichkeiten einer automatischen Terminologie-Extraktion.
  11. Information science in transition (2009) 0.03
    0.029342528 = product of:
      0.03912337 = sum of:
        0.0029427784 = product of:
          0.011771114 = sum of:
            0.011771114 = weight(_text_:based in 634) [ClassicSimilarity], result of:
              0.011771114 = score(doc=634,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.083222985 = fieldWeight in 634, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=634)
          0.25 = coord(1/4)
        0.028230337 = weight(_text_:term in 634) [ClassicSimilarity], result of:
          0.028230337 = score(doc=634,freq=2.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.12888208 = fieldWeight in 634, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.01953125 = fieldNorm(doc=634)
        0.007950256 = product of:
          0.015900511 = sum of:
            0.015900511 = weight(_text_:22 in 634) [ClassicSimilarity], result of:
              0.015900511 = score(doc=634,freq=2.0), product of:
                0.16438834 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04694356 = queryNorm
                0.09672529 = fieldWeight in 634, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=634)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    Are we at a turning point in digital information? The expansion of the internet was unprecedented; search engines dealt with it in the only way possible - scan as much as they could and throw it all into an inverted index. But now search engines are beginning to experiment with deep web searching and attention to taxonomies, and the semantic web is demonstrating how much more can be done with a computer if you give it knowledge. What does this mean for the skills and focus of the information science (or sciences) community? Should information designers and information managers work more closely to create computer-based information systems for more effective retrieval? Will information science become part of computer science and does the rise of the term informatics demonstrate the convergence of information science and information technology - a convergence that must surely develop in the years to come? Issues and questions such as these are reflected in this monograph, a collection of essays written by some of the most pre-eminent contributors to the discipline. These peer-reviewed perspectives capture insights into advances in, and facets of, information science, a profession in transition. With an introduction from Jack Meadows, the key papers are: Meeting the challenge, by Brian Vickery; The developing foundations of information science, by David Bawden; The last 50 years of knowledge organization, by Stella G Dextre Clarke; On the history of evaluation in IR, by Stephen Robertson; The information user, by Tom Wilson; The sociological turn in information science, by Blaise Cronin; From chemical documentation to chemoinformatics, by Peter Willett; Health informatics, by Peter A Bath; Social informatics and sociotechnical research, by Elisabeth Davenport; The evolution of visual information retrieval, by Peter Enser; Information policies, by Elizabeth Orna; Disparity in professional qualifications and progress in information handling, by Barry Mahon; Electronic scholarly publishing and open access, by Charles Oppenheim; Social software: fun and games, or business tools? by Wendy A Warr; and, Bibliometrics to webometrics, by Mike Thelwall. This monograph previously appeared as a special issue of the "Journal of Information Science", published by Sage. Reproduced here as a monograph, this important collection of perspectives on a skill set in transition from a prestigious line-up of authors will now be available to information studies students worldwide and to all those working in the information science field.
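    As a purely illustrative aside on the "inverted index" mentioned above, the following Python sketch builds a bare-bones index over three toy documents and answers a Boolean AND query; real engines add tokenization, ranking, compression and much more.

    from collections import defaultdict

    docs = {
        1: "information science in transition",
        2: "the semantic web gives the computer knowledge",
        3: "search engines scan the web into an inverted index",
    }

    index = defaultdict(set)          # term -> set of document ids (postings)
    for doc_id, text in docs.items():
        for token in text.lower().split():
            index[token].add(doc_id)

    def search(*terms):
        """Boolean AND search over the posting sets."""
        postings = [index.get(t.lower(), set()) for t in terms]
        return set.intersection(*postings) if postings else set()

    print(search("web", "inverted"))  # {3}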
    Date
    22. 2.2013 11:35:35
  12. Henderson, L.; Tallman, J.I.: Stimulated recall and mental models : tools for teaching and learning computer information literacy (2006) 0.03
    0.027613988 = product of:
      0.055227976 = sum of:
        0.0029427784 = product of:
          0.011771114 = sum of:
            0.011771114 = weight(_text_:based in 1717) [ClassicSimilarity], result of:
              0.011771114 = score(doc=1717,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.083222985 = fieldWeight in 1717, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1717)
          0.25 = coord(1/4)
        0.052285198 = product of:
          0.104570396 = sum of:
            0.104570396 = weight(_text_:assessment in 1717) [ClassicSimilarity], result of:
              0.104570396 = score(doc=1717,freq=14.0), product of:
                0.25917634 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.04694356 = queryNorm
                0.403472 = fieldWeight in 1717, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1717)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Footnote
    Rez. in: JASIST 58(2007) no.3, S.456-457 (D. Cook): "In February 2006, the Educational Testing Service (ETS) announced the release of the brand new core academic level of its Information and Communication Technology (ICT) Literacy Assessment. The core assessment is designed to assess the information literacy of high school students transitioning to higher education. Many of us already know ETS for some of its other assessment tools like the SAT and GRE. But ETS's latest test comes on the heels of its 2005 release of an advanced level of its ICT Literacy Assessment for college students progressing to their junior and senior year of undergraduate studies. Neither test, ETS insists, is designed to be an entrance examination. Rather, they are packaged and promoted as diagnostic assessments. We are in the grips of the Information Age where information literacy is a prized skill. Knowledge is power. However, information literacy is not merely creating flawless documents or slick PowerPoint presentations on a home PC. It is more than being able to send photos and text messages via cell phone. Instead, information literacy is gauged by one's ability to skillfully seek, access, and retrieve valid information from credible and reliable sources and to use that information appropriately. It involves strong online search strategies and advanced critical thinking skills. And, although it is not clear whether they seized the opportunity or inherited it by default, librarians are in the vanguard of teaching information literacy to the next generation of would-be power brokers.
    This book is evidence that Henderson and Tallman were meticulous in following their established protocols and especially in their record keeping while conducting their research. There are, however, a few issues in the study's framework and methodology that are worth noting. First, although the research was conducted in two different countries - the United States and Australia - it is not clear from the writing if the librarian-pupil pairs of each country hailed from the same schools (making the population opportunistic) or if the sampling was indeed more randomly selected. Readers do know, though, that the librarians were free to select the students they tutored from within their respective schools. Thus, there appears to be no randomness. Second, "[t]he data collection tools and questionnaires were grounded in a [single] pilot study with a [single] teacher-librarian" (p. 7). Neither the procedures used nor the data collected from the pilot study are presented to establish its reliability and validity. Therefore, readers are left with only limited confidence in the study's instrumentation. Further, it is obvious from the reading, and admitted by the researchers, that the recording equipment in open view of the study's subjects skewed the data. That is, one of the librarians under study confessed that were it not for the cameras, she would have completely deserted one of her lessons when encountering what she perceived to be overwhelming obstacles: a classic example of the Hawthorne Effect in research. Yet, despite these issues, researchers Henderson and Tallman make a respectable case in this book for the validity of both mental models and stimulated recall. The mental models developed during the prelesson interviews seem remarkably accurate when observing the school librarians during the lessons. Additionally, while the librarians were able to adapt their lessons based on situations, they generally did so within their mental models of what constitutes good teachers and good teaching.
    As for the value of reflecting on their teaching performance, the authors report the not-so-startling denouement that while it is easy to identify and define malpractice and to commit to changing performance errors, it is often difficult to actually implement those improvements. Essentially, what is first learned is best learned and what is most used is best used. In the end, however, the authors rightfully call for further study to be conducted by themselves and others. ETS's core ICT Literacy Assessment is not currently a mandatory college entrance examination. Neither is the advanced ICT Literacy Assessment a mandatory examination for promotion to upper-level undergraduate studies. But it would be naïve not to expect some enterprising institutions of higher education to at least consider making them so in the very near future. Consequently, librarians of all stripes (public, academic, school, or others) would do well to read and study Stimulated Recall and Mental Models if they are truly committed to leading the charge on advancing information literacy in the Information Age. In this book are some valuable how-tos for instructing patrons on searching electronic databases. And some of those same principles could be applicable to other areas of information literacy instruction."
  13. Human perspectives in the Internet society : culture, psychology and gender; International Conference on Human Perspectives in the Internet Society <1, 2004, Cádiz> (2004) 0.03
    0.02684306 = product of:
      0.05368612 = sum of:
        0.0040776334 = product of:
          0.016310534 = sum of:
            0.016310534 = weight(_text_:based in 91) [ClassicSimilarity], result of:
              0.016310534 = score(doc=91,freq=6.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.11531715 = fieldWeight in 91, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.015625 = fieldNorm(doc=91)
          0.25 = coord(1/4)
        0.049608488 = sum of:
          0.031619113 = weight(_text_:assessment in 91) [ClassicSimilarity], result of:
            0.031619113 = score(doc=91,freq=2.0), product of:
              0.25917634 = queryWeight, product of:
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.04694356 = queryNorm
              0.12199845 = fieldWeight in 91, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.015625 = fieldNorm(doc=91)
          0.017989375 = weight(_text_:22 in 91) [ClassicSimilarity], result of:
            0.017989375 = score(doc=91,freq=4.0), product of:
              0.16438834 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04694356 = queryNorm
              0.109432176 = fieldWeight in 91, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.015625 = fieldNorm(doc=91)
      0.5 = coord(2/4)
    
    Classification
    303.48/33 22 (LoC)
    DDC
    303.48/33 22 (LoC)
    Footnote
    Rez. in: JASIST 58(2007) no.1, S.150-151 (L. Westbrook): "The purpose of this volume is to bring together various analyses by international scholars of the social and cultural impact of information technology on individuals and societies (preface, n.p.). It grew from the First International Conference on Human Perspectives in the Internet Society held in Cadiz, Spain, in 2004. The editors and contributors have addressed an impressive array of significant issues with rigorous research and insightful analysis although the resulting volume does suffer from the usual unevenness in depth and content that affects books based on conference proceedings. Although the $256 price is prohibitive for many individual scholars, the effort to obtain a library edition for perusal regarding particular areas of interest is likely to prove worthwhile. Unlike many international conferences that are able to attract scholars from only a handful of nations, this genuinely diverse conference included research conducted in Australia, Beijing, Canada, Croatia, the Czech Republic, England, Fiji, Germany, Greece, Iran, Ireland, Israel, Italy, Japan, Jordan, Malaysia, Norway, Russia, Scotland, South Africa, Sweden, Taiwan, and the United States. The expense of a conference format and governmental travel restrictions may have precluded greater inclusion of the work being done to develop information technology for use in nonindustrialized nations in support of economic, social justice, and political movements. Although the cultural variants among these nations preclude direct cross-cultural comparisons, many papers carefully provide sufficient background information to make basic conceptual transfers possible. A great strength of the work is the unusual combination of academic disciplines that contributes substantially to the depth of many individual papers, particularly when they are read within the larger context of the entire volume. Although complete professional affiliations are not universally available, the authors who did name their affiliation come from widely divergent disciplines including accounting, business administration, architecture, business computing, communication, computing, economics, educational technology, environmental management, experimental psychology, gender research in computer science, geography, human work sciences, humanistic informatics, industrial engineering, information management, informatics in transport and telecommunications, information science, information technology, management, mathematics, organizational behavior, pedagogy, psychology, telemedicine, and women's education. This is all to the good, but the lack of representation from departments of women's studies, gender studies, and library studies certainly limits the breadth and depth of the perspectives provided.
    The editorial and peer review processes appear to be slightly spotty in application. All of the 55 papers are in English but a few of them are in such need of basic editing that they are almost incomprehensible in sections. Consider, for example, the following: "So, the meaning of region where we are studying on, should be discovered and then affect on the final plan" (p. 346). The collection shows a strong array of methodological approaches including quantitative, qualitative, and mixed methods studies; however, a few of the research efforts exhibit fundamental design flaws. Consider, for example, the study that "set[s] out to show that nurses as care-givers find it difficult to transfer any previously acquired technological skills into their work based on technology needs" (p. 187). After studying 39 female and 6 male nurses, this study finds, not surprisingly, exactly what it "set out" to find. Rather than noting the limitations of sample size and data gathering techniques, the paper firmly concludes that nurses can be technologists "only in areas of technology that support their primary role as carers" (p. 188). Finally, some of the papers do not report on original research but are competent, if brief, summaries of theories or concepts that are covered in equal depth elsewhere. For example, a three-page summary of "the major personality and learning theories" (p. 3) is useful but lacks the intellectual depth or insight needed to contribute substantially to the field. These problems with composition, methodological rigor, and theoretical depth are not uncommon in papers designed for a broadly defined conference theme. The authors may have been writing for an in-person audience and anticipating thoughtful post-presentation discussions; they probably had no idea of the heavy price tag put on their work. The editors, however, might have kept that $256 in mind and exercised a heavier editorial hand. Perhaps the publisher could have paid for a careful subject indexing of the work as a substantive addition to the author index provided. The complexity of the subject domains included in the volume certainly merits careful indexing.
    The volume is organized into 13 sections, each of which contains between two and eight conference papers. As with most conferences, the papers do not cover the issues in each section with equal weight or depth but the editors have grouped papers into reasonable patterns. Section 1 covers "understanding online behavior" with eight papers on problems such as e-learning attitudes, the neuropsychology of HCI, Japanese blogger motivation, and the dividing line between computer addiction and high engagement. Sections 2 (personality and computer attitudes), 3 (cyber interactions), and 4 (new interaction methods) each contain only two papers on topics such as helmet-mounted displays, online energy audits, and the use of ICT in family life. Sections 6, 7, and 8 focus on gender issues with papers on career development, the computer literacy of Malaysian women, mentoring, gaming, and faculty job satisfaction. Sections 9 and 10 move to a broader examination of cyber society and its diversity concerns with papers on cultural identity, virtual architecture, economic growth's impact on culture, and Iranian development impediments. Section 11's two articles on advertising might well have been merged with those of section 13's ebusiness. Section 12 addressed education with papers on topics such as computer-assisted homework, assessment, and Web-based learning. It would have been useful to introduce each section with a brief definition of the theme, summaries of the major contributions of the authors, and analyses of the gaps that might be addressed in future conferences. Despite the aforementioned concerns, this volume does provide a uniquely rich array of technological analyses embedded in social context. An examination of recent works in related areas finds nothing that is this complex culturally or that has such diversity of disciplines. Cultural Production in a Digital Age (Klinenberg, 2005), Perspectives and Policies on ICT in Society (Berleur & Avgerou, 2005), and Social, Ethical, and Policy Implications of Information Technology (Brennan & Johnson, 2004) address various aspects of the society/Internet intersection but this volume is unique in its coverage of psychology, gender, and culture issues in cyberspace. The lip service often given to global concerns and the value of interdisciplinary analysis of intransigent social problems seldom develop into a genuine willingness to listen to unfamiliar research paradigms. Academic silos and cultural islands need conferences like this one-willing to take on the risk of examining the large questions in an intellectually open space. Editorial and methodological concerns notwithstanding, this volume merits review and, where appropriate, careful consideration across disciplines."
  14. SIGIR'04 : Proceedings of the 27th Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval (2004) 0.03
    0.026661905 = product of:
      0.05332381 = sum of:
        0.008155267 = product of:
          0.032621067 = sum of:
            0.032621067 = weight(_text_:based in 4144) [ClassicSimilarity], result of:
              0.032621067 = score(doc=4144,freq=6.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.2306343 = fieldWeight in 4144, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4144)
          0.25 = coord(1/4)
        0.04516854 = weight(_text_:term in 4144) [ClassicSimilarity], result of:
          0.04516854 = score(doc=4144,freq=2.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.20621133 = fieldWeight in 4144, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.03125 = fieldNorm(doc=4144)
      0.5 = coord(2/4)
    
    Content
    Enthält u.a. die Beiträge: Liu, S., F. Liu u. C. Yu u.a.: An effective approach to document retrieval via utilizing WordNet and recognizing phrases; Lau, R.Y.K., P.D. Bruza u. D. Song: Belief revision for adaptive information retrieval; Kokiopoulou, E., Y. Saad: Polynomial filtering in Latent semantic indexing for information retrieval; He, X., D. Cai u. H. Liu u.a.: Locality preserving indexing for document representation; Tang, C., S. Dwarkadas u. Z. Xu u.a.: On scaling Latent semantic indexing for large peer-to-peer systems; Yu, W., Y. Gong: Document clustering by concept factorization; Kazai, G., M. Lalmas: The overlap problem in content-oriented XML retrieval evaluation; Kamps, J., M. de Rijke u. B. Sigurbjörnsson: Length normalization in XML retrieval; Liu, A., Q. Zou u. W.W. Chu: Configurable indexing and ranking for XML information retrieval; Zhang, L., Y. Pan u. T. Zhang: Focused named entity recognition using machine learning; Xu, J., R. Weischedel u. A. Licuanan: Evaluation of an extraction-based approach to answering definitional questions; Chieu, H.L., Y.K. Lee: Query based event extraction along a timeline; Yu, K., V. Tresp u. S. Yu: A nonparametric hierarchical Bayesian framework for information filtering; Liu, X., W.B. Croft: Cluster-based retrieval using language models; Silvestri, F., A. Orlando u. R. Perego: Assigning identifiers to documents to enhance the clustering property of fulltext indexes; Amitay, E., D. Carmel u. R. Lempel u.a.: Scaling IR-system evaluation using Term Relevance Sets; Buckley, C., E.M. Voorhees: Retrieval evaluation with incomplete information; Cheng, P.J., J.W. Teng u. R.C. Chen u.a.: Translating unknown queries with Web corpora for cross-language information retrieval; Fan, J., Y. Gao u. H. Luo u.a.: Automatic image annotation by using concept-sensitive salient objects for image content representation; Amitay, E., N. Har'El u. R. Sivian u.a.: Web-a-Where: geotagging web content; Shen, D., Z. Chen u. Q. Yang u.a.: Web page classification through summarization; McLaughlin, M.R., J.L. Herlocker: A collaborative filtering algorithm and evaluation metric that accurately model the user experience; Fan, W., M. Luo u. L. Wang u.a.: Tuning before feedback: combining ranking discovery and blind feedback for robust retrieval.
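    Several of the papers listed above revolve around latent semantic indexing; as a loose illustration of that technique (not of any particular paper), the following Python sketch factors a toy term-document matrix with a truncated SVD and compares documents in the reduced space. The matrix values and the choice of k are invented for the example.

    import numpy as np

    # rows = terms, columns = documents (toy raw counts)
    A = np.array([
        [2, 0, 1, 0],   # "retrieval"
        [1, 0, 2, 0],   # "indexing"
        [0, 3, 0, 1],   # "clustering"
        [0, 1, 0, 2],   # "language"
    ], dtype=float)

    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = 2                                        # keep the two strongest latent dimensions
    doc_vectors = (np.diag(s[:k]) @ Vt[:k]).T    # documents in the latent space

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    print(cosine(doc_vectors[0], doc_vectors[2]))  # ~1.0: documents 1 and 3 share a latent topic
    print(cosine(doc_vectors[0], doc_vectors[1]))  # ~0.0: documents 1 and 2 do not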
  15. National Seminar on Classification in the Digital Environment : Papers contributed to the National Seminar on Classification in the Digital Environment, Bangalore, 9-11 August 2001 (2001) 0.03
    0.026033338 = product of:
      0.03471112 = sum of:
        0.0057666446 = product of:
          0.023066578 = sum of:
            0.023066578 = weight(_text_:based in 2047) [ClassicSimilarity], result of:
              0.023066578 = score(doc=2047,freq=12.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.16308308 = fieldWeight in 2047, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.015625 = fieldNorm(doc=2047)
          0.25 = coord(1/4)
        0.02258427 = weight(_text_:term in 2047) [ClassicSimilarity], result of:
          0.02258427 = score(doc=2047,freq=2.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.103105664 = fieldWeight in 2047, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.015625 = fieldNorm(doc=2047)
        0.006360204 = product of:
          0.012720408 = sum of:
            0.012720408 = weight(_text_:22 in 2047) [ClassicSimilarity], result of:
              0.012720408 = score(doc=2047,freq=2.0), product of:
                0.16438834 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04694356 = queryNorm
                0.07738023 = fieldWeight in 2047, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=2047)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Date
    2. 1.2004 10:35:22
    Footnote
    Rez. in: Knowledge organization 30(2003) no.1, S.40-42 (J.-E. Mai): "Introduction: This is a collection of papers presented at the National Seminar on Classification in the Digital Environment held in Bangalore, India, on August 9-11, 2001. The collection contains 18 papers dealing with various issues related to knowledge organization and classification theory. The issue of transferring the knowledge, traditions, and theories of bibliographic classification to the digital environment is an important one, and I was excited to learn that proceedings from this seminar were available. Many of us experience frustration on a daily basis due to poorly constructed Web search mechanisms and Web directories. As a community devoted to making information easily accessible, we have something to offer the Web community, and a seminar on the topic was indeed much needed. Below are brief summaries of the 18 papers presented at the seminar. The order of the summaries follows the order of the papers in the proceedings. The titles of the papers are given in parentheses after the author's name. AHUJA and WESLEY (From "Subject" to "Need": Shift in Approach to Classifying Information on the Internet/Web) argue that traditional bibliographic classification systems fail in the digital environment. One problem is that bibliographic classification systems have been developed to organize library books on shelves and as such are unidimensional and tied to the paper-based environment. Another problem is that they are "subject" oriented in the sense that they assume a relatively stable universe of knowledge containing basic and fixed compartments of knowledge that can be identified and represented. Ahuja and Wesley suggest that classification in the digital environment should be need-oriented instead of subject-oriented ("One important link that binds knowledge and human being is his societal need. ... Hence, it will be ideal to organise knowledge based upon need instead of subject." (p. 10)).
    AHUJA and SATIJA (Relevance of Ranganathan's Classification Theory in the Age of Digital Libraries) note that traditional bibliographic classification systems have been applied in the digital environment with only limited success. They find that the "inherent flexibility of electronic manipulation of documents or their surrogates should allow a more organic approach to allocation of new subjects and appropriate linkages between subject hierarchies." (p. 18). Ahuja and Satija also suggest that it is necessary to shift from a "subject" focus to a "need" focus when applying classification theory in the digital environment. They find Ranganathan's framework applicable in the digital environment. Although Ranganathan's focus is "subject oriented and hence emphasise the hierarchical and linear relationships" (p. 26), his framework "can be successfully adopted with certain modifications ... in the digital environment." (p. 26). SHAH and KUMAR (Model for System Unification of Geographical Schedules (Space Isolates)) report on a plan to develop a single schedule for geographical subdivision that could be used across all classification systems. The authors argue that this is needed in order to facilitate interoperability in the digital environment. SAN SEGUNDO MANUEL (The Representation of Knowledge as a Symbolization of Productive Electronic Information) distills different approaches and definitions of the term "representation" as it relates to representation of knowledge in the library and information science literature and field. SHARADA (Linguistic and Document Classification: Paradigmatic Merger Possibilities) suggests the development of a universal indexing language. The foundation for the universal indexing language is Chomsky's Minimalist Program and Ranganathan's analytico-synthetic classification theory; according to the author, based on these approaches, it "should not be a problem" (p. 62) to develop a universal indexing language.
    SELVI (Knowledge Classification of Digital Information Materials with Special Reference to Clustering Technique) finds that it is essential to classify digital material since the amount of material that is becoming available is growing. Selvi suggests using automated classification to "group together those digital information materials or documents that are 'most similar'" (p. 65). This can be attained by using cluster analysis methods. PRADHAN and THULASI (A Study of the Use of Classification and Indexing Systems by Web Resource Directories) compare and contrast the classificatory structures of Google, Yahoo, and Looksmart's directories and compare the directories to Dewey Decimal Classification, Library of Congress Classification and Colon Classification's classificatory structures. They find differences between the directories' and the bibliographic classification systems' classificatory structures and principles. These differences stem from the fact that bibliographic classification systems are used to "classify academic resources for the research community" (p. 83) and directories "aim to categorize a wider breadth of information groups, entertainment, recreation, govt. information, commercial information" (p. 83). NEELAMEGHAN (Hierarchy, Hierarchical Relation and Hierarchical Arrangement) reviews the concept of hierarchy and the formation of hierarchical structures across a variety of domains. NEELAMEGHAN and PRASAD (Digitized Schemes for Subject Classification and Thesauri: Complementary Roles) demonstrate how thesaural relationships (NT, BT, and RT) can be applied to a classification scheme, the Colon Classification in this case. NEELAMEGHAN and ASUNDI (Metadata Framework for Describing Embodied Knowledge and Subject Content) propose to use the Generalized Facet Structure framework, which is based on Ranganathan's General Theory of Knowledge Classification, as a framework for describing the content of documents in a metadata element set for the representation of web documents. CHUDAMANI (Classified Catalogue as a Tool for Subject Based Information Retrieval in both Traditional and Electronic Library Environment) explains why the classified catalogue is superior to the alphabetic catalogue and argues that the same is true in the digital environment.
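    As a minimal sketch of the kind of cluster analysis Selvi points to, the following Python fragment groups four invented document titles by the similarity of their term-weight vectors; the texts, the use of scikit-learn and the choice of two clusters are assumptions made purely for illustration.

    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    docs = [
        "faceted classification of digital library resources",
        "colon classification and facet analysis",
        "web search engine query logs",
        "search engines and web directories",
    ]

    X = TfidfVectorizer().fit_transform(docs)                              # term-weight vectors
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    for doc, label in zip(docs, labels):
        print(label, doc)   # the two classification texts and the two web-search texts should separate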
    PARAMESWARAN (Classification and Indexing: Impact of Classification Theory on PRECIS) reviews the PRECIS system and finds that "it could not escape from the impact of the theory of classification" (p. 131). The author further argues that the purpose of classification and subject indexing is the same and that both approaches depend on syntax. This leads to the conclusion that "there is an absolute syntax as the Indian theory of classification points out" (p. 131). SATYAPAL and SANJIVINI SATYAPAL (Classifying Documents According to Postulational Approach: 1. SATSAN - A Computer Based Learning Package) and SATYAPAL and SANJIVINI SATYAPAL (Classifying Documents According to Postulational Approach: 2. Semi-Automatic Synthesis of CC Numbers) present an application to automate classification using a facet classification system, in this case, the Colon Classification system. GAIKAIWARI (An Interactive Application for Faceted Classification Systems) presents an application, called SRR, for managing and using a faceted classification scheme in a digital environment. IYER (Use of Instructional Technology to Support Traditional Classroom Learning: A Case Study) describes a course on "Information and Knowledge Organization" that she teaches at the University at Albany (SUNY). The course is a conceptual course that introduces the student to various aspects of knowledge organization. GOPINATH (Universal Classification: How can it be used?) lists fifteen uses of universal classifications and discusses the entities of a number of disciplines. GOPINATH (Knowledge Classification: The Theory of Classification) briefly reviews the foundations for research in automatic classification, summarizes the history of classification, and places Ranganathan's thought in the history of classification.
  16. Emerging frameworks and methods : Proceedings of the Fourth International Conference on the Conceptions of Library and Information Science (CoLIS4), Seattle, WA, July 21 - 25, 2002 (2002) 0.03
    0.025981355 = product of:
      0.05196271 = sum of:
        0.0023542228 = product of:
          0.009416891 = sum of:
            0.009416891 = weight(_text_:based in 55) [ClassicSimilarity], result of:
              0.009416891 = score(doc=55,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.06657839 = fieldWeight in 55, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.015625 = fieldNorm(doc=55)
          0.25 = coord(1/4)
        0.049608488 = sum of:
          0.031619113 = weight(_text_:assessment in 55) [ClassicSimilarity], result of:
            0.031619113 = score(doc=55,freq=2.0), product of:
              0.25917634 = queryWeight, product of:
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.04694356 = queryNorm
              0.12199845 = fieldWeight in 55, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.015625 = fieldNorm(doc=55)
          0.017989375 = weight(_text_:22 in 55) [ClassicSimilarity], result of:
            0.017989375 = score(doc=55,freq=4.0), product of:
              0.16438834 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04694356 = queryNorm
              0.109432176 = fieldWeight in 55, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.015625 = fieldNorm(doc=55)
      0.5 = coord(2/4)
    
    Content
    To encourage a spirit of deeper reflection, the organizing committee invited 20-minute paper presentations, each followed by 10 minutes of discussion. (There were no separate, concurrent tracks.) This approach encouraged direct follow-up questions and discussion which carried forward from session to session, providing a satisfying sense of continuity to the overall conference theme of exploring the interaction between conceptual and empirical approaches to LIS. The expressed goals of CoLIS4 were to: - explore the existing and emerging conceptual frameworks and methods of library and information science as a field, - encourage discourse about the character and definitions of key concepts in LIS, and - examine the position of LIS among parallel contemporary domains and professions likewise concerned with information and information technology, such as computer science, management information systems, and new media and communication studies. The keynote address by Tom Wilson (University of Sheffield) provided an historical perspective on the philosophical and research frameworks of LIS in the post-World War II period. He traced the changing emphases on the objects of LIS study: definitions of information and documents; information retrieval, relevance, systems, and architectures; information users and behaviors. He raised issues of the relevance of LIS research to real-world information services and practice, and the gradual shift in research approaches from quantitative to qualitative. He concluded by stressing the ongoing need of LIS for cumulative, theory-based, and content-rich bodies of research, meaningful to practitioners and useful to contemporary LIS education.
    LIS research and evaluation methodologies fell under the same scrutiny and systematization, particularly in the presentations employing multiple and mixed methodologies. Jaana Kekäläinen's and Kalervo Järvelin's proposal for a framework of laboratory information retrieval evaluation measures, applied along with analyses of information seeking and work task contexts, employed just such a mix. Marcia Bates pulled together Bradford's Law of Scattering of decreasingly relevant information sources and three information searching techniques (browsing, directed searching, and following links) to pose the question: what are the optimum searching techniques for the different regions of information concentrations? Jesper Schneider and Pia Borlund applied bibliometric methods (document co-citation, bibliographic coupling, and co-word analysis) to augment manual thesaurus construction and maintenance. Fredrik Åström examined document keyword co-occurrence measurement compared to and then combined with bibliometric co-citation analysis to map LIS concept spaces. Ian Ruthven, Mounia Lalmas, and Keith van Rijsbergen compared system-supplied query expansion terms with interactive user query expansion, incorporating both partial relevance assessment feedback (how relevant is a document) and ostensive relevance feedback (measuring when a document is assessed as relevant over time). Scheduled in the midst of the presentations were two stimulating panel and audience discussions. The first panel, chaired by Glynn Harmon, explored the current re-positioning of many library and information science schools by renaming themselves to eliminate the "library" word and emphasize the "information" word (as in "School of Information," "Information School," and schools of "Information Studies"). Panelists Marcia Bates, Harry Bruce, Toni Carbo, Keith Belton, and Andrew Dillon presented the reasons for name changes in their own information programs, which include curricular change and expansion beyond a "stereotypical" library focus, broader contemporary theoretical approaches to information, new clientele and markets for information services and professionals, new media formats and delivery models, and new interdisciplinary student and faculty recruitment from crossover fields. Sometimes criticized for over-broadness and ambiguity-and feared by library practitioners who were trained in more traditional library schools-renaming schools both results from and occasions a renewed examination of the definitions and boundaries of the field as a whole and the educational and research missions of individual schools.
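    As a loose illustration of the co-citation counting behind the bibliometric methods discussed above, the following Python sketch tallies how often pairs of cited works appear together in the same reference list; the citing papers and cited works are invented for the example.

    from collections import Counter
    from itertools import combinations

    citing_papers = {
        "paper1": ["Bradford1934", "Salton1983", "Ranganathan1967"],
        "paper2": ["Salton1983", "Ranganathan1967"],
        "paper3": ["Bradford1934", "Salton1983"],
    }

    cocitations = Counter()
    for refs in citing_papers.values():
        for a, b in combinations(sorted(set(refs)), 2):
            cocitations[(a, b)] += 1   # the pair was cited together in one reference list

    for pair, count in cocitations.most_common():
        print(pair, count)
    # ('Bradford1934', 'Salton1983') and ('Ranganathan1967', 'Salton1983') each occur twice,
    # suggesting closer intellectual ties than the remaining pair.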
    Date
    22. 2.2007 18:56:23
    22. 2.2007 19:12:10
  17. Knowledge organization and the global information society : Proceedings of the 8th International ISKO Conference 13-16 July 2004, London, UK (2004) 0.03
    0.025449887 = product of:
      0.03393318 = sum of:
        0.0023542228 = product of:
          0.009416891 = sum of:
            0.009416891 = weight(_text_:based in 3356) [ClassicSimilarity], result of:
              0.009416891 = score(doc=3356,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.06657839 = fieldWeight in 3356, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.015625 = fieldNorm(doc=3356)
          0.25 = coord(1/4)
        0.02258427 = weight(_text_:term in 3356) [ClassicSimilarity], result of:
          0.02258427 = score(doc=3356,freq=2.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.103105664 = fieldWeight in 3356, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.015625 = fieldNorm(doc=3356)
        0.008994687 = product of:
          0.017989375 = sum of:
            0.017989375 = weight(_text_:22 in 3356) [ClassicSimilarity], result of:
              0.017989375 = score(doc=3356,freq=4.0), product of:
                0.16438834 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04694356 = queryNorm
                0.109432176 = fieldWeight in 3356, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=3356)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Content
    Inhalt: Session 1 A: Theoretical Foundations of Knowledge Organization 1 Hanne Albrechtsen, Hans H K Andersen, Bryan Cleal and Annelise Mark Pejtersen: Categorical complexity in knowledge integration: empirical evaluation of a cross-cultural film research collaboratory; Clare Beghtol: Naive classification systems and the global information society; Terence R Smith and Marcia L Zeng: Concept maps supported by knowledge organization structures; B: Linguistic and Cultural Approaches to Knowledge Organization 1 Rebecca Green and Lydia Fraser: Patterns in verbal polysemy; Maria J López-Huertas, Mario Barite and Isabel de Torres: Terminological representation of specialized areas in conceptual structures: the case of gender studies; Fidelia Ibekwe-SanJuan and Eric SanJuan: Mining for knowledge chunks in a terminology network Session 2 A: Applications of Artificial Intelligence and Knowledge Representation 1 Jin-Cheon Na, Haiyang Sui, Christopher Khoo, Syin Chan and Yunyun Zhou: Effectiveness of simple linguistic processing in automatic sentiment classification of product reviews; Daniel J O'Keefe: Cultural literacy in a global information society-specific language: an exploratory ontological analysis utilizing comparative taxonomy; Lynne C Howarth: Modelling a natural language gateway to metadata-enabled resources; B: Theoretical Foundations of Knowledge Organization 2: Facets & Their Significance Ceri Binding and Douglas Tudhope: Integrating faceted structure into the search process; Vanda Broughton and Heather Lane: The Bliss Bibliographic Classification in action: moving from a special to a universal faceted classification via a digital platform; Kathryn La Barre: Adventures in faceted classification: a brave new world or a world of confusion? Session 3 A: Theoretical Foundations of Knowledge Organization 3 Elin K Jacob: The structure of context: implications of structure for the creation of context in information systems; Uta Priss: A semiotic-conceptual framework for knowledge representation; Giovanni M Sacco: Accessing multimedia infobases through dynamic taxonomies; Joseph T Tennis: URIs and intertextuality: incumbent philosophical commitments in the development of the semantic web; B: Social & Sociological Concepts in Knowledge Organization Grant Campbell: A queer eye for the faceted guy: how a universal classification principle can be applied to a distinct subculture; Jonathan Furner and Anthony W Dunbar: The treatment of topics relating to people of mixed race in bibliographic classification schemes: a critical race-theoretic approach; H Peter Ohly: The organization of Internet links in a social science clearing house; Chern Li Liew: Cross-cultural design and usability of a digital library supporting access to Maori cultural heritage resources: an examination of knowledge organization issues; Session 4 A: Knowledge Organization of Universal and Special Systems 1: Dewey Decimal Classification Sudatta Chowdhury and G G Chowdhury: Using DDC to create a visual knowledge map as an aid to online information retrieval; Joan S Mitchell: DDC 22: Dewey in the world, the world in Dewey; Diane Vizine-Goetz and Julianne Beall: Using literary warrant to define a version of the DDC for automated classification services; B: Applications in Knowledge Representation 2 Gerhard J A Riesthuis and Maja Zumer: FRBR and FRANAR: subject access; Victoria Frâncu: An interpretation of the FRBR model; Moshe Y Sachs and Richard P Smiraglia: From encyclopedism to domain-based ontology for knowledge management: the
evolution of the Sachs Classification (SC); Session 5 A: Knowledge Organization of Universal and Special Systems 2 Ágnes Hajdu Barát: Knowledge organization of the Universal Decimal Classification: new solutions, user friendly methods from Hungary; Ia C McIlwaine: A question of place; Aida Slavic and Maria Inês Cordeiro: Core requirements for automation of analytico-synthetic classifications;
    B: Applications in Knowledge Representation 3 Barbara H Kwasnik and You-Lee Chun: Translation of classifications: issues and solutions as exemplified in the Korean Decimal Classification; Hur-Li Lee and Jennifer Clyde: Users' perspectives of the "Collection" and the online catalogue; Jens-Erik Mai: The role of documents, domains and decisions in indexing; Session 6 A: Knowledge Organization of Universal and Special Systems 3 Stella G Dextre Clarke, Alan Gilchrist and Leonard Will: Revision and extension of thesaurus standards; Michèle Hudon: Conceptual compatibility in controlled language tools used to index and access the content of moving image collections; Antonio García Jiménez and Félix del Valle Gastaminza: From thesauri to ontologies: a case study in a digital visual context; Ali Asghar Shiri and Crawford Revie: End-user interaction with thesauri: an evaluation of cognitive overlap in search term selection; B: Special Applications Carol A Bean: Representation of medical knowledge for automated semantic interpretation of clinical reports; Chew-Hung Lee, Christopher Khoo and Jin-Cheon Na: Automatic identification of treatment relations for medical ontology learning: an exploratory study; A Neelameghan and M C Vasudevan: Integrating image files, case records of patients and Web resources: case study of a knowledge base on tumours of the central nervous system; Nancy J Williamson: Complementary and alternative medicine: its place in the reorganized medical sciences in the Universal Decimal Classification; Session 7 A: Applications in Knowledge Representation 4 Claudio Gnoli: Naturalism vs pragmatism in knowledge organization; Wouter Schallier: On the razor's edge: between local and overall needs in knowledge organization; Danielle H Miller: User perception and the online catalogue: public library OPAC users "think aloud"; B: Knowledge Organization in Corporate Information Systems Anita S Coleman: Knowledge structures and the vocabulary of engineering novices; Evelyne Mounier and Céline Paganelli: The representation of knowledge contained in technical documents: the example of FAQs (frequently asked questions); Martin S van der Walt: A classification scheme for the organization of electronic documents in small, medium and micro enterprises (SMMEs); Session 8 A: Knowledge Organization of Non-print Information: Sound, Image, Multimedia Laura M Bartolo, Cathy S Lowe and Sharon C Glotzer: Information management of microstructures: non-print, multidisciplinary information in a materials science digital library; Pauline Rafferty and Rob Hidderley: A survey of image retrieval tools; Richard P Smiraglia: Knowledge sharing and content genealogy: extending the "works" model as a metaphor for non-documentary artefacts with case studies of Etruscan artefacts; B: Linguistic and Cultural Approaches to Knowledge Organization 2 Graciela Rosemblat, Tony Tse and Darren Gemoets: Adapting a monolingual consumer health system for Spanish cross-language information retrieval; Matjaz Zalokar: Preparation of a general controlled vocabulary in Slovene and English for the COBISS.SI library information system, Slovenia; Marianne Dabbadie, Widad Mustafa El Hadi and Francois Fraysse: Coaching applications: a new concept for usage testing on information systems.
Testing usage on a corporate information system with K-Now; Session 9 Theories of Knowledge and Knowledge Organization Keiichi Kawamura: Ranganathan and after: Coates' practice and theory; Shiyan Ou, Christopher Khoo, Dion H Goh and Hui-Ying Heng: Automatic discourse parsing of sociology dissertation abstracts as sentence categorization; Iolo Jones, Daniel Cunliffe and Douglas Tudhope: Natural language processing and knowledge organization systems as an aid to retrieval
    Footnote
    The conference's overall theme arose from the "UN World Summit on the Information Society" held before and after the ISKO conference. The "global knowledge society" in the book's title is, however, rather misleading, since none of the papers printed here deals with it as a central topic. Of the two papers that use the term in their titles, one is concerned with constructing a taxonomy for "cultural literacy" (O'Keefe), the other with so-called "naive classification systems" (Beghtol), i.e. systems which, unlike "professional" ones, were developed by people without a specific interest in classificatory questions. Papers with a cross-cultural character deal with questions such as cross-cultural collaboration, for example in the EU film archive project Collate (Albrechtsen et al.) or in a project on Maori culture (Liew); multilingualism and translation, e.g. of the Korean Decimal Classification (Kwasnik & Chun), of a Slovene subject heading vocabulary based on the Sears List of Subject Headings (Zalokar), and of a Spanish-English subject heading list for health topics (Rosemblat et al.); and universal classification systems such as the Dewey Decimal Classification (Joan Mitchell on DDC 22, plus two further papers) and the Universal Decimal Classification (Ia McIlwaine on questions of place, Nancy Williamson on complementary and alternative medicine in the UDC). Among the 55 papers, the following thematic clusters are, from the reviewer's point of view, particularly interesting: OPAC-oriented papers, for instance on the requirements for automating analytico-synthetic classification systems (Slavic & Cordeiro), along with papers on user research and user behaviour (Lee & Clyde; Miller); indexing and retrieval of visual and multimedia resources, especially with a focus on thesauri (Hudon; García Jiménez & del Valle Gastaminza; Rafferty & Hidderley); thesaurus standards (Dextre Clarke et al.) and end-user interaction with thesauri (Shiri & Revie); automatic classification (Vizine-Goetz & Beall with reference to the DDC; Na et al. on methodological approaches to classifying product reviews by positive or negative sentiment); and papers on systems less well known in this part of the world, such as faceted classification including the Bliss Classification and the implementation of Ranganathan's ideas by E. J. Coates (four papers), the Sachs Classification (Sachs & Smiraglia), and M. S. van der Walt's scheme for classifying electronic documents in small, medium and micro enterprises. The remaining papers are for the most part also interestingly written and attest to the professional quality of the ISKO conferences. The volume can therefore be warmly recommended to library and information science teaching institutions and to libraries that collect literature on classification, and the next (ninth) international ISKO conference, to be held in Vienna in 2006, can be anticipated with interest.
  18. Limb, P.: Digital dilemmas and solutions (2004) 0.02
    0.024938494 = product of:
      0.049876988 = sum of:
        0.0047084456 = product of:
          0.018833783 = sum of:
            0.018833783 = weight(_text_:based in 4500) [ClassicSimilarity], result of:
              0.018833783 = score(doc=4500,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.13315678 = fieldWeight in 4500, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4500)
          0.25 = coord(1/4)
        0.04516854 = weight(_text_:term in 4500) [ClassicSimilarity], result of:
          0.04516854 = score(doc=4500,freq=2.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.20621133 = fieldWeight in 4500, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.03125 = fieldNorm(doc=4500)
      0.5 = coord(2/4)
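    The nested figures in this and the following entries are Lucene "explain" traces for the ClassicSimilarity (TF-IDF) scorer named in the weight lines: each query term contributes queryWeight × fieldWeight, and the entry total multiplies the summed contributions by the coord factor (matching query terms / total query terms). As a reading aid only, here is a minimal sketch of that calculation in Python, assuming the standard ClassicSimilarity formulas; the function and variable names are illustrative and not part of the retrieval system itself.

        import math

        def idf(doc_freq: int, max_docs: int) -> float:
            """Inverse document frequency: 1 + ln(maxDocs / (docFreq + 1))."""
            return 1.0 + math.log(max_docs / (doc_freq + 1))

        def term_weight(freq: float, doc_freq: int, max_docs: int,
                        query_norm: float, field_norm: float) -> float:
            """Contribution of a single query term to one document's score."""
            tf = math.sqrt(freq)                        # tf(freq) = sqrt(freq)
            term_idf = idf(doc_freq, max_docs)
            query_weight = term_idf * query_norm        # queryWeight = idf * queryNorm
            field_weight = tf * term_idf * field_norm   # fieldWeight = tf * idf * fieldNorm
            return query_weight * field_weight

        # weight(_text_:term in 4500) from entry 18: freq=2, docFreq=1130,
        # maxDocs=44218, queryNorm=0.04694356, fieldNorm=0.03125
        print(round(term_weight(2.0, 1130, 44218, 0.04694356, 0.03125), 8))  # ~0.04516854

    Under these assumptions the sketch reproduces the weight shown above for the term "term" (about 0.04516854), and the entry total follows as (0.0047084456 + 0.04516854) × 0.5 ≈ 0.024938494.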
    
    Abstract
    Librarians face daunting challenges posed by recent trends in technology, publishing and education as the impact of a globalising information economy forces a rethink of both library strategic directions and everyday library operations. This book brings together the main issues and dilemmas currently facing libraries, shows clearly how to deal with them, and provides a best-practice guide to solutions based on the most up-to-date thinking. Key Features - Provides analysis of recent trends and relevant and viable solutions to problems facing all librarians - Draws on the author's international and practical experience in libraries
    Content
    Readership The book will be useful for: staff at all managerial and supervisory levels within library and information services; students and staff in library/information studies courses (undergraduate and postgraduate); educationalists; publishers; and all people interested in recent information and digital trends. Contents The impact on libraries of a globalising information economy - trends in technology, publishing and education; changes in the form and delivery of information; changes in the nature of library operations The information game - how to locate, acquire, present and manage information in the Internet age; how to manage print versus electronic formats; access versus ownership: resolving the dilemma in the short and long term Digital presentation and preservation - how best to apply digital technologies in library operations; how best to make available e-information: text, data-sets, and audio-visual; digital preservation: advantages and disadvantages; publish or perish?; a guide to digital publishing for the librarian User perspectives - attracting users to the library: physically and virtually; how best to teach users to exploit and evaluate the 'new library'; what reference and technical services users now want and how to provide them Financial constraints and solutions - escalating material budgets: digital solutions and illusions; staff and overhead costs: using digital applications for 'win-win' solutions; cooperation versus competition Professional and workplace challenges - coping with constant change; the technical and reference divide in the digital age; avoiding information overkill; balancing specialist and generalist skills among librarians Resolving ethical and legal dilemmas
  19. Qualman, E.: Socialnomics : how social media transforms the way we live and do business (2009) 0.02
    0.024938494 = product of:
      0.049876988 = sum of:
        0.0047084456 = product of:
          0.018833783 = sum of:
            0.018833783 = weight(_text_:based in 3587) [ClassicSimilarity], result of:
              0.018833783 = score(doc=3587,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.13315678 = fieldWeight in 3587, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3587)
          0.25 = coord(1/4)
        0.04516854 = weight(_text_:term in 3587) [ClassicSimilarity], result of:
          0.04516854 = score(doc=3587,freq=2.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.20621133 = fieldWeight in 3587, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.03125 = fieldNorm(doc=3587)
      0.5 = coord(2/4)
    
    Abstract
    A fascinating, research-based look at the impact of social media on businesses and consumers around the world, and what's in store for the future. Social media: you've heard the term, even if you don't use the tools. But just how big has social media become? Social media has officially surpassed pornography as the top activity on the Internet. People would rather give up their e-mail than their social network. It is so powerful that it is causing a macro shift in the way we live and conduct business. Brands can now be strengthened or destroyed by the use of social media. Online networking sites are being used as giant, free focus groups. Advertising is less effective at influencing consumers than the opinions of their peers. If you aren't using social media in your business strategy, you are already behind your competition. * Explores how the concept of "Socialnomics" is changing the way businesses produce, market, and sell, eliminating inefficient marketing and middlemen, and making products easier and cheaper for consumers to obtain * Learn how successful businesses are connecting with consumers like never before via Twitter, Facebook, YouTube, and other social media sites * A must-read for anyone wanting to learn about, and harness, the power of social media rather than be squashed by it * Author Erik Qualman is a former online marketer for several Top 100 brands and the current Global Vice President of Online Marketing for the world's largest private education firm. Socialnomics is an essential book for anyone who wants to understand the implications of social media, and how businesses can tap the power of social media to increase their sales, cut their marketing costs, and reach consumers directly.
  20. Subject retrieval in a networked environment : Proceedings of the IFLA Satellite Meeting held in Dublin, OH, 14-16 August 2001 and sponsored by the IFLA Classification and Indexing Section, the IFLA Information Technology Section and OCLC (2003) 0.02
    0.024205387 = product of:
      0.032273848 = sum of:
        0.0033293737 = product of:
          0.013317495 = sum of:
            0.013317495 = weight(_text_:based in 3964) [ClassicSimilarity], result of:
              0.013317495 = score(doc=3964,freq=4.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.09415606 = fieldWeight in 3964, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.015625 = fieldNorm(doc=3964)
          0.25 = coord(1/4)
        0.02258427 = weight(_text_:term in 3964) [ClassicSimilarity], result of:
          0.02258427 = score(doc=3964,freq=2.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.103105664 = fieldWeight in 3964, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.015625 = fieldNorm(doc=3964)
        0.006360204 = product of:
          0.012720408 = sum of:
            0.012720408 = weight(_text_:22 in 3964) [ClassicSimilarity], result of:
              0.012720408 = score(doc=3964,freq=2.0), product of:
                0.16438834 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04694356 = queryNorm
                0.07738023 = fieldWeight in 3964, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=3964)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Footnote
    Rez. in: KO 31(2004) no.2, S.117-118 (D. Campbell): "This excellent volume offers 22 papers delivered at an IFLA Satellite meeting in Dublin, Ohio in 2001. The conference gathered together information and computer scientists to discuss an important and difficult question: in what specific ways can the accumulated skills, theories and traditions of librarianship be mobilized to face the challenges of providing subject access to information in present and future networked information environments? The papers which grapple with this question are organized in a surprisingly deft and coherent way. Many conferences and proceedings have unhappy sessions that contain a hodge-podge of papers that didn't quite fit any other categories. As befits a good classificationist, editor I.C. McIlwaine has kept this problem to a minimum. The papers are organized into eight sessions, which split into two broad categories. The first five sessions deal with subject domains, and the last three deal with subject access tools. The five sessions and thirteen papers that discuss access in different domains appear in order of increasing intension. The first papers deal with access in multilingual environments, followed by papers on access across multiple vocabularies and across sectors, ending up with studies of domain-specific retrieval (primarily education). Some of the papers offer predictably strong work by scholars engaged in ongoing, long-term research. Gerard Riesthuis offers a clear analysis of the complexities of negotiating non-identical thesauri, particularly in cases where hierarchical structure varies across different languages. Hope Olson and Dennis Ward use Olson's familiar and welcome method of using provocative and unconventional theory to generate meliorative approaches to bias in general subject access schemes. Many papers, on the other hand, deal with specific ongoing projects: Renardus, The High Level Thesaurus Project, The Colorado Digitization Project and The Iter Bibliography for medieval and Renaissance material. Most of these papers display a similar structure: an explanation of the theory and purpose of the project, an account of problems encountered in the implementation, and a discussion of the results, both promising and disappointing, thus far. Of these papers, the account of the Multilanguage Access to Subjects Project in Europe (MACS) deserves special mention. In describing how the project is founded on the principle of the equality of languages, with each subject heading language maintained in its own database, and with no single language used as a pivot for the others, Elisabeth Freyre and Max Naudi offer a particularly vivid example of the way the ethics of librarianship translate into pragmatic contexts and concrete procedures. The three sessions and nine papers devoted to subject access tools split into two kinds: papers that discuss the use of theory and research to generate new tools for a networked environment, and those that discuss the transformation of traditional subject access tools in this environment. In the new tool development area, Mary Burke provides a promising example of the bidirectional approach that is so often necessary: in her case study of user-driven classification of photographs, she uses personal construct theory to clarify the practice of classification, while at the same time using practice to test the theory.
Carol Bean and Rebecca Green offer an intriguing combination of librarianship and computer science, importing the frame representation technique from artificial intelligence to standardize syntagmatic relationships and thereby enhance recall and precision.
    The papers discussing the transformation of traditional tools locate the point of transformation in different places. Some, like the papers on DDC, LCC and UDC, suggest that these schemes can be imported into the networked environment and used as a basis for improving access to networked resources, just as they improve access to physical resources. While many of these papers are intriguing, I suspect that convincing those outside the profession will be difficult. In particular, Edward O'Neill and his colleagues, while offering a fascinating suggestion for preserving the Library of Congress Subject Headings and their associated infrastructure by converting them into a faceted scheme, will have an uphill battle convincing the unconverted that LCSH has a place in the online networked environment. Two papers deserve mention for taking a different approach: both Francis Devadason and Maria Ines Cordeiro suggest that we import concepts and techniques rather than realized schemes. Devadason argues for the creation of a faceted pre-coordinate indexing scheme for Internet resources based on Deep Structure indexing, which originates in Bhattacharyya's Postulate-Based Permuted Subject Indexing and in Ranganathan's chain indexing techniques. Cordeiro takes up the vitally important role of authority control in Web environments, suggesting that the techniques of authority control be expanded to enhance user flexibility. By focusing her argument on the concepts rather than on the existing tools, and by making useful and important distinctions between library and non-library uses of authority control, Cordeiro suggests that librarianship's contribution to networked access has less to do with its tools and infrastructure, and more to do with concepts that need to be boldly reinvented. The excellence of this collection derives in part from the energy, insight and diversity of the papers. Credit also goes to the planning and forethought that went into the conference itself by OCLC, the IFLA Classification and Indexing Section, the IFLA Information Technology Section, and the Program Committee, headed by editor I.C. McIlwaine. This collection avoids many of the problems of conference proceedings, and instead offers the best of such proceedings: detail, diversity, and judicious mixtures of theory and practice. Some of the disadvantages that plague conference proceedings appear here. Busy scholars sometimes interpret the concept of "camera-ready copy" creatively, offering diagrams that could have used some streamlining, and label boxes that cut off the tops or bottoms of letters. The papers are necessarily short, and many of them raise issues that deserve more extensive treatment. The issue of subject access in networked environments is crying out for further synthesis at the conceptual and theoretical level. But no synthesis can afford to ignore the kind of energetic, imaginative and important work that the papers in these proceedings represent."

Languages

  • e 201
  • d 103
  • m 5
  • es 2
  • f 1
  • i 1
  • ro 1

Types

  • s 98
  • b 5
  • el 3
  • i 3
  • n 1
  • x 1