Search (675 results, page 1 of 34)

  • Filter: type_ss:"m"
  1. Kageura, K.: The dynamics of terminology : a descriptive theory of term formation and terminological growth (2002) 0.08
    0.08313005 = product of:
      0.11084006 = sum of:
        0.005097042 = product of:
          0.020388167 = sum of:
            0.020388167 = weight(_text_:based in 1787) [ClassicSimilarity], result of:
              0.020388167 = score(doc=1787,freq=6.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.14414644 = fieldWeight in 1787, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1787)
          0.25 = coord(1/4)
        0.09779277 = weight(_text_:term in 1787) [ClassicSimilarity], result of:
          0.09779277 = score(doc=1787,freq=24.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.44646066 = fieldWeight in 1787, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1787)
        0.007950256 = product of:
          0.015900511 = sum of:
            0.015900511 = weight(_text_:22 in 1787) [ClassicSimilarity], result of:
              0.015900511 = score(doc=1787,freq=2.0), product of:
                0.16438834 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04694356 = queryNorm
                0.09672529 = fieldWeight in 1787, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1787)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
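
    The explain tree above is standard Lucene "explain" output for the ClassicSimilarity (TF-IDF) ranking model: each matched query term contributes score = queryWeight x fieldWeight, with queryWeight = idf x queryNorm, fieldWeight = sqrt(termFreq) x idf x fieldNorm, idf = 1 + ln(maxDocs/(docFreq+1)), and coord(m/n) down-weighting clauses groups where only m of n query clauses matched. As a check, the short Python sketch below re-computes the 0.08313005 total for this record from the constants printed in the tree; the helper names are ours, and queryNorm is taken verbatim from the output rather than derived.

```python
from math import log, sqrt

# Re-computation of the ClassicSimilarity explain tree for result 1 (doc 1787).
# Constants (maxDocs, docFreq, queryNorm, fieldNorm, coord factors) are copied
# from the explain output above; only the formulas are assumed.

MAX_DOCS = 44218
QUERY_NORM = 0.04694356     # query-dependent normalizer, taken as given
FIELD_NORM = 0.01953125     # stored length norm for this field/document

def idf(doc_freq: int) -> float:
    """Lucene ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))."""
    return 1.0 + log(MAX_DOCS / (doc_freq + 1))

def clause_score(freq: float, doc_freq: int) -> float:
    """score = (idf * queryNorm) * (sqrt(tf) * idf * fieldNorm)."""
    query_weight = idf(doc_freq) * QUERY_NORM
    field_weight = sqrt(freq) * idf(doc_freq) * FIELD_NORM
    return query_weight * field_weight

based = clause_score(freq=6.0,  doc_freq=5906) * 0.25   # coord(1/4) on the subquery
term  = clause_score(freq=24.0, doc_freq=1130)
t22   = clause_score(freq=2.0,  doc_freq=3622) * 0.5    # coord(1/2) on the subquery

total = (based + term + t22) * 0.75   # coord(3/4): 3 of 4 top-level clauses matched
print(f"{total:.8f}")   # ~0.08313005, matching the displayed score up to rounding
```

    The same arithmetic reproduces every other explain tree in this result list; only the per-document frequencies, fieldNorms, and coord fractions change.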
    
    Abstract
    The discovery of rules for the systematicity and dynamics of terminology creation is essential for a sound basis of a theory of terminology. This quest provides the driving force for The Dynamics of Terminology, in which Dr. Kageura demonstrates the interaction of these two factors on a specific corpus of Japanese terminology which, beyond the necessary linguistic circumstances, also has a model character for similar studies. His detailed examination of the relationships between terms and their constituent elements, of the relationships among the constituent elements, and of the types of conceptual combination used in the construction of the terminology permits deep insights into the systematic thought processes underlying term creation. To compensate for the inherent limitations of a purely descriptive analysis of conceptual patterns, Dr. Kageura offers a quantitative analysis of the patterns of the growth of terminology.
    Content
    PART I: Theoretical Background 7; Chapter 1. Terminology: Basic Observations 9; Chapter 2. The Theoretical Framework for the Study of the Dynamics of Terminology 25; PART II: Conceptual Patterns of Term Formation 43; Chapter 3. Conceptual Patterns of Term Formation: The Basic Descriptive Framework 45; Chapter 4. Conceptual Categories for the Description of Formation Patterns of Documentation Terms 61; Chapter 5. Intra-Term Relations and Conceptual Specification Patterns 91; Chapter 6. Conceptual Patterns of the Formation of Documentation Terms 115; PART III: Quantitative Patterns of Terminological Growth 163; Chapter 7. Quantitative Analysis of the Dynamics of Terminology: A Basic Framework 165; Chapter 8. Growth Patterns of Morphemes in the Terminology of Documentation 183; Chapter 9. Quantitative Dynamics in Term Formation 201; PART IV: Conclusions 247; Chapter 10. Towards Modelling Term Formation and Terminological Growth 249; Appendices 273; Appendix A. List of Conceptual Categories 275; Appendix B. Lists of Intra-Term Relations and Conceptual Specification Patterns 279; Appendix C. List of Terms by Conceptual Categories 281; Appendix D. List of Morphemes by Conceptual Categories 295.
    Date
    22. 3.2008 18:18:53
    Footnote
    Review in: Knowledge organization 30(2003) no.2, pp.112-113 (L. Bowker): "Terminology is generally understood to be the activity that is concerned with the identification, collection and processing of terms; terms are the lexical items used to describe concepts in specialized subject fields. Terminology is not always acknowledged as a discipline in its own right; it is sometimes considered to be a subfield of related disciplines such as lexicography or translation. However, a growing number of researchers are beginning to argue that terminology should be recognized as an autonomous discipline with its own theoretical underpinnings. Kageura's book is a valuable contribution to the formulation of a theory of terminology and will help to establish this discipline as an independent field of research. The general aim of this text is to present a theory of term formation and terminological growth by identifying conceptual regularities in term creation and by laying the foundations for the analysis of terminological growth patterns. The approach used is a descriptive one, which means that it is based on observations taken from a corpus. It is also synchronic in nature and therefore does not attempt to account for the evolution of terms over a given period of time (though it does endeavour to provide a means for predicting possible formation patterns of new terms). The descriptive, corpus-based approach is becoming very popular in terminology circles; however, it does pose certain limitations. To compensate for this, Kageura complements his descriptive analysis of conceptual patterns with a quantitative analysis of the patterns of the growth of terminology. Many existing investigations treat only a limited number of terms, using these for exemplification purposes. Kageura argues strongly (p. 31) that any theory of terms or terminology must be based on the examination of the terminology of a domain (i.e., a specialized subject field) in its entirety, since it is only with respect to an individual domain that the concept of "term" can be established. To demonstrate the viability of his theoretical approach, Kageura has chosen to investigate and describe the domain of documentation, using Japanese terminological data. The data in the corpus are derived from a glossary (Wersig and Neveling 1984), and although this glossary is somewhat outdated (a fact acknowledged by the author), the data provided are nonetheless sufficient for demonstrating the viability of the approach, which can later be extended and applied to other languages and domains.
    Unlike some terminology researchers, Kageura has been careful not to overgeneralize the applicability of his work, and he points out the limitations of his study, a number of which are summarized on pages 254-257. For example, Kageura acknowledges that his contribution should properly be viewed as a theory of term formation and terminological growth in the field of documentation. Moreover, Kageura notes that this study does not distinguish the general part and the domain-dependent part of the conceptual system, nor does it fully explore the multidimensionality of the viewpoints of conceptual categorization. Kageura's honesty with regard to the complexity of terminological issues and the challenges associated with the formation of a theory of terminology is refreshing, since too often in the past the results of terminology research have been somewhat naively presented as being absolutely clear-cut and applicable in all situations."
  2. Ruge, G.: Sprache und Computer : Wortbedeutung und Termassoziation. Methoden zur automatischen semantischen Klassifikation (1995) 0.08
    0.07659837 = product of:
      0.15319674 = sum of:
        0.12775593 = weight(_text_:term in 1534) [ClassicSimilarity], result of:
          0.12775593 = score(doc=1534,freq=4.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.58325374 = fieldWeight in 1534, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.0625 = fieldNorm(doc=1534)
        0.025440816 = product of:
          0.05088163 = sum of:
            0.05088163 = weight(_text_:22 in 1534) [ClassicSimilarity], result of:
              0.05088163 = score(doc=1534,freq=2.0), product of:
                0.16438834 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04694356 = queryNorm
                0.30952093 = fieldWeight in 1534, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1534)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Content
    Contains the following chapters: (1) Motivation; (2) Language philosophical foundations; (3) Structural comparison of extensions; (4) Earlier approaches towards term association; (5) Experiments; (6) Spreading-activation networks or memory models; (7) Perspective. Appendices: Heads and modifiers of 'car'. Glossary. Index. [English title: Language and computer. Word semantics and term association. Methods towards an automatic semantic classification]
    Footnote
    Review in: Knowledge organization 22(1995) no.3/4, pp.182-184 (M.T. Rolland)
  3. Nicholas, D.: Assessing information needs : tools and techniques (1996) 0.07
    0.07179573 = product of:
      0.28718293 = sum of:
        0.28718293 = sum of:
          0.2235809 = weight(_text_:assessment in 5941) [ClassicSimilarity], result of:
            0.2235809 = score(doc=5941,freq=4.0), product of:
              0.25917634 = queryWeight, product of:
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.04694356 = queryNorm
              0.86265934 = fieldWeight in 5941, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.078125 = fieldNorm(doc=5941)
          0.063602045 = weight(_text_:22 in 5941) [ClassicSimilarity], result of:
            0.063602045 = score(doc=5941,freq=2.0), product of:
              0.16438834 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04694356 = queryNorm
              0.38690117 = fieldWeight in 5941, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=5941)
      0.25 = coord(1/4)
    
    Date
    26. 2.2008 19:22:51
    LCSH
    Needs assessment
    Subject
    Needs assessment
  4. Bruce, H.: The user's view of the Internet (2002) 0.07
    0.06993598 = sum of:
      0.003058225 = product of:
        0.0122329 = sum of:
          0.0122329 = weight(_text_:based in 4344) [ClassicSimilarity], result of:
            0.0122329 = score(doc=4344,freq=6.0), product of:
              0.14144066 = queryWeight, product of:
                3.0129938 = idf(docFreq=5906, maxDocs=44218)
                0.04694356 = queryNorm
              0.08648786 = fieldWeight in 4344, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.0129938 = idf(docFreq=5906, maxDocs=44218)
                0.01171875 = fieldNorm(doc=4344)
        0.25 = coord(1/4)
      0.023954237 = weight(_text_:term in 4344) [ClassicSimilarity], result of:
        0.023954237 = score(doc=4344,freq=4.0), product of:
          0.21904005 = queryWeight, product of:
            4.66603 = idf(docFreq=1130, maxDocs=44218)
            0.04694356 = queryNorm
          0.10936008 = fieldWeight in 4344, product of:
            2.0 = tf(freq=4.0), with freq of:
              4.0 = termFreq=4.0
            4.66603 = idf(docFreq=1130, maxDocs=44218)
            0.01171875 = fieldNorm(doc=4344)
      0.03815336 = weight(_text_:frequency in 4344) [ClassicSimilarity], result of:
        0.03815336 = score(doc=4344,freq=4.0), product of:
          0.27643865 = queryWeight, product of:
            5.888745 = idf(docFreq=332, maxDocs=44218)
            0.04694356 = queryNorm
          0.13801746 = fieldWeight in 4344, product of:
            2.0 = tf(freq=4.0), with freq of:
              4.0 = termFreq=4.0
            5.888745 = idf(docFreq=332, maxDocs=44218)
            0.01171875 = fieldNorm(doc=4344)
      0.0047701527 = product of:
        0.0095403055 = sum of:
          0.0095403055 = weight(_text_:22 in 4344) [ClassicSimilarity], result of:
            0.0095403055 = score(doc=4344,freq=2.0), product of:
              0.16438834 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04694356 = queryNorm
              0.058035173 = fieldWeight in 4344, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.01171875 = fieldNorm(doc=4344)
        0.5 = coord(1/2)
    
    Footnote
    Review in: JASIST 54(2003) no.9, pp.906-908 (E.G. Ackermann): "In this book Harry Bruce provides a construct or view of "how and why people are using the Internet," which can be used "to inform the design of new services and to augment our usings of the Internet" (pp. viii-ix; see also pp. 183-184). In the process, he develops an analytical tool that I term the Metatheory of Circulating Usings, and provides an impressive distillation of a vast quantity of research data from previous studies. The book's perspective is explicitly user-centered, as is its theoretical bent. The book is organized into a preface, acknowledgments, and five chapters (Chapter 1, "The Internet Story;" Chapter 2, "Technology and People;" Chapter 3, "A Focus on Usings;" Chapter 4, "Users of the Internet;" Chapter 5, "The User's View of the Internet"), followed by an extensive bibliography and short index. Any notes are found at the end of the relevant chapter. The book is illustrated with figures and tables, which are clearly presented and labeled. The text is clearly written in a conversational style, relatively jargon-free, and contains no quantification. The intellectual structure follows that of the book for the most part, with some exceptions. The definitions of several key concepts or terms are scattered throughout the book, often appearing much later, after extensive earlier use. For example, "stakeholders," used repeatedly from p. viii onward, remains undefined until late in the book (pp. 175-176). The study's method is presented in Chapter 3 (p. 34), relatively late in the book. Its metatheoretical basis is developed in two widely separated places (Chapter 3, pp. 56-61, and Chapter 5, pp. 157-159) for no apparent reason. The goal or purpose of presenting the data in Chapter 4 is explained after its presentation (p. 129) rather than earlier with the limits of the data (p. 69). Although none of these problems is crippling to the book, they do introduce an element of unevenness into the flow of the narrative that can confuse the reader and unnecessarily obscure the author's intent. Bruce provides the contextual background of the book in Chapter 1 (The Internet Story) in the form of a brief history of the Internet followed by a brief delineation of the early popular views of the Internet as an information superstructure. His recapitulation of the origins and development of the Internet from its origins as ARPANET in 1957 to 1995 touches on the highlights of this familiar story that will not be retold here. The early popular views or characterizations of the Internet as an "information society" or "information superhighway" revolved primarily around its function as an information infrastructure (p. 13). These views shared three main components (technology, political values, and implied information values) as well as a set of common assumptions. The technology aspect focused on the Internet as a "common ground on which digital information products and services achieve interoperability" (p. 14). The political values provided a "vision of universal access to distributed information resources and the benefits that this will bring to the lives of individual people and to society in general" (p. 14). The implied communication and information values portrayed the Internet as a "medium for human creativity and innovation" (p. 14).
These popular views also assumed that "good decisions arise from good information," that "good democracy is based on making information available to all sectors of society," and that "wisdom is the by-product of effective use of information" (p. 15). Therefore, because the Internet is an information infrastructure, it must be "good and using the Internet will benefit individuals and society in general" (p. 15).
    Chapter 2 (Technology and People) focuses on several theories of technological acceptance and diffusion. Unfortunately, Bruce's presentation is somewhat confusing as he moves from one theory to the next, never quite connecting them into a logical sequence or coherent whole. Two theories are of particular interest to Bruce: the Theory of Diffusion of Innovations and the Theory of Planned Behavior. The Theory of Diffusion of Innovations is an "information-centric view of technology acceptance" in which technology adopters are placed in the information flows of society from which they learn about innovations and "drive innovation adoption decisions" (p. 20). The Theory of Planned Behavior maintains that the "performance of a behavior is a joint function of intentions and perceived behavioral control" (i.e., how much control a person thinks they have) (pp. 22-23). Bruce combines these two theories to form the basis for the Technology Acceptance Model. This model posits that "an individual's acceptance of information technology is based on beliefs, attitudes, intentions, and behaviors" (p. 24). In all these theories and models a recurring theme echoes: "individual perceptions of the innovation or technology are critical" in terms of both its characteristics and its use (pp. 24-25). From these, in turn, Bruce derives a predictive theory of the role personal perceptions play in technology adoption: Personal Innovativeness of Information Technology Adoption (PIITA). Personal innovativeness is defined as "the willingness of an individual to try out any new information technology" (p. 26). In general, the PIITA theory predicts that information technology will be adopted by individuals who have a greater exposure to mass media, rely less on the evaluation of information technology by others, exhibit a greater ability to cope with uncertainty and take risks, and require a less positive perception of an information technology prior to its adoption. Chapter 3 (A Focus on Usings) introduces the User-Centered Paradigm (UCP). The UCP is characteristic of the shift of emphasis from technology to users as the driving force behind technology and research agendas for Internet development [for a dissenting view, see Andrew Dillon's (2003) challenge to the utility of user-centeredness for design guidance]. It entails the "broad acceptance of the user-oriented perspective across a range of disciplines and professional fields," such as business, education, cognitive engineering, and information science (p. 34).
    The UCP's effect on business practices is focused mainly in the management and marketing areas. Marketing experienced a shift from "product-oriented operations," with their focus on "selling the products' features" and customer contact only at the point of sale, toward more service-centered business practice ("customer demand orientation") and the development of one-to-one customer relationships (pp. 35-36). For management, the adoption of the UCP caused a shift from "mechanistic, bureaucratic, top-down organizational structures" to "flatter, inclusive, and participative" ones (p. 37). In education, practice shifted from the teacher-centered model, where the "teacher is responsible for and makes all the decisions related to the learning environment," to a learner-centered model, where the student is "responsible for his or her own learning" and the teacher focuses on "matching learning events to the individual skills, aptitudes, and interests of the individual learner" (pp. 38-39). Cognitive engineering saw the rise of "user-centered design" and human factors that were concerned with applying "scientific knowledge of humans to the design of man-machine interface systems" (p. 44). The UCP had a great effect on information science in the "design of information systems" (p. 47). Prior to the UCP's explicit proposal by Brenda Dervin and M. Nilan in 1986, systems design was dominated by the "physical or system-oriented paradigm" (p. 48). The physical paradigm held a positivistic and materialistic view of technology and (passive) human interaction, as exemplified by the 1953 Cranfield tests of information retrieval mechanisms. Instead, the UCP focuses on "users rather than systems" by making the perceptions of individual information users the "centerpiece consideration for information service and system design" (pp. 47-48). Bruce briefly touches on the various schools of thought within the user-oriented paradigm, such as the cognitive/self studies approach with its emphasis on an individual's knowledge structures or model of the world [e.g., Belkin (1990)], the cognitive/context studies approach that focuses on "context in explaining variations in information behavior" [e.g., Savolainen (1995) and Dervin's (1999) sense-making], and the social constructionism/discourse analytic theory with its focus on language, not mental/knowledge constructs, as the primary shaper of the world as a system of intersubjective meanings [e.g., Talja (1996)] (pp. 53-54). Drawing from the rich tradition of user-oriented research, Bruce attempts to gain a metatheoretical understanding of the Internet as a phenomenon by combining Dervin's (1996) "micromoments of human usings" with the French philosopher Bruno Latour's (1999) "conception of circulating reference" to form what I term the Metatheory of Circulating Usings (pp. ix, 56, 60). According to Bruce, Latour's concept is designed to bridge "the gap between mind and object" by engaging in a "succession of finely grained transformations that construct and transfer truth about the object" through a chain of "microtranslations" from "matter to form," thereby connecting mind and object (p. 56). The connection works as long as the chain remains unbroken.
The nature of this chain of "information producing translations" is such that as one moves away from the object, one experiences a "reduction" of the object's "locality, particularity, materiality, multiplicity and continuity," while simultaneously gaining the "amplification" of its "compatibility, standardization, text, calculation, circulation, and relative universality" (p. 57).
    Bruce points out that Dervin is also concerned about how "we look at the world" in terms of "information needs and seeking" (p. 60). She maintains that information scientists traditionally view information seeking and needs in terms of "contexts, users, and systems." Dervin questions whether or not, from a user's point of view, these three "points of interest" even exist. Rather, it is the "micromoments of human usings" [emphasis original], and the "world viewings, seekings, and valuings" that comprise them, that are real (p. 60). Using his metatheory, Bruce represents the Internet, the "object" of study, as a "chain of transformations made up of the micromoments of human usings" (p. 60). The Internet then is a "composite of usings" that, through research and study, is continuously reduced in complexity while its "essence" and "explanation" are amplified (p. 60). Bruce plans to use the Metatheory of Circulating Usings as an analytical "lens" to "tease out a characterization of the micromoments of Internet usings" from previous research on the Internet, thereby exposing "the user's view of the Internet" (pp. 60-61). In Chapter 4 (Users of the Internet), Bruce presents the research data for the study. He begins with an explanation of the limits of the data, and to a certain extent, the study itself. The perspective is that of the Internet user, with a focus on use, not nonuse, thereby excluding issues such as the digital divide and universal service. The research is limited to Internet users "in modern economies around the world" (p. 60). The data is a synthesis of research from many disciplines, but mainly from those "associated with the information field," with its traditional focus on users, systems, and context rather than usings (p. 70). Bruce then presents an extensive summary of the research results from a massive literature review of available Internet studies. He examines the research for each study group in order of the amount of data available, starting with the most studied group, professional users ("academics, librarians, and teachers"), followed by "the younger generation" ("college students, youths, and young adults"), users of e-government information and e-business services, and ending with the general public (the least studied group) (p. 70). Bruce does a masterful job of condensing and summarizing a vast amount of research data in 49 pages. Although there is too much to recapitulate here, one can get a sense of the results by looking at the areas of data examined for one of the study groups: academic Internet users. There is data on their frequency of use, reasons for nonuse, length of use, specific types of use (e.g., research, teaching, administration), use of discussion lists, use of e-journals, use of Web browsers and search engines, how academics learn to use web tools and services (mainly by self-instruction), factors affecting use, and information seeking habits. Bruce's goal in presenting all this research data is to provide "the foundation for constructs of the Internet that can inform stakeholders who will play a role in determining how the Internet will develop" (p. 129). These constructs are presented in Chapter 5.
    Bruce begins Chapter 5 (The Users' View of the Internet) by pointing out that the Internet not only exists as a physical entity of hardware, software, and networked connectivity, but also as a mental representation or knowledge structure constructed by users based on their usings. These knowledge structures or constructs "allow people to interpret and make sense of things" by functioning as a link between the new unknown thing and known thing(s) (p. 158). The knowledge structures or using constructs are continually evolving as people use the Internet over time, and represent the user's view of the Internet. To capture the users' view of the Internet from the research literature, Bruce uses his Metatheory of Circulating Usings. He recapitulates the theory, casting it more closely to the study of Internet use than previously. Here the reduction component provides a more detailed "understanding of the individual users involved in the micromoment of Internet using," while simultaneously the amplification component increases our understanding of the "generalized construct of the Internet" (p. 158). From this point on, Bruce presents a relatively detailed users' view of the Internet. He starts by examining Internet usings, which are composed of three parts: using space, using literacies, and Internet space. According to Bruce, using space is a using horizon likened to a "sphere of influence," comfortable and intimate, in which an individual interacts with the Internet successfully (p. 164). It is a "composite of individual (professional nonwork) constructs of Internet utility" (p. 165). Using literacies are the groups of skills or tools that an individual must acquire for successful interaction with the Internet. These literacies serve to link the using space with the Internet space. They are usually self-taught and form individual standards of successful or satisfactory usings that can be (and often are) at odds with the standards of the information profession. Internet space is, according to Bruce, a user construct that perceives the Internet as a physical, tangible place separate from using space. Bruce concludes that the user's view of the Internet explains six "principles" (p. 173): "Internet using is proof of concept" and occurs in contexts; using space is created through using frequency; individuals use literacies to explore and utilize Internet space; Internet space "does not require proof of concept, and is often influenced by the perceptions and usings of others;" and "the user's view of the Internet is upbeat and optimistic" (pp. 173-175). He ends with a section describing who the Internet stakeholders are. Bruce defines them as Internet hardware/software developers, professional users practicing their profession in both familiar and transformational ways, and individuals using the Internet "for the tasks and pleasures of everyday life" (p. 176).
  5. Anderson, J.D.; Perez-Carballo, J.: Information retrieval design : principles and options for information description, organization, display, and access in information retrieval databases, digital libraries, catalogs, and indexes (2005) 0.07
    0.06586234 = product of:
      0.08781646 = sum of:
        0.004161717 = product of:
          0.016646868 = sum of:
            0.016646868 = weight(_text_:based in 1833) [ClassicSimilarity], result of:
              0.016646868 = score(doc=1833,freq=4.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.11769507 = fieldWeight in 1833, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1833)
          0.25 = coord(1/4)
        0.028230337 = weight(_text_:term in 1833) [ClassicSimilarity], result of:
          0.028230337 = score(doc=1833,freq=2.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.12888208 = fieldWeight in 1833, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1833)
        0.055424403 = sum of:
          0.039523892 = weight(_text_:assessment in 1833) [ClassicSimilarity], result of:
            0.039523892 = score(doc=1833,freq=2.0), product of:
              0.25917634 = queryWeight, product of:
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.04694356 = queryNorm
              0.15249807 = fieldWeight in 1833, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.01953125 = fieldNorm(doc=1833)
          0.015900511 = weight(_text_:22 in 1833) [ClassicSimilarity], result of:
            0.015900511 = score(doc=1833,freq=2.0), product of:
              0.16438834 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04694356 = queryNorm
              0.09672529 = fieldWeight in 1833, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.01953125 = fieldNorm(doc=1833)
      0.75 = coord(3/4)
    
    Content
    Contents: Chapters 2 to 5: Scopes, Domains, and Display Media (pp. 47-102); Chapters 6 to 8: Documents, Analysis, and Indexing (pp. 103-176); Chapters 9 to 10: Exhaustivity and Specificity (pp. 177-196); Chapters 11 to 13: Displayed/Nondisplayed Indexes, Syntax, and Vocabulary Management (pp. 197-364); Chapters 14 to 16: Surrogation, Locators, and Surrogate Displays (pp. 365-390); Chapters 17 and 18: Arrangement and Size of Displayed Indexes (pp. 391-446); Chapters 19 to 21: Search Interface, Record Format, and Full-Text Display (pp. 447-536); Chapter 22: Implementation and Evaluation (pp. 537-541)
    Footnote
    Review in: JASIST 57(2006) no.10, pp.1412-1413 (R.W. White): "Information Retrieval Design is a textbook that aims to foster the intelligent user-centered design of databases for Information Retrieval (IR). The book outlines a comprehensive set of 20 factors, chosen based on prior research and the authors' experiences, that need to be considered during the design process. The authors provide designers with information on those factors to help optimize decision making. The book does not cover user-needs assessment, implementation of IR databases or retrieval systems, testing, or evaluation. Most textbooks in IR do not offer a substantive walkthrough of the design factors that need to be considered when developing IR databases. Instead, they focus on issues such as the implementation of data structures, the explanation of search algorithms, and the role of human-machine interaction in the search process. The book touches on all three, but its focus is on designing databases that can be searched effectively, not the tools to search them. This is an important distinction: despite its title, this book does not describe how to build retrieval systems. Professor Anderson utilizes his wealth of experience in cataloging and classification to bring a unique perspective on IR database design that may be useful for novices, for developers seeking to make sense of the design process, and for students as a text to supplement classroom tuition. The foreword and preface, by Jessica Milstead and James Anderson, respectively, are engaging and worthwhile reading. It is astounding that it has taken some 20 years for anyone to continue the work of Milstead and write as extensively as Anderson does about such an important issue as IR database design. The remainder of the book is divided into two parts: Introduction and Background Issues, and Design Decisions. Part 1 is a reasonable introduction and includes a glossary of the terminology that the authors use in the book. It is very helpful to have these definitions early on, but the subject descriptors in the right margin are distracting and do not serve their purpose as access points to the text. The terminology is useful to have, as the authors' definitions of concepts do not fit exactly with what is traditionally accepted in IR. For example, they use the term "message" to refer to what would normally be called "document" or "information object," and do not do a good job at distinguishing between "messages" and "documentary units". Part 2 describes components and attributes of IR databases to help designers make design choices. The book provides them with information about the potential ramifications of their decisions and advocates a user-oriented approach to making them. Chapters are arranged in a seemingly sensible order based around these factors, and the authors remind us of the importance of integrating them. The authors are skilled at selecting the important factors in the development of seemingly complex entities such as IR databases; however, the integration of these factors, or the interaction between them, is not handled as well as perhaps it should be. Factors are presented in the order in which the authors feel they should be addressed, but there is no chapter describing how the factors interact. The authors miss an opportunity at the beginning of Part 2, where they could have illustrated with a figure the interactions between the 20 factors they list in a way that is not possible with the linear structure of the book.
  6. Spitzer, K.L.; Eisenberg, M.B.; Lowe, C.A.: Information literacy : essential skills for the information age (2004) 0.06
    0.06028825 = product of:
      0.08038434 = sum of:
        0.004161717 = product of:
          0.016646868 = sum of:
            0.016646868 = weight(_text_:based in 3686) [ClassicSimilarity], result of:
              0.016646868 = score(doc=3686,freq=4.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.11769507 = fieldWeight in 3686, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=3686)
          0.25 = coord(1/4)
        0.056460675 = weight(_text_:term in 3686) [ClassicSimilarity], result of:
          0.056460675 = score(doc=3686,freq=8.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.25776416 = fieldWeight in 3686, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.01953125 = fieldNorm(doc=3686)
        0.019761946 = product of:
          0.039523892 = sum of:
            0.039523892 = weight(_text_:assessment in 3686) [ClassicSimilarity], result of:
              0.039523892 = score(doc=3686,freq=2.0), product of:
                0.25917634 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.04694356 = queryNorm
                0.15249807 = fieldWeight in 3686, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=3686)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Footnote
    Review in: JASIST 56(2005) no.9, pp.1008-1009 (D.E. Agosto): "This second edition of Information Literacy: Essential Skills for the Information Age remains true to the first edition (published in 1998). The main changes involved the updating of educational standards discussed in the text, as well as the updating of the history of the term. Overall, this book serves as a detailed definition of the concept of information literacy and focuses heavily on presenting and discussing related state and national educational standards and policies. It is divided into 10 chapters, many of which contain examples of U.S. and international information literacy programs in a variety of educational settings. Chapter one offers a detailed definition of information literacy, as well as tracing the derivation of the term. The term was first introduced in 1974 by Paul Zurkowski in a proposal to the National Commission on Libraries and Information Science. Fifteen years later a special ALA committee derived the now generally accepted definition: "To be information literate requires a new set of skills. These include how to locate and use information needed for problem-solving and decision-making efficiently and effectively" (American Library Association, 1989, p. 11). Definitions for a number of related concepts are also offered, including definitions for visual literacy, media literacy, computer literacy, digital literacy, and network literacy. Although the authors do define these different subtypes of information literacy, they sidestep the argument over the definition of the more general term literacy, consequently avoiding the controversy over national and world illiteracy rates. Regardless of the actual rate of U.S. literacy (which varies radically with each different definition of "literacy"), basic literacy, i.e., basic reading and writing skills, still presents a formidable educational goal in the U.S. In fact, more than 5 million high-schoolers do not read well enough to understand their textbooks or other material written for their grade level. According to the National Assessment of Educational Progress, 26% of these students cannot read material many of us would deem essential for daily living, such as road signs, newspapers, and bus schedules. (Hock & Deshler, 2003, p. 27)
    Chapter two delves more deeply into the historical evolution of the concept of information literacy, and chapter three summarizes selected information literacy research. Researchers generally agree that information literacy is a process, rather than a set of skills to be learned (despite the unfortunate use of the word "skills" in the ALA definition). Researchers also generally agree that information literacy should be taught across the curriculum, as opposed to limiting it to the library or any other single educational context or discipline. Chapter four discusses economic ties to information literacy, suggesting that countries with information literate populations will better succeed economically in the current and future information-based world economy. A recent report issued by the Basic Education Coalition, an umbrella group of 19 private and nongovernmental development and relief organizations, supports this claim based on a meta-analysis of large bodies of data collected by the World Bank, the United Nations, and other international organizations. Teach a Child, Transform a Nation (Basic Education Coalition, 2004) concluded that no modern nation has achieved sustained economic growth without providing near universal basic education for its citizens. It also concluded that countries that improve their literacy rates by 20 to 30% see subsequent GDP increases of 8 to 16%. In light of the Coalition's finding that one fourth of adults in the world's developing countries are unable to read or write, the goal of worldwide information literacy seems sadly unattainable for the present, a present in which even universal basic literacy is still a pipedream. Chapter five discusses information literacy across the curriculum as an interpretation of national standards. The many examples of school and university information literacy programs, standards, and policies detailed throughout the volume would be very useful to educators and administrators engaging in program planning and review. For example, the authors explain that the economics standards included in the Goals 2000: Educate America Act comprise 20 benchmark content standards. They quote a two-pronged grade 12 benchmark that first entails students being able to discuss how a high school senior's working 20 hours a week while attending school might result in a reduced overall lifetime income, and second requires students to be able to describe how increasing the federal minimum wage might result in reduced income for some workers. The authors tie this benchmark to information literacy as follows: "Economic decision making requires complex thinking skills because the variables involved are interdependent.
  7. Jacquemin, C.: Spotting and discovering terms through natural language processing (2001) 0.04
    0.044085447 = product of:
      0.08817089 = sum of:
        0.008323434 = product of:
          0.033293735 = sum of:
            0.033293735 = weight(_text_:based in 119) [ClassicSimilarity], result of:
              0.033293735 = score(doc=119,freq=4.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.23539014 = fieldWeight in 119, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=119)
          0.25 = coord(1/4)
        0.07984746 = weight(_text_:term in 119) [ClassicSimilarity], result of:
          0.07984746 = score(doc=119,freq=4.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.3645336 = fieldWeight in 119, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.0390625 = fieldNorm(doc=119)
      0.5 = coord(2/4)
    
    Abstract
    In this book Christian Jacquemin shows how the power of natural language processing (NLP) can be used to advance text indexing and information retrieval (IR). Jacquemin's novel tool is FASTR, a parser that normalizes terms and recognizes term variants. Since there are more meanings in a language than there are words, FASTR uses a metagrammar composed of shallow linguistic transformations that describe the morphological, syntactic, semantic, and pragmatic variations of words and terms. The acquired parsed terms can then be applied for precise retrieval and assembly of information. The use of a corpus-based unification grammar to define, recognize, and combine term variants from their base forms allows for intelligent information access to, or "linguistic data tuning" of, heterogeneous texts. FASTR can be used to do automatic controlled indexing, to carry out content-based Web searches through conceptually related alternative query formulations, to abstract scientific and technical extracts, and even to translate and collect terms from multilingual material. Jacquemin provides a comprehensive account of the method and implementation of this innovative retrieval technique for text processing.
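
    To make the idea concrete, the toy Python sketch below shows the flavor of such shallow variant matching: a controlled two-word term is expanded into naive morphological and syntactic variant patterns and spotted in running text. This is only an illustration under our own simplifying assumptions (a crude suffix-stripping stemmer and three regex-encoded variant classes); it is not FASTR's metagrammar, coverage, or API.

```python
import re

def stem(word: str) -> str:
    """Hypothetical stemmer: crude suffix stripping stands in for real morphology."""
    return re.sub(r"(ing|al|s)$", "", word.lower())

def variant_patterns(term: str):
    """Naive variant patterns for a two-word term like 'information retrieval'."""
    w1, w2 = term.split()
    s1, s2 = stem(w1), stem(w2)
    yield rf"\b{s1}\w* {s2}\w*\b"              # morphological: "information retrieving"
    yield rf"\b{s2}\w* of (?:\w+ )?{s1}\w*\b"  # syntactic: "retrieval of (textual) information"
    yield rf"\b{s1}\w* \w+ {s2}\w*\b"          # insertion: a modifier between the two words

def spot(term: str, text: str):
    """Return all spans of text that realize one of the term's variants."""
    text = text.lower()
    return [m.group(0) for p in variant_patterns(term) for m in re.finditer(p, text)]

print(spot("information retrieval",
           "Jacquemin studies retrieval of textual information and information retrieving."))
# -> ['information retrieving', 'retrieval of textual information']
```

    Per the abstract, FASTR itself goes well beyond such toy patterns: its metagrammar of shallow transformations operates over a corpus-based unification grammar, which is what allows it to cover semantic and pragmatic variants as well as the morphological and syntactic ones sketched here.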
  8. Vocabulary as a central concept in digital libraries : interdisciplinary concepts, challenges, and opportunities : proceedings of the Third International Conference on Conceptions of Library and Information Science (COLIS3), Dubrovnik, Croatia, 23-26 May 1999 (1999) 0.04
    0.043642364 = product of:
      0.08728473 = sum of:
        0.00823978 = product of:
          0.03295912 = sum of:
            0.03295912 = weight(_text_:based in 3850) [ClassicSimilarity], result of:
              0.03295912 = score(doc=3850,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.23302436 = fieldWeight in 3850, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3850)
          0.25 = coord(1/4)
        0.079044946 = weight(_text_:term in 3850) [ClassicSimilarity], result of:
          0.079044946 = score(doc=3850,freq=2.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.36086982 = fieldWeight in 3850, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3850)
      0.5 = coord(2/4)
    
    Content
    Includes, among others, the following contributions: Pharo, N.: Web information search strategies: a model for classifying Web interaction; Wang, Z., L.L. Hill and T.R. Smith: Alexandria Digital Library metadata creator based on extensible markup language; Reid, J.: A new, task-oriented paradigm for information retrieval: implications for evaluation of information retrieval systems; Ornager, S.: Image archives in newspaper editorial offices: a service activity; Ruthven, I., M. Lalmas: Selective relevance feedback using term characteristics
  9. Stacey, Alison; Stacey, Adrian: Effective information retrieval from the Internet : an advanced user's guide (2004) 0.04
    0.04147133 = product of:
      0.08294266 = sum of:
        0.0047084456 = product of:
          0.018833783 = sum of:
            0.018833783 = weight(_text_:based in 4497) [ClassicSimilarity], result of:
              0.018833783 = score(doc=4497,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.13315678 = fieldWeight in 4497, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4497)
          0.25 = coord(1/4)
        0.07823421 = weight(_text_:term in 4497) [ClassicSimilarity], result of:
          0.07823421 = score(doc=4497,freq=6.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.35716853 = fieldWeight in 4497, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.03125 = fieldNorm(doc=4497)
      0.5 = coord(2/4)
    
    Content
    Key Features: - Importantly, the book enables readers to develop strategies which will continue to be useful despite the rapidly-evolving state of the Internet and Internet technologies - it is not about technological 'tricks'. - Enables readers to be aware of and compensate for bias and errors which are ubiquitous on the Internet. - Provides contemporary information on the deficiencies in the web skills of novice users as well as practical techniques for teaching such users.
    The Authors: Dr Alison Stacey works at the Learning Resource Centre, Cambridge Regional College. Dr Adrian Stacey, formerly based at Cambridge University, is a software programmer.
    Readership: The book is aimed at a wide range of librarians and other information professionals who need to retrieve information from the Internet efficiently, to evaluate their confidence in the information they retrieve and/or to train others to use the Internet. It is primarily aimed at intermediate to advanced users of the Internet.
    Contents: Fundamentals of information retrieval from the Internet - why learn web searching technique; types of information requests; patterns for information retrieval; leveraging the technology. Search term choice: pinpointing information on the web - why choose queries carefully; making search terms work together; how to pick search terms; finding the 'unfindable'. Bias on the Internet - importance of bias; sources of bias; user-generated bias: selecting information with which you already agree; assessing and compensating for bias; case studies. Query reformulation and longer-term strategies - how to interact with your search engine; foraging for information; long-term information retrieval: using the Internet to find trends; automating searches: how to make your machine do your work. Assessing the quality of results - how to assess and ensure quality. The novice user and teaching internet skills - novice users and their problems with the web; case study: research in a college library; interpreting 'second hand' web information.
  10. Gossen, T.: Search engines for children : search user interfaces and information-seeking behaviour (2016) 0.04
    0.040857024 = product of:
      0.08171405 = sum of:
        0.00411989 = product of:
          0.01647956 = sum of:
            0.01647956 = weight(_text_:based in 2752) [ClassicSimilarity], result of:
              0.01647956 = score(doc=2752,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.11651218 = fieldWeight in 2752, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=2752)
          0.25 = coord(1/4)
        0.07759416 = sum of:
          0.055333447 = weight(_text_:assessment in 2752) [ClassicSimilarity], result of:
            0.055333447 = score(doc=2752,freq=2.0), product of:
              0.25917634 = queryWeight, product of:
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.04694356 = queryNorm
              0.2134973 = fieldWeight in 2752, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.02734375 = fieldNorm(doc=2752)
          0.022260714 = weight(_text_:22 in 2752) [ClassicSimilarity], result of:
            0.022260714 = score(doc=2752,freq=2.0), product of:
              0.16438834 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04694356 = queryNorm
              0.1354154 = fieldWeight in 2752, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.02734375 = fieldNorm(doc=2752)
      0.5 = coord(2/4)
    
    Abstract
    The doctoral thesis of Tatiana Gossen formulates criteria and guidelines on how to design the user interfaces of search engines for children. In her work, the author identifies the conceptual challenges based on her own and previous user studies and addresses the changing characteristics of the users by providing a means of adaptation. Additionally, a novel type of search result visualisation for children with cartoon-style characters is developed, taking children's preference for visual information into account.
    Content
    Contents: Acknowledgments; Abstract; Zusammenfassung; Contents; List of Figures; List of Tables; List of Acronyms; Chapter 1 Introduction; 1.1 Research Questions; 1.2 Thesis Outline; Part I Fundamentals; Chapter 2 Information Retrieval for Young Users; 2.1 Basics of Information Retrieval; 2.1.1 Architecture of an IR System; 2.1.2 Relevance Ranking; 2.1.3 Search User Interfaces; 2.1.4 Targeted Search Engines; 2.2 Aspects of Child Development Relevant for Information Retrieval Tasks; 2.2.1 Human Cognitive Development; 2.2.2 Information Processing Theory; 2.2.3 Psychosocial Development; 2.3 User Studies and Evaluation; 2.3.1 Methods in User Studies; 2.3.2 Types of Evaluation; 2.3.3 Evaluation with Children; 2.4 Discussion; Chapter 3 State of the Art; 3.1 Children's Information-Seeking Behaviour; 3.1.1 Querying Behaviour; 3.1.2 Search Strategy; 3.1.3 Navigation Style; 3.1.4 User Interface; 3.1.5 Relevance Judgement; 3.2 Existing Algorithms and User Interface Concepts for Children; 3.2.1 Query; 3.2.2 Content; 3.2.3 Ranking; 3.2.4 Search Result Visualisation; 3.3 Existing Information Retrieval Systems for Children; 3.3.1 Digital Book Libraries; 3.3.2 Web Search Engines; 3.4 Summary and Discussion; Part II Studying Open Issues; Chapter 4 Usability of Existing Search Engines for Young Users; 4.1 Assessment Criteria; 4.1.1 Criteria for Matching the Motor Skills; 4.1.2 Criteria for Matching the Cognitive Skills; 4.2 Results; 4.2.1 Conformance with Motor Skills; 4.2.2 Conformance with the Cognitive Skills; 4.2.3 Presentation of Search Results; 4.2.4 Browsing versus Searching; 4.2.5 Navigational Style; 4.3 Summary and Discussion; Chapter 5 Large-scale Analysis of Children's Queries and Search Interactions; 5.1 Dataset; 5.2 Results; 5.3 Summary and Discussion; Chapter 6 Differences in Usability and Perception of Targeted Web Search Engines between Children and Adults; 6.1 Related Work; 6.2 User Study; 6.3 Study Results; 6.4 Summary and Discussion; Part III Tackling the Challenges; Chapter 7 Search User Interface Design for Children; 7.1 Conceptual Challenges and Possible Solutions; 7.2 Knowledge Journey Design; 7.3 Evaluation; 7.3.1 Study Design; 7.3.2 Study Results; 7.4 Voice-Controlled Search: Initial Study; 7.4.1 User Study; 7.5 Summary and Discussion; Chapter 8 Addressing User Diversity; 8.1 Evolving Search User Interface; 8.1.1 Mapping Function; 8.1.2 Evolving Skills; 8.1.3 Detection of User Abilities; 8.1.4 Design Concepts; 8.2 Adaptation of a Search User Interface towards User Needs; 8.2.1 Design & Implementation; 8.2.2 Search Input; 8.2.3 Result Output; 8.2.4 General Properties; 8.2.5 Configuration and Further Details; 8.3 Evaluation; 8.3.1 Study Design; 8.3.2 Study Results; 8.3.3 Preferred UI Settings; 8.3.4 User Satisfaction; 8.4 Knowledge Journey Exhibit; 8.4.1 Hardware; 8.4.2 Frontend; 8.4.3 Backend; 8.5 Summary and Discussion; Chapter 9 Supporting Visual Searchers in Processing Search Results; 9.1 Related Work
    Date
    1. 2.2016 18:25:22
  11. Gerzymisch-Arbogast, H.: Termini im Kontext : Verfahren zur Erschließung und Übersetzung der textspezifischen Bedeutung von fachlichen Ausdrücken (1996) 0.04
    0.039117105 = product of:
      0.15646842 = sum of:
        0.15646842 = weight(_text_:term in 14) [ClassicSimilarity], result of:
          0.15646842 = score(doc=14,freq=6.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.71433705 = fieldWeight in 14, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.0625 = fieldNorm(doc=14)
      0.25 = coord(1/4)
    
    Content
    Contains the chapters: On the status of the term as a systematic unit; the context-specific term model: theory and exemplifying application; theoretical differentiations and application problems; the ideally used term and possible contaminations in the context; naming contaminations; conceptual contaminations; one-dimensional and multidimensional contaminations in context; on the translation of terms in context
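    A note on the relevance figures: the explanation tree attached to each hit (see item 11 above) follows the TF-IDF scheme of Lucene's ClassicSimilarity. As a reading aid, the following sketch recomputes item 11's score from the values printed in its tree; the way the factors combine is our reconstruction from Lucene's documentation, not something stated on this page.

    import math

    # Values copied from the explanation tree of item 11 (doc=14).
    freq       = 6.0         # occurrences of "term" in the matched field
    doc_freq   = 1130        # documents containing "term"
    max_docs   = 44218       # documents in the index
    query_norm = 0.04694356  # normalisation constant printed in the tree
    field_norm = 0.0625      # stored length norm of the field

    tf  = math.sqrt(freq)                              # 2.4494898
    idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))  # 4.66603

    query_weight = idf * query_norm             # 0.21904005
    field_weight = tf * idf * field_norm        # 0.71433705
    weight       = query_weight * field_weight  # 0.15646842

    coord = 1 / 4  # only 1 of the 4 query clauses matched this record
    print(weight * coord)  # ~0.039117105, the score shown next to item 11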
  12. Rockman, I.F.: Strengthening connections between information literacy, general education, and assessment efforts (2002) 0.04
    0.0385312 = product of:
      0.0770624 = sum of:
        0.009988121 = product of:
          0.039952483 = sum of:
            0.039952483 = weight(_text_:based in 45) [ClassicSimilarity], result of:
              0.039952483 = score(doc=45,freq=4.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.28246817 = fieldWeight in 45, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.046875 = fieldNorm(doc=45)
          0.25 = coord(1/4)
        0.06707428 = product of:
          0.13414855 = sum of:
            0.13414855 = weight(_text_:assessment in 45) [ClassicSimilarity], result of:
              0.13414855 = score(doc=45,freq=4.0), product of:
                0.25917634 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.04694356 = queryNorm
                0.51759565 = fieldWeight in 45, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.046875 = fieldNorm(doc=45)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Academic librarians have a long and rich tradition of collaborating with discipline-based faculty members to advance the mission and goals of the library. Included in this tradition is the area of information literacy, a foundation skill for academic success and a key component of independent, lifelong learning. With the general education reform movement resurfacing on many campuses over the last decade, libraries have been able to move beyond course-integrated library instruction into a formal planning role for general education programmatic offerings. This article shows the value of 1. strategic alliances, developed over time, to establish information literacy as a foundation for student learning; 2. strong partnerships within a multicampus higher education system to promote and advance information literacy efforts; and 3. assessment as a key component of outcomes-based information literacy activities.
  13. Nuovo soggettario : guida al sistema italiano di indicizzazione per soggetto, prototipo del thesaurus (2007) 0.03
    0.03457108 = product of:
      0.06914216 = sum of:
        0.005264202 = product of:
          0.021056809 = sum of:
            0.021056809 = weight(_text_:based in 664) [ClassicSimilarity], result of:
              0.021056809 = score(doc=664,freq=10.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.1488738 = fieldWeight in 664, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.015625 = fieldNorm(doc=664)
          0.25 = coord(1/4)
        0.06387796 = weight(_text_:term in 664) [ClassicSimilarity], result of:
          0.06387796 = score(doc=664,freq=16.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.29162687 = fieldWeight in 664, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.015625 = fieldNorm(doc=664)
      0.5 = coord(2/4)
    
    Footnote
    Rez. in: Knowledge organization 34(2007) no.1, S.58-60 (P. Buizza): "This Nuovo soggettario is the first sign of subject indexing renewal in Italy. Italian subject indexing has been based until now on Soggettario per i cataloghi delle biblioteche italiane (Firenze, 1956), a list of preferred terms and see references, with suitable hierarchical subdivisions and cross references, derived from the subject catalogue of the National Library in Florence (BNCF). New headings later used in Bibliografia nazionale italiana (BNI) were added without references, nor indeed with any real maintenance. Systematic instructions on how to combine the terms are lacking: the indexer using this instrument is obliged to infer the order of terms absent from the lists by consulting analogous entries. Italian libraries are suffering from the limits of this subject catalogue: vocabulary is inadequate, obsolete and inconsistent, the syndetic structure incomplete and inaccurate, and the syntax ill-defined, poorly explained and unable to reflect complex subjects. In the nineties, the Subject Indexing Research Group (Gruppo di ricerca sull'indicizzazione per soggetto, GRIS) of the AIB (Italian Library Association) developed the indexing theory and some principles of PRECIS and drew up guidelines based on consistent principles for vocabulary, semantic relationships and subject string construction, the latter according to role syntax (Guida 1997). In overhauling the Soggettario, the National Library in Florence aimed at a comprehensive indexing system. A report on the method and evolution of the work has been published in Knowledge Organization (Lucarelli 2005), while the feasibility study is available in Italian (Per un nuovo Soggettario 2002). Any usable terms from the old Soggettario will be transferred to the new system, while taking into consideration international norms and interlinguistic compatibility, as well as applications outside the immediate library context. The terms will be accessible via a suitable OPAC operating on the most advanced software.
    The guide Nuovo soggettario was presented on February 8, 2007 at a one-day seminar in the Palazzo Vecchio, Florence, in front of some 500 spellbound people. The Nuovo soggettario comes in two parts: the guide in book-form and an accompanying CD-ROM, by way of which a prototype of the thesaurus may be accessed on the Internet. In the former, rules are stated; the latter contains a pdf version of the guide and the first installment of the controlled vocabulary, which is to be further enriched and refined. Syntactic instructions (general application guidelines, as well as special annotations of particular terms) and the compiled subject strings file have yet to be added. The essentials of the new system are: 1) an analytic-synthetic approach, 2) use of terms (units of controlled vocabulary) and subject strings (which represent subjects by combining terms in linear order to form syntactic relationships), instead of main headings and subdivisions, 3) specificity of terms and strings, with a view to the co-extension of subject string and subject matter and 4) a clear distinction between semantic and syntactic relationships, with full control of them both. Basic features of the vocabulary include the uniformity and univocality of terms and thesaural management of a priori (semantic) relationships. Starting from its definition, each term can be categorially analyzed: four macro-categories are represented (agents, action, things, time), for which there are subcategories called facets (e.g., for actions: activities, disciplines, processes), which in turn have sub-facets. Morphological instructions conform to national and international standards, including BS 8723, ANSI/NISO Z39.19 and the IFLA draft of Guidelines for multilingual thesauri, even for syntactic factorization. Different kinds of semantic relationships are represented thoroughly, and particular attention is paid to poly-hierarchies, which are used only in moderation: both top terms must actually be relevant. Node labels are used to specify the principle of division applied. Instance relationships are also used.
    An entry is structured so as to present all the essential elements of the indexing system. For each term are given: category, facet, related terms, Dewey interdisciplinary class number and, if necessary, definition or scope notes. Sources used are referenced (an appendix in the book lists those used in the current work). Historical notes indicate whenever a change of term has occurred, thus smoothing the transition from the old lists. In chapter 5, the longest one, detailed instructions with practical examples show how to create entries and how to relate terms; upper relationships must always be complete, right up to the top term, whereas hierarchies of related terms not yet fully developed may remain unfinished. Subject string construction consists of a double operation: analysis and synthesis. The former is the analysis of the logical functions performed by single concepts in the definition of the subject (e.g., transitive actions, object, agent, etc.) or in syntactic relationships (transitive relationships and belonging relationship), so that each term for those concepts is assigned its role (e.g., key concept, transitive element, agent, instrument, etc.) in the subject string, where the core is distinct from the complementary roles (e.g., place, time, form, etc.). Synthesis is based on a scheme of nuclear and complementary roles, and citation order follows agreed-upon principles of one-to-one relationships and logical dependence. There is no rigid, facet-based categorial citation order, but a flexible yet thorough one. For example, it is possible for a time term (subdivision) to precede an action term, when the former is related to the latter as the object of the action: "Arazzi - Sec. 16.-17. - Restauro" [Tapestry - 16th-17th century - Restoration] (p. 126). So, even with more complex subjects, it is possible to produce perfectly readable strings covering the whole of the subject matter without splitting it into two incomplete and complementary headings. To this end, some unusual connectives are adopted, giving the strings a more discursive style.
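    The entry structure and the two-step string construction just described can be pictured with a small data model. The sketch below is our own illustration, not code from the Nuovo soggettario project; the role names and the citation rule are simplified assumptions built around the example string quoted above.

    from dataclasses import dataclass, field

    @dataclass
    class Term:
        name: str
        category: str   # one of the four macro-categories
        facet: str
        broader: list = field(default_factory=list)  # BT chain, up to the top term
        related: list = field(default_factory=list)  # RT
        dewey: str = ""                              # interdisciplinary DDC number

    # Roles assigned during analysis; synthesis cites them in this order.
    # (A deliberate simplification of the real nuclear/complementary scheme:
    # here time precedes the action because it is the object of the action.)
    CITATION_ORDER = ["key concept", "time", "transitive action"]

    def build_subject_string(analysed: dict) -> str:
        """Synthesise a subject string from role-tagged terms."""
        parts = [analysed[role].name for role in CITATION_ORDER if role in analysed]
        return " - ".join(parts)

    tapestry = Term("Arazzi", "things", "objects")
    period   = Term("Sec. 16.-17.", "time", "periods")
    restore  = Term("Restauro", "action", "activities")

    print(build_subject_string({
        "key concept": tapestry,
        "time": period,
        "transitive action": restore,
    }))  # Arazzi - Sec. 16.-17. - Restauro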
    Thesaurus software is based on AgroVoc (http://www.fao.org/aims/ag_intro.htm), provided by the FAO, but in modified form. Many searching options and contextualization within the full hierarchies are possible, so that the choice of morphology and syntax of terms and strings is made easier by the complete overview of semantic relationships. New controlled terms will be available soon, thanks to the work in progress - there are now 13,000 terms, of which 40 percent are non-preferred. In three months, free Internet access via the CD-ROM will cease and a subscription will be needed. The digital version of the old Soggettario and the corresponding unstructured lists of headings adopted in 1956-1985 are accessible together with the thesaurus, so that the whole vocabulary, old and new, will be at the fingertips of the indexer, who is forced to work with both tools during this transition period. In the future, it will be possible to integrate the thesaurus into library OPACs. The two parts form a very consistent and detailed resource. The guide is filled with examples; the accurate, clearly-expressed and consistent instructions are further enhanced by good use of fonts and type size, facilitating reading. The thesaurus is simple and quick to use and very rich, albeit only a prototype; see, for instance, a list of DDC numbers and related terms with their category and facet, and then entries, hierarchies and so on, and the capacity of the structure to show organized knowledge. The excellent outcome of a demanding experiment, the guide ushers in a new era of subject indexing in Italy and is highly recommended. The new method has been designed to be easily teachable to new and experienced indexers.
    Now BNI is beginning to use the new language, pointing the way for the adoption of Nuovo soggettario in Italian libraries: a difficult challenge whose success is not assured. To name only one issue: including all fields of study requires particular care in treating terms with different specialized meanings; cooperation with other libraries and institutions is foreseen. At the same time, efforts are being made to ensure the system's interoperability outside the library world. It is clear that a great commitment is required. "Too complex a system!" say the naysayers. "Only at the beginning," the proponents reply. The new system goes against the mainstream in two respects. It rejects the imitation of the easy way offered by search engines - though we know that they must enrich their devices to improve quality, repeating precisely the work on semantic and syntactic relationships that leads formal expressions to the meanings they are intended to communicate. And it stands apart from research into automated devices that support human work in order to simplify cataloguing: here AI is not involved, but automation is widely used to facilitate and support the conscious work of indexers, guided by rules as clear as possible. The advantage of Nuovo soggettario is its combination of a thesaurus (a much-appreciated tool used across the world) with the equally widespread technique of subject-string construction, which is to say: the rational and predictable combination of the terms used. The appearance of this original, unparalleled working model may well be a great occasion in the international development of indexing, as, on one hand, the Nuovo soggettario uses a recognized tool (the thesaurus) and, on the other, by permitting both pre-coordination and post-coordination, it attempts to overcome the fragmentation of increasingly complex and specialized subjects into isolated, single-term descriptors. This is a serious proposition that merits consideration from both theoretical and practical points of view - and outside Italy, too."
  14. Metadata and semantics research : 8th Research Conference, MTSR 2014, Karlsruhe, Germany, November 27-29, 2014, Proceedings (2014) 0.03
    0.032109328 = product of:
      0.064218655 = sum of:
        0.008323434 = product of:
          0.033293735 = sum of:
            0.033293735 = weight(_text_:based in 2192) [ClassicSimilarity], result of:
              0.033293735 = score(doc=2192,freq=4.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.23539014 = fieldWeight in 2192, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2192)
          0.25 = coord(1/4)
        0.055895224 = product of:
          0.11179045 = sum of:
            0.11179045 = weight(_text_:assessment in 2192) [ClassicSimilarity], result of:
              0.11179045 = score(doc=2192,freq=4.0), product of:
                0.25917634 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.04694356 = queryNorm
                0.43132967 = fieldWeight in 2192, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2192)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This book constitutes the refereed proceedings of the 8th Metadata and Semantics Research Conference, MTSR 2014, held in Karlsruhe, Germany, in November 2014. The 23 full papers and 9 short papers presented were carefully reviewed and selected from 57 submissions. The papers are organized in several sessions and tracks. They cover the following topics: metadata and linked data: tools and models; (meta) data quality assessment and curation; semantic interoperability, ontology-based data access and representation; big data and digital libraries in health, science and technology; metadata and semantics for open repositories, research information systems and data infrastructure; metadata and semantics for cultural collections and applications; semantics for agriculture, food and environment.
    Content
    Metadata and linked data.- Tools and models.- (Meta)data quality assessment and curation.- Semantic interoperability, ontology-based data access and representation.- Big data and digital libraries in health, science and technology.- Metadata and semantics for open repositories, research information systems and data infrastructure.- Metadata and semantics for cultural collections and applications.- Semantics for agriculture, food and environment.
  15. Graf, P.: Term indexing (1996) 0.03
    0.03193898 = product of:
      0.12775593 = sum of:
        0.12775593 = weight(_text_:term in 5398) [ClassicSimilarity], result of:
          0.12775593 = score(doc=5398,freq=4.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.58325374 = fieldWeight in 5398, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.0625 = fieldNorm(doc=5398)
      0.25 = coord(1/4)
    
    Abstract
    This monograph provides a comprehensive, well-written survey of term indexing in general and presents new indexing techniques for the retrieval and maintenance of data that help to overcome program degradation in automated reasoning systems. Theoretical foundations and application aspects are treated in detail; finally, the PURR prover for parallel unit-resulting resolution is discussed to demonstrate the importance of careful implementations.
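    To make the subject concrete: term indexing stores first-order terms so that candidates for unification, matching or generalisation can be retrieved without scanning the whole term bank. The sketch below shows one classic technique from this family, a discrimination tree over flattened terms; it is our own minimal illustration, not code from the monograph.

    # A first-order term is a tuple (symbol, args...); variables start with '?'.
    def flatten(term):
        """Preorder traversal; all variables collapse to '*'."""
        if isinstance(term, str):
            return ['*'] if term.startswith('?') else [term]
        keys = [term[0]]
        for arg in term[1:]:
            keys.extend(flatten(arg))
        return keys

    class DiscriminationTree:
        def __init__(self):
            self.root = {}

        def insert(self, term, value):
            node = self.root
            for key in flatten(term):
                node = node.setdefault(key, {})
            node.setdefault('$leaves', []).append(value)

        def variants(self, term):
            """Stored terms with the same skeleton (equal up to variable names)."""
            node = self.root
            for key in flatten(term):
                if key not in node:
                    return []
                node = node[key]
            return node.get('$leaves', [])

    index = DiscriminationTree()
    index.insert(('f', '?x', ('g', 'a')), 'clause 1')
    index.insert(('f', '?y', ('g', 'a')), 'clause 2')
    print(index.variants(('f', '?z', ('g', 'a'))))  # ['clause 1', 'clause 2']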
  16. Jarke, M.; Lenzerini, M.; Vassiliou, Y.: Fundamentals of data warehousing (1999) 0.03
    0.031786613 = product of:
      0.06357323 = sum of:
        0.00823978 = product of:
          0.03295912 = sum of:
            0.03295912 = weight(_text_:based in 1302) [ClassicSimilarity], result of:
              0.03295912 = score(doc=1302,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.23302436 = fieldWeight in 1302, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1302)
          0.25 = coord(1/4)
        0.055333447 = product of:
          0.11066689 = sum of:
            0.11066689 = weight(_text_:assessment in 1302) [ClassicSimilarity], result of:
              0.11066689 = score(doc=1302,freq=2.0), product of:
                0.25917634 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.04694356 = queryNorm
                0.4269946 = fieldWeight in 1302, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1302)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Data warehousing has captured the attention of practitioners and researchers alike. But the design and optimization of data warehouses remains an art rather than a science. This book presents the first comparative review of the state of the art and best current practice in data warehousing. It covers source and data integration, multidimensional aggregation, query optimization, update propagation, metadata management, quality assessment, and design optimization. Also, based on results of the European DWQ project, it offers a conceptual framework by which the architecture and quality of data warehousing efforts can be assessed and improved using enriched metadata management combined with advanced techniques from databases, business modeling, and artificial intelligence.
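    Of the topics listed, multidimensional aggregation is the most easily illustrated: fact-table measures are summed up along chosen dimensions of a star schema. The following sketch is a generic toy example of such a roll-up, assumed for illustration and not taken from the book.

    from collections import defaultdict

    # Toy fact table: (product, region, month, revenue)
    facts = [
        ("laptop", "EU", "2024-01", 1200.0),
        ("laptop", "US", "2024-01",  900.0),
        ("phone",  "EU", "2024-02",  600.0),
        ("phone",  "EU", "2024-01",  650.0),
    ]

    def roll_up(facts, dims):
        """Aggregate revenue over the chosen subset of dimensions."""
        positions = {"product": 0, "region": 1, "month": 2}
        totals = defaultdict(float)
        for row in facts:
            key = tuple(row[positions[d]] for d in dims)
            totals[key] += row[3]
        return dict(totals)

    print(roll_up(facts, ["region"]))            # {('EU',): 2450.0, ('US',): 900.0}
    print(roll_up(facts, ["product", "month"]))  # a finer-grained cuboid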
  17. Jarke, M.; Lenzerini, M.; Vassiliou, Y.; Vassiliadis, PO.: Fundamentals of data warehousing (2003) 0.03
    0.031786613 = product of:
      0.06357323 = sum of:
        0.00823978 = product of:
          0.03295912 = sum of:
            0.03295912 = weight(_text_:based in 1304) [ClassicSimilarity], result of:
              0.03295912 = score(doc=1304,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.23302436 = fieldWeight in 1304, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1304)
          0.25 = coord(1/4)
        0.055333447 = product of:
          0.11066689 = sum of:
            0.11066689 = weight(_text_:assessment in 1304) [ClassicSimilarity], result of:
              0.11066689 = score(doc=1304,freq=2.0), product of:
                0.25917634 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.04694356 = queryNorm
                0.4269946 = fieldWeight in 1304, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1304)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Data warehousing has captured the attention of practitioners and researchers alike. But the design and optimization of data warehouses remains an art rather than a science. This book presents the first comparative review of the state of the art and best current practice in data warehousing. It covers source and data integration, multidimensional aggregation, query optimization, update propagation, metadata management, quality assessment, and design optimization. Also, based on results of the European DWQ project, it offers a conceptual framework by which the architecture and quality of data warehousing efforts can be assessed and improved using enriched metadata management combined with advanced techniques from databases, business modeling, and artificial intelligence.
  18. Readings in information retrieval (1997) 0.03
    0.031514537 = product of:
      0.06302907 = sum of:
        0.0071358583 = product of:
          0.028543433 = sum of:
            0.028543433 = weight(_text_:based in 2080) [ClassicSimilarity], result of:
              0.028543433 = score(doc=2080,freq=6.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.20180501 = fieldWeight in 2080, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=2080)
          0.25 = coord(1/4)
        0.055893216 = weight(_text_:term in 2080) [ClassicSimilarity], result of:
          0.055893216 = score(doc=2080,freq=4.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.2551735 = fieldWeight in 2080, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2080)
      0.5 = coord(2/4)
    
    Content
    JOYCE, T. u. R.M. NEEDHAM: The thesaurus approach to information retrieval; LUHN, H.P.: The automatic derivation of information retrieval encodements from machine-readable texts; DOYLE, L.B.: Indexing and abstracting by association. Part 1; MARON, M.E. u. J.L. KUHNS: On relevance, probabilistic indexing and information retrieval; CLEVERDON, C.W.: The Cranfield tests on index language devices; SALTON, G. u. M.E. LESK: Computer evaluation of indexing and text processing; HUTCHINS, W.J.: The concept of 'aboutness' in subject indexing; CLEVERDON, C.W. u. J. MILLS: The testing of index language devices; FOSKETT, D.J.: Thesaurus; DANIELS, P.J. u.a.: Using problem structures for driving human-computer dialogues; SARACEVIC, T.: Relevance: a review of and a framework for thinking on the notion in information science; SARACEVIC, T. u.a.: A study of information seeking and retrieving: I. Background and methodology; COOPER, W.S.: On selecting a measure of retrieval effectiveness, revisited; TAGUE-SUTCLIFFE, J.: The pragmatics of information retrieval experimentation, revisited; KEEN, E.M.: Presenting results of experimental retrieval comparisons; LANCASTER, F.W.: MEDLARS: report on the evaluation of its operating efficiency; HARMAN, D.K.: The TREC conferences; COOPER, W.S.: Getting beyond Boole; RIJSBERGEN, C.J. van: A non-classical logic for information retrieval; SALTON, G. u.a.: A vector space model for automatic indexing; ROBERTSON, S.E.: The probability ranking principle in IR; TURTLE, H. u. W.B. CROFT: Inference networks for document retrieval; BELKIN, N.J. u.a.: ASK for information retrieval: Part 1. Background and theory; PORTER, M.F.: An algorithm for suffix stripping; SALTON, G. u. C. BUCKLEY: Term-weighting approaches in automatic text retrieval; SPARCK JONES, K.: Search term relevance weighting given little relevance information; CROFT, W.B. u. D.J. HARPER: Using probabilistic models of document retrieval without relevance information; ROBERTSON, S.E. u. S. WALKER: Some simple effective approximations to the 2-Poisson model for probabilistic weighted retrieval; SALTON, G. u. C. BUCKLEY: Improving retrieval performance by relevance feedback; GRIFFITHS, A. u.a.: Using interdocument similarity information in document retrieval systems; SALTON, G. u. M.J. McGILL: The SMART and SIRE experimental retrieval systems; FOX, E.A. u. R.K. FRANCE: Architecture of an expert system for composite analysis, representation, and retrieval; HARMAN, D.: User-friendly systems instead of user-friendly front ends; WALKER, S.: The Okapi online catalogue research projects; CALLAN, J. u.a.: TREC and TIPSTER experiments with INQUERY; McCUNE, B. u.a.: RUBRIC: a system for rule-based information retrieval; TENOPIR, C. u. P. CAHN: TARGET and FREESTYLE: DIALOG and Mead join the relevance ranks; AGOSTI, M. u.a.: A hypertext environment for interacting with large databases; HULL, D.A. u. G. GREFENSTETTE: Querying across languages: a dictionary-based approach to multilingual information retrieval; SALTON, G. u.a.: Automatic analysis, theme generation, and summarization of machine-readable texts; SPARCK JONES, K. u.a.: Experiments in spoken document retrieval; ZHANG, H.J. u.a.: Video parsing, retrieval and browsing: an integrated and content-based solution; BIEBRICHER, N. u.a.: The automatic indexing system AIR/PHYS: from research to application; STRZALKOWSKI, T.: Robust text processing in automated information retrieval; HAYES, P.J. u.a.: A news story categorization system; RAU, L.F.: Conceptual information extraction and retrieval from natural language input; MARSH, E.: A production rule system for message summarisation; JOHNSON, F.C. u.a.: The application of linguistic processing to automatic abstract generation; SWANSON, D.R.: Historical note: information retrieval and the future of an illusion
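    Many of the papers collected above - Salton and Buckley on term weighting, the vector space model, relevance feedback - build on tf-idf weights compared by cosine similarity. As a rough, self-contained illustration of that shared machinery (one common formulation, not the exact scheme of any single paper), consider:

    import math
    from collections import Counter

    docs = [
        "indexing and abstracting by association",
        "term weighting approaches in automatic text retrieval",
        "probabilistic indexing and information retrieval",
    ]

    tokenised = [d.split() for d in docs]
    N = len(tokenised)
    # Document frequency: in how many documents does each term occur?
    df = Counter(t for doc in tokenised for t in set(doc))

    def tfidf(doc):
        tf = Counter(doc)
        return {t: tf[t] * math.log(N / df[t]) for t in tf if df[t]}

    def cosine(u, v):
        dot = sum(u[t] * v.get(t, 0.0) for t in u)
        norm = lambda w: math.sqrt(sum(x * x for x in w.values()))
        return dot / (norm(u) * norm(v)) if u and v else 0.0

    query = tfidf("probabilistic retrieval".split())
    vectors = [tfidf(doc) for doc in tokenised]
    ranked = sorted(range(N), key=lambda i: cosine(query, vectors[i]), reverse=True)
    print([docs[i] for i in ranked])  # the probabilistic indexing paper ranks first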
  19. Sprachtechnologie, mobile Kommunikation und linguistische Ressourcen : Beiträge zur GLDV Tagung 2005 in Bonn (2005) 0.03
    0.02947553 = product of:
      0.05895106 = sum of:
        0.0049940604 = product of:
          0.019976242 = sum of:
            0.019976242 = weight(_text_:based in 3578) [ClassicSimilarity], result of:
              0.019976242 = score(doc=3578,freq=4.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.14123408 = fieldWeight in 3578, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=3578)
          0.25 = coord(1/4)
        0.053957 = weight(_text_:frequency in 3578) [ClassicSimilarity], result of:
          0.053957 = score(doc=3578,freq=2.0), product of:
            0.27643865 = queryWeight, product of:
              5.888745 = idf(docFreq=332, maxDocs=44218)
              0.04694356 = queryNorm
            0.19518617 = fieldWeight in 3578, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.888745 = idf(docFreq=332, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3578)
      0.5 = coord(2/4)
    
    Content
    CONTENTS: Chris Biemann/Rainer Osswald: Automatische Erweiterung eines semantikbasierten Lexikons durch Bootstrapping auf großen Korpora - Ernesto William De Luca/Andreas Nürnberger: Supporting Mobile Web Search by Ontology-based Categorization - Rüdiger Gleim: HyGraph - Ein Framework zur Extraktion, Repräsentation und Analyse webbasierter Hypertextstrukturen - Felicitas Haas/Bernhard Schröder: Freges Grundgesetze der Arithmetik: Dokumentbaum und Formelwald - Ulrich Held/Andre Blessing/Bettina Säuberlich/Jürgen Sienel/Horst Rößler/Dieter Kopp: A personalized multimodal news service - Jürgen Hermes/Christoph Benden: Fusion von Annotation und Präprozessierung als Vorschlag zur Behebung des Rohtextproblems - Sonja Hüwel/Britta Wrede/Gerhard Sagerer: Semantisches Parsing mit Frames für robuste multimodale Mensch-Maschine-Kommunikation - Brigitte Krenn/Stefan Evert: Separating the wheat from the chaff - Corpus-driven evaluation of statistical association measures for collocation extraction - Jörn Kreutel: An application-centered Perspective on Multimodal Dialogue Systems - Jonas Kuhn: An Architecture for Parallel Corpus-based Grammar Learning - Thomas Mandl/Rene Schneider/Pia Schnetzler/Christa Womser-Hacker: Evaluierung von Systemen für die Eigennamenerkennung im crosslingualen Information Retrieval - Alexander Mehler/Matthias Dehmer/Rüdiger Gleim: Zur Automatischen Klassifikation von Webgenres - Charlotte Merz/Martin Volk: Requirements for a Parallel Treebank Search Tool - Sally Y.K. Mok: Multilingual Text Retrieval on the Web: The Case of a Cantonese-Dagaare-English Trilingual e-Lexicon -
    Darja Mönke: Ein Parser für natürlichsprachlich formulierte mathematische Beweise - Martin Müller: Ontologien für mathematische Beweistexte - Moritz Neugebauer: The status of functional phonological classification in statistical speech recognition - Uwe Quasthoff: Kookkurrenzanalyse und korpusbasierte Sachgruppenlexikographie - Reinhard Rapp: On the Relationship between Word Frequency and Word Familiarity - Ulrich Schade/Miloslaw Frey/Sebastian Becker: Computerlinguistische Anwendungen zur Verbesserung der Kommunikation zwischen militärischen Einheiten und deren Führungsinformationssystemen - David Schlangen/Thomas Hanneforth/Manfred Stede: Weaving the Semantic Web: Extracting and Representing the Content of Pathology Reports - Thomas Schmidt: Modellbildung und Modellierungsparadigmen in der computergestützten Korpuslinguistik - Sabine Schröder/Martina Ziefle: Semantic transparency of cellular phone menus - Thorsten Trippel/Thierry Declerck/Ulrich Held: Standardisierung von Sprachressourcen: Der aktuelle Stand - Charlotte Wollermann: Evaluation der audiovisuellen Kongruenz bei der multimodalen Sprachsynthese - Claudia Kunze/Lothar Lemnitzer: Anwendungen des GermaNet II: Einleitung - Claudia Kunze/Lothar Lemnitzer: Die Zukunft der Wortnetze oder die Wortnetze der Zukunft - ein Roadmap-Beitrag -
    Karel Pala: The Balkanet Experience - Peter M. Kruse/Andre Nauloks/Dietmar Rösner/Manuela Kunze: Clever Search: A WordNet Based Wrapper for Internet Search Engines - Rosmary Stegmann/Wolfgang Woerndl: Using GermaNet to Generate Individual Customer Profiles - Ingo Glöckner/Sven Hartrumpf/Rainer Osswald: From GermaNet Glosses to Formal Meaning Postulates - Aljoscha Burchardt/Katrin Erk/Anette Frank: A WordNet Detour to FrameNet - Daniel Naber: OpenThesaurus: ein offenes deutsches Wortnetz - Anke Holler/Wolfgang Grund/Heinrich Petith: Maschinelle Generierung assoziativer Termnetze für die Dokumentensuche - Stefan Bordag/Hans Friedrich Witschel/Thomas Wittig: Evaluation of Lexical Acquisition Algorithms - Iryna Gurevych/Hendrik Niederlich: Computing Semantic Relatedness of GermaNet Concepts - Roland Hausser: Turn-taking als kognitive Grundmechanik der Datenbanksemantik - Rodolfo Delmonte: Parsing Overlaps - Melanie Twiggs: Behandlung des Passivs im Rahmen der Datenbanksemantik - Sandra Hohmann: Intention und Interaktion - Anmerkungen zur Relevanz der Benutzerabsicht - Doris Helfenbein: Verwendung von Pronomina im Sprecher- und Hörmodus - Bayan Abu Shawar/Eric Atwell: Modelling turn-taking in a corpus-trained chatbot - Barbara März: Die Koordination in der Datenbanksemantik - Jens Edlund/Mattias Heldner/Joakim Gustafsson: Utterance segmentation and turn-taking in spoken dialogue systems - Ekaterina Buyko: Numerische Repräsentation von Textkorpora für Wissensextraktion - Bernhard Fisseni: ProofML - eine Annotationssprache für natürlichsprachliche mathematische Beweise - Iryna Schenk: Auflösung der Pronomen mit Nicht-NP-Antezedenten in spontansprachlichen Dialogen - Stephan Schwiebert: Entwurf eines agentengestützten Systems zur Paradigmenbildung - Ingmar Steiner: On the analysis of speech rhythm through acoustic parameters - Hans Friedrich Witschel: Text, Wörter, Morpheme - Möglichkeiten einer automatischen Terminologie-Extraktion.
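    One of the contributions above (Krenn/Evert) evaluates statistical association measures for collocation extraction. As a generic illustration of such a measure - pointwise mutual information, chosen here for brevity and not necessarily one of the measures they compare - consider this sketch:

    import math
    from collections import Counter

    corpus = ("der hund bellt laut der hund schläft "
              "die katze schläft leise die katze miaut").split()

    unigrams = Counter(corpus)
    bigrams = Counter(zip(corpus, corpus[1:]))
    N = len(corpus)

    def pmi(w1, w2):
        """Pointwise mutual information of an adjacent word pair."""
        p_pair = bigrams[(w1, w2)] / (N - 1)
        p1, p2 = unigrams[w1] / N, unigrams[w2] / N
        return math.log2(p_pair / (p1 * p2)) if p_pair else float("-inf")

    # Strongly associated pairs score high; chance co-occurrences score low.
    print(round(pmi("der", "hund"), 2))      # ~2.91
    print(round(pmi("katze", "schläft"), 2)) # ~1.91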
  20. Information science in transition (2009) 0.03
    0.029342528 = product of:
      0.03912337 = sum of:
        0.0029427784 = product of:
          0.011771114 = sum of:
            0.011771114 = weight(_text_:based in 634) [ClassicSimilarity], result of:
              0.011771114 = score(doc=634,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.083222985 = fieldWeight in 634, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=634)
          0.25 = coord(1/4)
        0.028230337 = weight(_text_:term in 634) [ClassicSimilarity], result of:
          0.028230337 = score(doc=634,freq=2.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.12888208 = fieldWeight in 634, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.01953125 = fieldNorm(doc=634)
        0.007950256 = product of:
          0.015900511 = sum of:
            0.015900511 = weight(_text_:22 in 634) [ClassicSimilarity], result of:
              0.015900511 = score(doc=634,freq=2.0), product of:
                0.16438834 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04694356 = queryNorm
                0.09672529 = fieldWeight in 634, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=634)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    Are we at a turning point in digital information? The expansion of the internet was unprecedented; search engines dealt with it in the only way possible - scan as much as they could and throw it all into an inverted index. But now search engines are beginning to experiment with deep web searching and attention to taxonomies, and the semantic web is demonstrating how much more can be done with a computer if you give it knowledge. What does this mean for the skills and focus of the information science (or sciences) community? Should information designers and information managers work more closely to create computer-based information systems for more effective retrieval? Will information science become part of computer science, and does the rise of the term informatics demonstrate the convergence of information science and information technology - a convergence that must surely develop in the years to come? Issues and questions such as these are reflected in this monograph, a collection of essays written by some of the most pre-eminent contributors to the discipline. These peer-reviewed perspectives capture insights into advances in, and facets of, information science, a profession in transition. With an introduction from Jack Meadows, the key papers are: Meeting the challenge, by Brian Vickery; The developing foundations of information science, by David Bawden; The last 50 years of knowledge organization, by Stella G Dextre Clarke; On the history of evaluation in IR, by Stephen Robertson; The information user, by Tom Wilson; The sociological turn in information science, by Blaise Cronin; From chemical documentation to chemoinformatics, by Peter Willett; Health informatics, by Peter A Bath; Social informatics and sociotechnical research, by Elisabeth Davenport; The evolution of visual information retrieval, by Peter Enser; Information policies, by Elizabeth Orna; Disparity in professional qualifications and progress in information handling, by Barry Mahon; Electronic scholarly publishing and open access, by Charles Oppenheim; Social software: fun and games, or business tools? by Wendy A Warr; and, Bibliometrics to webometrics, by Mike Thelwall. This monograph previously appeared as a special issue of the "Journal of Information Science", published by Sage. Reproduced here as a monograph, this important collection of perspectives on a skill set in transition from a prestigious line-up of authors will now be available to information studies students worldwide and to all those working in the information science field.
    Date
    22. 2.2013 11:35:35
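    The abstract of the last item notes that early search engines scanned what they could and threw it all into an inverted index. For readers new to the term, here is a minimal sketch of that data structure, using made-up toy documents:

    from collections import defaultdict

    documents = {
        1: "information science in transition",
        2: "the evolution of visual information retrieval",
        3: "bibliometrics to webometrics",
    }

    # Build the inverted index: term -> set of document ids containing it.
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for token in text.lower().split():
            index[token].add(doc_id)

    def lookup(*terms):
        """Conjunctive (AND) query: documents containing every term."""
        postings = [index.get(t, set()) for t in terms]
        return sorted(set.intersection(*postings)) if postings else []

    print(lookup("information"))               # [1, 2]
    print(lookup("information", "retrieval"))  # [2]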

Types

  • s 153
  • i 17
  • b 7
  • el 7
  • d 2
  • x 2
  • n 1
  • u 1
