Search (1649 results, page 1 of 83)

  • Filter: year_i:[2000 TO 2010}
  1. Warner, J.: Analogies between linguistics and information theory (2007) 0.14
    0.1358828 = product of:
      0.20382419 = sum of:
        0.17581621 = weight(_text_:analogies in 138) [ClassicSimilarity], result of:
          0.17581621 = score(doc=138,freq=2.0), product of:
            0.38810563 = queryWeight, product of:
              8.200379 = idf(docFreq=32, maxDocs=44218)
              0.047327764 = queryNorm
            0.45301124 = fieldWeight in 138, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.200379 = idf(docFreq=32, maxDocs=44218)
              0.0390625 = fieldNorm(doc=138)
        0.028007988 = product of:
          0.056015976 = sum of:
            0.056015976 = weight(_text_:group in 138) [ClassicSimilarity], result of:
              0.056015976 = score(doc=138,freq=2.0), product of:
                0.21906674 = queryWeight, product of:
                  4.628715 = idf(docFreq=1173, maxDocs=44218)
                  0.047327764 = queryNorm
                0.2557028 = fieldWeight in 138, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.628715 = idf(docFreq=1173, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=138)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
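The explain tree above is standard Lucene ClassicSimilarity (TF-IDF) output. As a minimal sketch of how those numbers compose (in Python; all constants are copied from the tree for doc 138, and nothing here is part of the result page itself), the leaf weight for "analogies" and the final document score can be reproduced as:

```python
import math

# Constants copied from the explain tree for doc 138.
IDF_ANALOGIES = 8.200379     # ln(44218 / (32 + 1)) + 1
IDF_GROUP = 4.628715         # ln(44218 / (1173 + 1)) + 1
QUERY_NORM = 0.047327764
FIELD_NORM = 0.0390625       # length norm for this field

def term_score(freq, idf, field_norm=FIELD_NORM, query_norm=QUERY_NORM):
    """ClassicSimilarity leaf: queryWeight * fieldWeight."""
    tf = math.sqrt(freq)                     # tf(freq) = sqrt(freq)
    field_weight = tf * idf * field_norm     # tf * idf * fieldNorm
    query_weight = idf * query_norm          # idf * queryNorm
    return query_weight * field_weight

analogies = term_score(2.0, IDF_ANALOGIES)   # ~0.17581621
group = term_score(2.0, IDF_GROUP) * 0.5     # nested coord(1/2) -> ~0.028007988
score = (analogies + group) * 2 / 3          # top-level coord(2/3) -> ~0.1358828
```

The same recipe (square-root tf, precomputed idf, field and query norms, coord factors for partially matched clause sets) accounts for every score tree in this listing.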
    
    Abstract
    An analogy is established between the syntagm and paradigm from Saussurean linguistics and the message and messages for selection from the information theory initiated by Claude Shannon. The analogy is pursued both as an end in itself and for its analytic value in understanding patterns of retrieval from full-text systems. The multivalency of individual words when isolated from their syntagm is contrasted with the relative stability of meaning of multiword sequences, when searching ordinary written discourse. The syntagm is understood as the linear sequence of oral and written language. Saussure's understanding of the word, as a unit that compels recognition by the mind, is endorsed, although not regarded as final. The lesser multivalency of multiword sequences is understood as the greater determination of signification by the extended syntagm. The paradigm is primarily understood as the network of associations a word acquires when considered apart from the syntagm. The restriction of information theory to expression or signals, and its focus on the combinatorial aspects of the message, is sustained. The message in the model of communication in information theory can include sequences of written language. Shannon's understanding of the written word, as a cohesive group of letters, with strong internal statistical influences, is added to the Saussurean conception. Sequences of more than one word are regarded as weakly correlated concatenations of cohesive units.
  2. Olsen, K.A.: ¬The Internet, the Web, and eBusiness : formalizing applications for the real world (2005) 0.11
    0.11224078 = product of:
      0.16836116 = sum of:
        0.1572548 = weight(_text_:analogies in 149) [ClassicSimilarity], result of:
          0.1572548 = score(doc=149,freq=10.0), product of:
            0.38810563 = queryWeight, product of:
              8.200379 = idf(docFreq=32, maxDocs=44218)
              0.047327764 = queryNorm
            0.40518558 = fieldWeight in 149, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              8.200379 = idf(docFreq=32, maxDocs=44218)
              0.015625 = fieldNorm(doc=149)
        0.011106358 = product of:
          0.022212716 = sum of:
            0.022212716 = weight(_text_:22 in 149) [ClassicSimilarity], result of:
              0.022212716 = score(doc=149,freq=6.0), product of:
                0.16573377 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047327764 = queryNorm
                0.1340265 = fieldWeight in 149, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=149)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Classification
    004.678 22
    DDC
    004.678 22
    Footnote
    Review in: JASIST 57(2006) no.14, pp.1979-1980 (J.G. Williams): "The Introduction and Part I of this book present the world of computing with a historical and philosophical overview of computers, computer applications, networks, the World Wide Web, and eBusiness, based on the notion that the real world places constraints on the application of these technologies and that, without a formalized approach, the benefits of these technologies cannot be realized. The concepts of real-world constraints and the need for formalization are used as the cornerstones of a building-block approach for helping the reader understand computing, networking, the World Wide Web, and the applications that use these technologies, as well as all the possibilities that these technologies hold for the future. The author's building-block approach to understanding computing, networking, and application building makes the book useful for science, business, and engineering students taking an introductory computing course and for social science students who want to understand more about the social impact of computers, the Internet, and Web technology. It is useful as well for managers and designers of Web and ebusiness applications, and for the general public who are interested in understanding how these technologies may impact their lives, their jobs, and the social context in which they live and work. The book does assume some experience and terminology in using PCs and the Internet but is not intended for computer science students, although they could benefit from the philosophical basis and the diverse viewpoints presented. The author uses numerous analogies from domains outside the area of computing to illustrate concepts and points of view that make the content understandable as well as interesting to individuals without any in-depth knowledge of computing, networking, software engineering, system design, ebusiness, and Web design. 
These analogies include interesting real-world events ranging from the beginning of railroads, to Henry Ford's mass-produced automobile, to the European Space Agency's loss of the 7 billion dollar Ariane rocket, to travel agency booking, to medical systems, to banking, to expanding democracy. The book presents numerous examples of the possibilities offered by the Internet and the Web and analyzes the pros and cons of these technologies in each case. The author shows, in an interesting manner, how the new economy based on the Internet and the Web affects society and business life on a worldwide basis now and how it will affect the future, and how society can take advantage of the opportunities that the Internet and the Web offer.
    Each chapter provides suggestions for exercises and discussions, which makes the book useful as a textbook. The suggestions in the exercise and discussion section at the end of each chapter are simply delightful to read and provide a basis for some lively discussion and fun exercises by students. These exercises appear to be well thought out and are intended to highlight the content of the chapter. The notes at the end of chapters provide valuable data that help the reader to understand a topic or a reference to an entity that the reader may not know. Chapter 1 on "formalism," chapter 2 on "symbolic data," chapter 3 on "constraints on technology," and chapter 4 on "cultural constraints" are extremely well presented, and every reader needs to read these chapters because they lay the foundation for most of the chapters that follow. The analogies, examples, and points of view presented make for some really interesting reading and lively debate and discussion. These chapters comprise Part 1 of the book and not only provide a foundation for the rest of the book but could be used alone as the basis of a social science course on computing, networking, and the Web. Chapters 5 and 6 on Internet protocols and the development of Web protocols may be more detailed and filled with more acronyms than the average person wants to deal with, but the content is presented with analogies and examples that make it easier to digest. Chapter 7 will capture most readers' attention because it discusses how e-mail works and many of the issues with e-mail, which a majority of people in developed countries have dealt with. Chapter 8 is also one that most people will be interested in reading because it shows how Internet browsers work and the many issues, such as security, associated with these software entities. 
Chapter 9 discusses the what, why, and how of the World Wide Web, which leads into chapter 10 on "Searching the Web" and chapter 11 on "Organizing the Web-Portals," two chapters that even technically oriented people should read, since they provide information that most people outside of information and library science are not likely to know.
    Chapter 12 on "Web Presence" is a useful discussion of what it means to have a Web site that is indexed by a spider from a major Web search engine. Chapter 13 on "Mobile Computing" is very well done and gives the reader a solid basis of what is involved with mobile computing without overwhelming them with technical details. Chapter 14 discusses the difference between pull technologies and push technologies on the Web in a way that is understandable to almost anyone who has ever used the Web. Chapters 15, 16, and 17 are for the technically stout of heart; they cover "Dynamic Web Pages," "Embedded Scripts," and "Peer-to-Peer Computing." These three chapters will tend to dampen the spirits of anyone who does not come from a technical background. Chapter 18 on "Symbolic Services-Information Providers" and chapter 19 on "OnLine Symbolic Services-Case Studies" are ideal for class discussion and student assignments, as is chapter 20, "Online Retail Shopping-Physical Items." Chapter 21 presents a number of case studies on the "Technical Constraints" discussed in chapter 3, and chapter 22 presents case studies on the "Cultural Constraints" discussed in chapter 4. These case studies are not only presented in an interesting manner; they also focus on situations that most Web users have encountered but never given much thought to. Chapter 23, "A Better Model?," discusses a combined "formalized/unformalized" model that might make Web applications such as banking and booking travel work better than the current models. This chapter will cause readers to think about the role of formalization and the unformalized processes that are involved in any application. Chapters 24, 25, 26, and 27, which discuss the role of "Data Exchange," "Formalized Data Exchange," "Electronic Data Interchange-EDI," and "XML" in business-to-business applications on the Web, may stress the limits of the nontechnically oriented reader even though the material is presented in a very understandable manner. 
Chapters 28, 29, 30, and 31 discuss Web services, the automated value chain, electronic market places, and outsourcing, which are of high interest to business students, businessmen, and designers of Web applications, and can be skimmed by others who want to understand ebusiness but are not interested in the details. In Part 5, "Interfacing with the Web of the Future," chapters 32, 33, and 34 on "A Disruptive Technology," "Virtual Businesses," and "Semantic Web" were, for me, as someone who teaches courses in IT and develops ebusiness applications, the most interesting chapters in the book because they provided some useful insights about what is likely to happen in the future. The summary in Part 6 of the book is quite well done, and I wish I had read it before I started reading the other parts of the book.
    The book is quite large, with over 400 pages, and covers a myriad of topics, which is probably more than any one course could cover, but an instructor could pick and choose those chapters most appropriate to the course content. The book could be used for multiple courses by selecting the relevant topics. I enjoyed the first-person, rather down-to-earth writing style and the number of examples and analogies that the author presented. I believe most people could relate to the examples and situations presented by the author. As a teacher in Information Technology, I find the discussion questions at the end of the chapters and the case studies a valuable resource, as are the end-of-chapter notes. I highly recommend this book for an introductory course that combines computing, networking, the Web, and ebusiness for Business and Social Science students, as well as an introductory course for students in Information Science, Library Science, and Computer Science. Likewise, I believe IT managers and Web page designers could benefit from selected chapters in the book."
  3. Trauth, E.M.: Qualitative research in IS : issues and trends (2001) 0.09
    0.093949094 = product of:
      0.28184727 = sum of:
        0.28184727 = sum of:
          0.17925112 = weight(_text_:group in 5848) [ClassicSimilarity], result of:
            0.17925112 = score(doc=5848,freq=2.0), product of:
              0.21906674 = queryWeight, product of:
                4.628715 = idf(docFreq=1173, maxDocs=44218)
                0.047327764 = queryNorm
              0.8182489 = fieldWeight in 5848, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.628715 = idf(docFreq=1173, maxDocs=44218)
                0.125 = fieldNorm(doc=5848)
          0.10259614 = weight(_text_:22 in 5848) [ClassicSimilarity], result of:
            0.10259614 = score(doc=5848,freq=2.0), product of:
              0.16573377 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047327764 = queryNorm
              0.61904186 = fieldWeight in 5848, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.125 = fieldNorm(doc=5848)
      0.33333334 = coord(1/3)
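When only some of the query's top-level clauses match, the coord factor scales the clause sum down, as in the tree above for result 3 (doc 5848), where 1 of 3 clauses matched. A minimal sketch, with the two leaf weights copied from the tree:

```python
# Leaf weights copied from the explain tree for doc 5848.
group_weight = 0.17925112   # weight(_text_:group in 5848)
year_weight = 0.10259614    # weight(_text_:22 in 5848)

clause_sum = group_weight + year_weight   # 0.28184726 = sum of the matching clauses
coord = 1 / 3                             # coord(1/3): 1 of 3 top-level clauses matched
score = clause_sum * coord                # ~0.09394909, displayed as 0.09 in the list
```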
    
    Date
    25. 3.2003 15:35:22
    Imprint
    Hershey, PA : Idea Group Publ.
  4. Lewison, G.: ¬The work of the Bibliometrics Research Group (City University) and associates (2005) 0.07
    0.07046181 = product of:
      0.21138543 = sum of:
        0.21138543 = sum of:
          0.13443834 = weight(_text_:group in 4890) [ClassicSimilarity], result of:
            0.13443834 = score(doc=4890,freq=2.0), product of:
              0.21906674 = queryWeight, product of:
                4.628715 = idf(docFreq=1173, maxDocs=44218)
                0.047327764 = queryNorm
              0.6136867 = fieldWeight in 4890, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.628715 = idf(docFreq=1173, maxDocs=44218)
                0.09375 = fieldNorm(doc=4890)
          0.0769471 = weight(_text_:22 in 4890) [ClassicSimilarity], result of:
            0.0769471 = score(doc=4890,freq=2.0), product of:
              0.16573377 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047327764 = queryNorm
              0.46428138 = fieldWeight in 4890, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.09375 = fieldNorm(doc=4890)
      0.33333334 = coord(1/3)
    
    Date
    20. 1.2007 17:02:22
  5. Urro, R.; Winiwarter, W.: Specifying ontologies : Linguistic aspects in problem-driven knowledge engineering (2001) 0.07
    0.07032649 = product of:
      0.21097946 = sum of:
        0.21097946 = weight(_text_:analogies in 3263) [ClassicSimilarity], result of:
          0.21097946 = score(doc=3263,freq=2.0), product of:
            0.38810563 = queryWeight, product of:
              8.200379 = idf(docFreq=32, maxDocs=44218)
              0.047327764 = queryNorm
            0.5436135 = fieldWeight in 3263, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.200379 = idf(docFreq=32, maxDocs=44218)
              0.046875 = fieldNorm(doc=3263)
      0.33333334 = coord(1/3)
    
    Abstract
    The WWW includes on various levels systems of signs, not all of which are standardized as necessary for a real Semantic Web and not all of which can be standardized. Linguistic theories can contribute not only to the thus needed translation between sign systems, be they natural language systems or otherwise structured systems of knowledge representation, but also, of course, to standardization efforts. Within the current EC3 research framework for x-commerce, linguistic theories will play their part as they provide modeling analogies and patterns for the construction of a central knowledge base.
  6. Frohmann, B.: Documentation redux : prolegomenon to (another) philosophy of information (2004) 0.07
    0.07032649 = product of:
      0.21097946 = sum of:
        0.21097946 = weight(_text_:analogies in 826) [ClassicSimilarity], result of:
          0.21097946 = score(doc=826,freq=2.0), product of:
            0.38810563 = queryWeight, product of:
              8.200379 = idf(docFreq=32, maxDocs=44218)
              0.047327764 = queryNorm
            0.5436135 = fieldWeight in 826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.200379 = idf(docFreq=32, maxDocs=44218)
              0.046875 = fieldNorm(doc=826)
      0.33333334 = coord(1/3)
    
    Abstract
    A philosophy of information is grounded in a philosophy of documentation. Nunberg's conception of the phenomenon of information heralds a shift of attention away from the question "What is information?" toward a critical investigation of the sources and legitimation of the question itself. Analogies between Wittgenstein's deconstruction of philosophical accounts of meaning and a corresponding deconstruction of philosophical accounts of information suggest that because the informativeness of a document depends on certain kinds of practices with it, and because information emerges as an effect of such practices, documentary practices are ontologically primary to information. The informativeness of documents therefore refers us to the properties of documentary practices. These fall into four broad categories: their materiality; their institutional sites; the ways in which they are socially disciplined; and their historical contingency. Two examples from early modern science, which contrast the scholastic documentary practices of continental natural philosophers to those of their peers in Restoration England, illustrate the richness of the factors that must be taken into account to understand how documents become informing.
  7. Warner, J.: Linguistics and information theory : analytic advantages (2007) 0.07
    0.07032649 = product of:
      0.21097946 = sum of:
        0.21097946 = weight(_text_:analogies in 77) [ClassicSimilarity], result of:
          0.21097946 = score(doc=77,freq=2.0), product of:
            0.38810563 = queryWeight, product of:
              8.200379 = idf(docFreq=32, maxDocs=44218)
              0.047327764 = queryNorm
            0.5436135 = fieldWeight in 77, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.200379 = idf(docFreq=32, maxDocs=44218)
              0.046875 = fieldNorm(doc=77)
      0.33333334 = coord(1/3)
    
    Abstract
    The analytic advantages of central concepts from linguistics and information theory, and the analogies demonstrated between them, for understanding patterns of retrieval from full-text indexes to documents are developed. The interaction between the syntagm and the paradigm in computational operations on written language in indexing, searching, and retrieval is used to account for transformations of the signified or meaning between documents and their representation and between queries and documents retrieved. Characteristics of the message, and messages for selection for written language, are brought to explain the relative frequency of occurrence of words and multiple word sequences in documents. The examples given in the companion article are revisited and a fuller example introduced. The signified of the sequence stood for, the term classically used in the definitions of the sign, as something standing for something else, can itself change rapidly according to its syntagm. A greater than ordinary discourse understanding of patterns in retrieval is obtained.
  8. Makri, S.; Blandford, A.; Gow, J.; Rimmer, J.; Warwick, C.; Buchanan, G.: ¬A library or just another information resource? : a case study of users' mental models of traditional and digital libraries (2007) 0.07
    0.07032649 = product of:
      0.21097946 = sum of:
        0.21097946 = weight(_text_:analogies in 141) [ClassicSimilarity], result of:
          0.21097946 = score(doc=141,freq=2.0), product of:
            0.38810563 = queryWeight, product of:
              8.200379 = idf(docFreq=32, maxDocs=44218)
              0.047327764 = queryNorm
            0.5436135 = fieldWeight in 141, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.200379 = idf(docFreq=32, maxDocs=44218)
              0.046875 = fieldNorm(doc=141)
      0.33333334 = coord(1/3)
    
    Abstract
    A user's understanding of the libraries they work in, and hence of what they can do in those libraries, is encapsulated in their "mental models" of those libraries. In this article, we present a focused case study of users' mental models of traditional and digital libraries based on observations and interviews with eight participants. It was found that a poor understanding of access restrictions led to risk-averse behavior, whereas a poor understanding of search algorithms and relevance ranking resulted in trial-and-error behavior. This highlights the importance of rich feedback in helping users to construct useful mental models. Although the use of concrete analogies for digital libraries was not widespread, participants used their knowledge of Internet search engines to infer how searching might work in digital libraries. Indeed, most participants did not clearly distinguish between different kinds of digital resource, viewing the electronic library catalogue, abstracting services, digital libraries, and Internet search engines as variants on a theme.
  9. Definition of the CIDOC Conceptual Reference Model (2003) 0.07
    0.067708746 = product of:
      0.20312622 = sum of:
        0.20312622 = sum of:
          0.16465268 = weight(_text_:group in 1652) [ClassicSimilarity], result of:
            0.16465268 = score(doc=1652,freq=12.0), product of:
              0.21906674 = queryWeight, product of:
                4.628715 = idf(docFreq=1173, maxDocs=44218)
                0.047327764 = queryNorm
              0.7516097 = fieldWeight in 1652, product of:
                3.4641016 = tf(freq=12.0), with freq of:
                  12.0 = termFreq=12.0
                4.628715 = idf(docFreq=1173, maxDocs=44218)
                0.046875 = fieldNorm(doc=1652)
          0.03847355 = weight(_text_:22 in 1652) [ClassicSimilarity], result of:
            0.03847355 = score(doc=1652,freq=2.0), product of:
              0.16573377 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047327764 = queryNorm
              0.23214069 = fieldWeight in 1652, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=1652)
      0.33333334 = coord(1/3)
    
    Abstract
    This document is the formal definition of the CIDOC Conceptual Reference Model ("CRM"), a formal ontology intended to facilitate the integration, mediation and interchange of heterogeneous cultural heritage information. The CRM is the culmination of more than a decade of standards development work by the International Committee for Documentation (CIDOC) of the International Council of Museums (ICOM). Work on the CRM itself began in 1996 under the auspices of the ICOM-CIDOC Documentation Standards Working Group. Since 2000, development of the CRM has been officially delegated by ICOM-CIDOC to the CIDOC CRM Special Interest Group, which collaborates with the ISO working group ISO/TC46/SC4/WG9 to bring the CRM to the form and status of an International Standard.
    Date
    6. 8.2010 14:22:28
    Editor
    ICOM/CIDOC Documentation Standards Group
    Issue
    Version 3.4.9 - 30.11.2003. Produced by the ICOM/CIDOC Documentation Standards Group, continued by the CIDOC CRM Special Interest Group.
  10. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.06
    0.062937215 = product of:
      0.094405815 = sum of:
        0.07516904 = product of:
          0.22550711 = sum of:
            0.22550711 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
              0.22550711 = score(doc=562,freq=2.0), product of:
                0.40124533 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.047327764 = queryNorm
                0.56201804 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.33333334 = coord(1/3)
        0.019236775 = product of:
          0.03847355 = sum of:
            0.03847355 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
              0.03847355 = score(doc=562,freq=2.0), product of:
                0.16573377 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047327764 = queryNorm
                0.23214069 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
  11. Ackermann, E.: Piaget's constructivism, Papert's constructionism : what's the difference? (2001) 0.06
    0.060432576 = product of:
      0.09064886 = sum of:
        0.06264087 = product of:
          0.1879226 = sum of:
            0.1879226 = weight(_text_:3a in 692) [ClassicSimilarity], result of:
              0.1879226 = score(doc=692,freq=2.0), product of:
                0.40124533 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.047327764 = queryNorm
                0.46834838 = fieldWeight in 692, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=692)
          0.33333334 = coord(1/3)
        0.028007988 = product of:
          0.056015976 = sum of:
            0.056015976 = weight(_text_:group in 692) [ClassicSimilarity], result of:
              0.056015976 = score(doc=692,freq=2.0), product of:
                0.21906674 = queryWeight, product of:
                  4.628715 = idf(docFreq=1173, maxDocs=44218)
                  0.047327764 = queryNorm
                0.2557028 = fieldWeight in 692, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.628715 = idf(docFreq=1173, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=692)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Content
    Cf.: https://www.semanticscholar.org/paper/Piaget-%E2%80%99-s-Constructivism-%2C-Papert-%E2%80%99-s-%3A-What-%E2%80%99-s-Ackermann/89cbcc1e740a4591443ff4765a6ae8df0fdf5554. Below it, further pointers to related contributions. Also in: Learning Group Publication 5(2001) no.3, p.438.
  12. OWL Web Ontology Language Test Cases (2004) 0.06
    0.059349254 = product of:
      0.17804776 = sum of:
        0.17804776 = sum of:
          0.1267497 = weight(_text_:group in 4685) [ClassicSimilarity], result of:
            0.1267497 = score(doc=4685,freq=4.0), product of:
              0.21906674 = queryWeight, product of:
                4.628715 = idf(docFreq=1173, maxDocs=44218)
                0.047327764 = queryNorm
              0.5785894 = fieldWeight in 4685, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.628715 = idf(docFreq=1173, maxDocs=44218)
                0.0625 = fieldNorm(doc=4685)
          0.05129807 = weight(_text_:22 in 4685) [ClassicSimilarity], result of:
            0.05129807 = score(doc=4685,freq=2.0), product of:
              0.16573377 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047327764 = queryNorm
              0.30952093 = fieldWeight in 4685, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=4685)
      0.33333334 = coord(1/3)
    
    Abstract
    This document contains and presents test cases for the Web Ontology Language (OWL) approved by the Web Ontology Working Group. Many of the test cases illustrate the correct usage of the Web Ontology Language (OWL), and the formal meaning of its constructs. Other test cases illustrate the resolution of issues considered by the Working Group. Conformance for OWL documents and OWL document checkers is specified.
    Date
    14. 8.2011 13:33:22
  13. Weitz, J.: Cataloger's judgment : music cataloging questions and answers from the music OCLC users group newsletter (2003) 0.06
    0.058718182 = product of:
      0.17615454 = sum of:
        0.17615454 = sum of:
          0.11203195 = weight(_text_:group in 4591) [ClassicSimilarity], result of:
            0.11203195 = score(doc=4591,freq=2.0), product of:
              0.21906674 = queryWeight, product of:
                4.628715 = idf(docFreq=1173, maxDocs=44218)
                0.047327764 = queryNorm
              0.5114056 = fieldWeight in 4591, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.628715 = idf(docFreq=1173, maxDocs=44218)
                0.078125 = fieldNorm(doc=4591)
          0.06412259 = weight(_text_:22 in 4591) [ClassicSimilarity], result of:
            0.06412259 = score(doc=4591,freq=2.0), product of:
              0.16573377 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047327764 = queryNorm
              0.38690117 = fieldWeight in 4591, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=4591)
      0.33333334 = coord(1/3)
    
    Date
    25.11.2005 18:22:29
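The explain trees above can be checked by hand: in Lucene's ClassicSimilarity (named in each leaf), a term's score is queryWeight times fieldWeight, where queryWeight = idf * queryNorm and fieldWeight = sqrt(freq) * idf * fieldNorm; matching clauses are summed and scaled by the coordination factor. A minimal sketch, using the values from entry 13 above (the function name is illustrative):

```python
import math

def explain_leaf(freq, idf, query_norm, field_norm):
    """One weight(_text_:term) leaf of a Lucene ClassicSimilarity
    explain tree: score = queryWeight * fieldWeight."""
    tf = math.sqrt(freq)                  # tf(freq) = sqrt(freq)
    query_weight = idf * query_norm       # queryWeight = idf * queryNorm
    field_weight = tf * idf * field_norm  # fieldWeight = tf * idf * fieldNorm
    return query_weight * field_weight

# The two leaves of entry 13 ("group" and "22" in doc 4591):
group_leaf = explain_leaf(2.0, 4.628715, 0.047327764, 0.078125)  # ~ 0.11203195
leaf_22 = explain_leaf(2.0, 3.5018296, 0.047327764, 0.078125)    # ~ 0.06412259

# Sum of the matching clauses, scaled by coord(1/3): the entry's score.
total = (group_leaf + leaf_22) * (1 / 3)                         # ~ 0.05871818
```

The coord(1/3) factor reflects that only one of three query clauses (or, in entry 13, the summed clause group) matched; it down-weights documents that match fewer query terms.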
  14. Gnoli, C.: ISKO News (2007) 0.06
    0.058605403 = product of:
      0.17581621 = sum of:
        0.17581621 = weight(_text_:analogies in 1092) [ClassicSimilarity], result of:
          0.17581621 = score(doc=1092,freq=2.0), product of:
            0.38810563 = queryWeight, product of:
              8.200379 = idf(docFreq=32, maxDocs=44218)
              0.047327764 = queryNorm
            0.45301124 = fieldWeight in 1092, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.200379 = idf(docFreq=32, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1092)
      0.33333334 = coord(1/3)
    
    Abstract
    Therein: "However, John Sowa (Vivomind, USA) argued in his speech that the formalized approach, already undertaken by the pioneering project Cyc now having run for 23 years, is not the best way to analyze complex systems. People don't really use axioms in their cognitive processes (even mathematicians first get an idea intuitively, then work on axioms and proofs only at the moment of writing papers). To map between different ontologies, the Vivomind Analogy Engine throws axioms out, and searches instead for analogies in their structures. Analogy is a pragmatic human faculty using a combination of the three logical procedures of deduction, induction, and abduction. Guarino comments that people can communicate without need of axioms as they share a common context, but in order to teach computers how to operate, the requirements are different: he would not trust an airport control system working by analogy."
  15. Anderson, C.: ¬The end of theory : the data deluge makes the scientific method obsolete (2008) 0.06
    0.058605403 = product of:
      0.17581621 = sum of:
        0.17581621 = weight(_text_:analogies in 2819) [ClassicSimilarity], result of:
          0.17581621 = score(doc=2819,freq=2.0), product of:
            0.38810563 = queryWeight, product of:
              8.200379 = idf(docFreq=32, maxDocs=44218)
              0.047327764 = queryNorm
            0.45301124 = fieldWeight in 2819, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.200379 = idf(docFreq=32, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2819)
      0.33333334 = coord(1/3)
    
    Abstract
    "All models are wrong, but some are useful." So proclaimed statistician George Box 30 years ago, and he was right. But what choice did we have? Only models, from cosmological equations to theories of human behavior, seemed to be able to consistently, if imperfectly, explain the world around us. Until now. Today companies like Google, which have grown up in an era of massively abundant data, don't have to settle for wrong models. Indeed, they don't have to settle for models at all. Sixty years ago, digital computers made information readable. Twenty years ago, the Internet made it reachable. Ten years ago, the first search engine crawlers made it a single database. Now Google and like-minded companies are sifting through the most measured age in history, treating this massive corpus as a laboratory of the human condition. They are the children of the Petabyte Age. The Petabyte Age is different because more is different. Kilobytes were stored on floppy disks. Megabytes were stored on hard disks. Terabytes were stored in disk arrays. Petabytes are stored in the cloud. As we moved along that progression, we went from the folder analogy to the file cabinet analogy to the library analogy to - well, at petabytes we ran out of organizational analogies.
  16. Raan, A.F.J. van: Statistical properties of bibliometric indicators : research group indicator distributions and correlations (2006) 0.06
    0.056945622 = product of:
      0.17083687 = sum of:
        0.17083687 = sum of:
          0.116427034 = weight(_text_:group in 5275) [ClassicSimilarity], result of:
            0.116427034 = score(doc=5275,freq=6.0), product of:
              0.21906674 = queryWeight, product of:
                4.628715 = idf(docFreq=1173, maxDocs=44218)
                0.047327764 = queryNorm
              0.53146833 = fieldWeight in 5275, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                4.628715 = idf(docFreq=1173, maxDocs=44218)
                0.046875 = fieldNorm(doc=5275)
          0.054409824 = weight(_text_:22 in 5275) [ClassicSimilarity], result of:
            0.054409824 = score(doc=5275,freq=4.0), product of:
              0.16573377 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047327764 = queryNorm
              0.32829654 = fieldWeight in 5275, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=5275)
      0.33333334 = coord(1/3)
    
    Abstract
    In this article we present an empirical approach to the study of the statistical properties of bibliometric indicators on a very relevant but not simply available aggregation level: the research group. We focus on the distribution functions of a coherent set of indicators that are used frequently in the analysis of research performance. In this sense, the coherent set of indicators acts as a measuring instrument. Better insight into the statistical properties of a measuring instrument is necessary to enable assessment of the instrument itself. The most basic distribution in bibliometric analysis is the distribution of citations over publications, and this distribution is very skewed. Nevertheless, we clearly observe the working of the central limit theorem and find that at the level of research groups the distribution functions of the main indicators, particularly the journal-normalized and the field-normalized indicators, approach normal distributions. The results of our study underline the importance of the idea of group oeuvre, that is, the role of sets of related publications as a unit of analysis.
    Date
    22. 7.2006 16:20:22
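The central-limit behaviour described in the abstract above is easy to reproduce: draw strongly skewed per-publication citation counts, average them over a research group's oeuvre, and the group-level means become nearly symmetric. A minimal simulation sketch; the lognormal model, its parameters, and the group size are illustrative assumptions, not van Raan's data:

```python
import random
import statistics

random.seed(42)

# Per-publication citation counts: highly skewed (many lowly cited
# papers, a few highly cited ones), modelled here as lognormal draws.
def group_mean_citations(group_size=100):
    """Mean citation rate over one group's set of publications."""
    return statistics.mean(random.lognormvariate(1.0, 1.2)
                           for _ in range(group_size))

# Group-level indicator values for many simulated research groups.
means = [group_mean_citations() for _ in range(2000)]

# At the group level the distribution is close to symmetric:
# mean and median of the group means nearly coincide, unlike in
# the skewed per-publication distribution underneath.
print(statistics.mean(means), statistics.median(means))
```

The point of the sketch is the contrast of aggregation levels: the raw draws have a long right tail, while the averages over a group oeuvre cluster symmetrically around their expectation, which is what makes group-level indicators statistically tractable.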
  17. Zumer, M.: Guidelines for (electronic) national bibliographies : work in progress (2005) 0.05
    0.0519306 = product of:
      0.15579179 = sum of:
        0.15579179 = sum of:
          0.110905975 = weight(_text_:group in 4346) [ClassicSimilarity], result of:
            0.110905975 = score(doc=4346,freq=4.0), product of:
              0.21906674 = queryWeight, product of:
                4.628715 = idf(docFreq=1173, maxDocs=44218)
                0.047327764 = queryNorm
              0.5062657 = fieldWeight in 4346, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.628715 = idf(docFreq=1173, maxDocs=44218)
                0.0546875 = fieldNorm(doc=4346)
          0.04488581 = weight(_text_:22 in 4346) [ClassicSimilarity], result of:
            0.04488581 = score(doc=4346,freq=2.0), product of:
              0.16573377 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047327764 = queryNorm
              0.2708308 = fieldWeight in 4346, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=4346)
      0.33333334 = coord(1/3)
    
    Abstract
    The Working Group on Guidelines for (Electronic) National Bibliographies began its work with an analysis of the users and uses of national bibliographies (NBs). In addition to the well-known importance of NBs for libraries and librarians, other users and their requirements were identified. The results are presented and discussed; both existing and potential users and uses were taken into account. The group will continue its work by specifying the functionality needed to support the various needs of different users.
    Date
    1.11.2005 18:56:22
  18. Blosser, J.; Michaelson, R.; Routh. R.; Xia, P.: Defining the landscape of Web resources : Concluding Report of the BAER Web Resources Sub-Group (2000) 0.05
    0.05079958 = product of:
      0.15239874 = sum of:
        0.15239874 = sum of:
          0.1267497 = weight(_text_:group in 1447) [ClassicSimilarity], result of:
            0.1267497 = score(doc=1447,freq=16.0), product of:
              0.21906674 = queryWeight, product of:
                4.628715 = idf(docFreq=1173, maxDocs=44218)
                0.047327764 = queryNorm
              0.5785894 = fieldWeight in 1447, product of:
                4.0 = tf(freq=16.0), with freq of:
                  16.0 = termFreq=16.0
                4.628715 = idf(docFreq=1173, maxDocs=44218)
                0.03125 = fieldNorm(doc=1447)
          0.025649035 = weight(_text_:22 in 1447) [ClassicSimilarity], result of:
            0.025649035 = score(doc=1447,freq=2.0), product of:
              0.16573377 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047327764 = queryNorm
              0.15476047 = fieldWeight in 1447, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1447)
      0.33333334 = coord(1/3)
    
    Abstract
    The BAER Web Resources Group was charged in October 1999 with defining and describing the parameters of electronic resources that do not clearly belong to the categories being defined by the BAER Digital Group or the BAER Electronic Journals Group. After some difficulty identifying precisely which resources fell under the Group's charge, we finally named the following types of resources for our consideration: web sites, electronic texts, indexes, databases and abstracts, online reference resources, and networked and non-networked CD-ROMs. Electronic resources are a vast and growing collection that touches nearly every department within the Library. It is unrealistic to think one department can effectively administer all aspects of the collection. The Group then began to focus on the concern of bibliographic access to these varied resources, and to define parameters for handling or processing them within the Library. Some key elements became evident as the work progressed:
    * Selection process of resources to be acquired for the collection
    * Duplication of effort
    * Use of CORC
    * Resource Finder design
    * Maintenance of Resource Finder
    * CD-ROMs not networked
    * Communications
    * Voyager search limitations
    An unexpected collaboration with the Web Development Committee on the Resource Finder helped to steer the Group to more detailed descriptions of bibliographic access. This collaboration included development of data elements for the Resource Finder database, and some discussions on Library staff processing of the resources. The Web Resources Group invited expert testimony to help the Group broaden its view to envision public use of the resources and discuss concerns related to technical services processing. The first testimony came from members of the Resource Finder Committee. Some background information on the Web Development Resource Finder Committee was shared. The second testimony was from librarians who select electronic texts. Three main themes were addressed: accessing CD-ROMs; the issue of including non-networked CD-ROMs in the Resource Finder; and some special concerns about electronic texts. The third testimony came from librarians who select indexes and abstracts and also provide Reference services. Appendices to this report include minutes of the meetings with the experts (Appendix A), a list of proposed data elements to be used in the Resource Finder (Appendix B), and recommendations made to the Resource Finder Committee (Appendix C). Below are summaries of the key elements.
    Date
    21. 4.2002 10:22:31
  19. Madison, O.M.A.: ¬The IFLA Functional Requirements for Bibliographic Records : international standards for bibliographic control (2000) 0.05
    0.048031084 = product of:
      0.14409325 = sum of:
        0.14409325 = sum of:
          0.11203195 = weight(_text_:group in 187) [ClassicSimilarity], result of:
            0.11203195 = score(doc=187,freq=8.0), product of:
              0.21906674 = queryWeight, product of:
                4.628715 = idf(docFreq=1173, maxDocs=44218)
                0.047327764 = queryNorm
              0.5114056 = fieldWeight in 187, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                4.628715 = idf(docFreq=1173, maxDocs=44218)
                0.0390625 = fieldNorm(doc=187)
          0.032061294 = weight(_text_:22 in 187) [ClassicSimilarity], result of:
            0.032061294 = score(doc=187,freq=2.0), product of:
              0.16573377 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047327764 = queryNorm
              0.19345059 = fieldWeight in 187, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=187)
      0.33333334 = coord(1/3)
    
    Abstract
    The formal charge for the IFLA study involving international bibliography standards was to delineate the functions that are performed by the bibliographic record with respect to various media, applications, and user needs. The method used was the entity relationship analysis technique. Three groups of entities that are the key objects of interest to users of bibliographic records were defined. The primary group contains four entities: work, expression, manifestation, and item. The second group includes entities responsible for the intellectual or artistic content, production, or ownership of entities in the first group. The third group includes entities that represent concepts, objects, events, and places. In the study we identified the attributes associated with each entity and the relationships that are most important to users. The attributes and relationships were mapped to the functional requirements for bibliographic records that were defined in terms of four user tasks: to find, identify, select, and obtain. Basic requirements for national bibliographic records were recommended based on the entity analysis. The recommendations of the study are compared with two standards, AACR (Anglo-American Cataloguing Rules) and the Dublin Core, to place them into pragmatic context. The results of the study are being used in the review of the complete set of ISBDs as the initial benchmark in determining data elements for each format.
    Date
    10. 9.2000 17:38:22
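The primary entity group described in the Madison abstract above maps naturally onto a nested data model: a work is realized through expressions, embodied in manifestations, and exemplified by items, with requirements defined against four user tasks. A minimal sketch; the attribute names are illustrative assumptions, not drawn from the FRBR report:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Item:
    """A single physical or digital exemplar (Group 1 entity)."""
    shelfmark: str

@dataclass
class Manifestation:
    """A physical embodiment of an expression, e.g. one edition."""
    publisher: str
    items: List[Item] = field(default_factory=list)

@dataclass
class Expression:
    """An intellectual realization of a work, e.g. one translation."""
    language: str
    manifestations: List[Manifestation] = field(default_factory=list)

@dataclass
class Work:
    """The distinct intellectual or artistic creation itself."""
    title: str
    expressions: List[Expression] = field(default_factory=list)

# The four user tasks the functional requirements are defined against.
USER_TASKS = ("find", "identify", "select", "obtain")
```

The one-to-many nesting mirrors the entity-relationship analysis the study used: each level down the chain adds attributes a user needs for a different task, from finding a work to obtaining a specific item.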
  20. Tredinnick, L.: Why Intranets fail (and how to fix them) : a practical guide for information professionals (2004) 0.05
    0.046884324 = product of:
      0.14065297 = sum of:
        0.14065297 = weight(_text_:analogies in 4499) [ClassicSimilarity], result of:
          0.14065297 = score(doc=4499,freq=2.0), product of:
            0.38810563 = queryWeight, product of:
              8.200379 = idf(docFreq=32, maxDocs=44218)
              0.047327764 = queryNorm
            0.362409 = fieldWeight in 4499, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.200379 = idf(docFreq=32, maxDocs=44218)
              0.03125 = fieldNorm(doc=4499)
      0.33333334 = coord(1/3)
    
    Abstract
    This book is a practical guide to some of the common problems associated with intranets, and solutions to those problems. The book takes a unique end-user perspective on the role of intranets within organisations. It explores how the needs of the end-user very often conflict with the needs of the organisation, creating a confusion of purpose that impedes the success of the intranet. It sets out clearly why intranets cannot be thought of as merely internal Internets, and require their own management strategies and approaches. The book draws on a wide range of examples and analogies from a variety of contexts to set out in a clear and concise way the issues at the heart of failing intranets. It presents step-by-step solutions with universal application. Each issue discussed is accompanied by short practical suggestions for improved intranet design and architecture.

Types

  • a 1372
  • m 171
  • el 119
  • s 60
  • b 26
  • x 15
  • i 13
  • n 6
  • r 3
