Search (21410 results, page 1071 of 1071)

  1. Markoff, J.: Researchers announce advance in image-recognition software (2014) 0.00
    2.4351051E-4 = product of:
      0.0024351052 = sum of:
        0.0024351052 = product of:
          0.007305315 = sum of:
            0.007305315 = weight(_text_:online in 1875) [ClassicSimilarity], result of:
              0.007305315 = score(doc=1875,freq=2.0), product of:
                0.08714639 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.028714733 = queryNorm
                0.08382809 = fieldWeight in 1875, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1875)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
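The nested "product of / sum of" blocks above are Lucene's explain() output for ClassicSimilarity (TF-IDF) scoring. The leaf score can be reproduced from the printed factors: fieldWeight = tf × idf × fieldNorm, queryWeight = idf × queryNorm, the leaf score is queryWeight × fieldWeight, and the two coord factors then scale it to the final document score. A minimal sketch in Python, using the numbers from the tree above (the function and variable names are mine, not Lucene's):

```python
import math

def classic_similarity_score(freq, doc_freq, max_docs, query_norm, field_norm):
    """Reproduce one leaf of a Lucene ClassicSimilarity explain() tree."""
    tf = math.sqrt(freq)                             # 1.4142135 = tf(freq=2.0)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 3.0349014 for "online"
    query_weight = idf * query_norm                  # 0.08714639
    field_weight = tf * idf * field_norm             # 0.08382809
    return query_weight * field_weight               # 0.007305315

# Values taken from the explanation tree for result 1 above.
leaf = classic_similarity_score(freq=2.0, doc_freq=5778, max_docs=44218,
                                query_norm=0.028714733, field_norm=0.01953125)

# The coord factors shown above: 1 of 3 inner clauses matched, 1 of 10 outer.
final = leaf * (1.0 / 3.0) * (1.0 / 10.0)
print(final)  # ≈ 2.435e-4, the final score shown for result 1
```

Every factor in the tree follows from these four inputs, which is why the same structure repeats for each hit below with only freq, docFreq, and fieldNorm changing.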
    
    Content
    "Until now, so-called computer vision has largely been limited to recognizing individual objects. The new software, described on Monday by researchers at Google and at Stanford University, teaches itself to identify entire scenes: a group of young men playing Frisbee, for example, or a herd of elephants marching on a grassy plain. The software then writes a caption in English describing the picture. Compared with human observations, the researchers found, the computer-written descriptions are surprisingly accurate. The advances may make it possible to better catalog and search for the billions of images and hours of video available online, which are often poorly described and archived. At the moment, search engines like Google rely largely on written language accompanying an image or video to ascertain what it contains. "I consider the pixel data in images and video to be the dark matter of the Internet," said Fei-Fei Li, director of the Stanford Artificial Intelligence Laboratory, who led the research with Andrej Karpathy, a graduate student. "We are now starting to illuminate it." Dr. Li and Mr. Karpathy published their research as a Stanford University technical report. The Google team published their paper on arXiv.org, an open source site hosted by Cornell University.
  2. Learning XML (2003) 0.00
    1.9480841E-4 = product of:
      0.0019480841 = sum of:
        0.0019480841 = product of:
          0.005844252 = sum of:
            0.005844252 = weight(_text_:online in 3101) [ClassicSimilarity], result of:
              0.005844252 = score(doc=3101,freq=2.0), product of:
                0.08714639 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.028714733 = queryNorm
                0.067062475 = fieldWeight in 3101, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.015625 = fieldNorm(doc=3101)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
  3. ¬The ABCs of XML : the librarian's guide to the eXtensible Markup Language (2000) 0.00
    1.9480841E-4 = product of:
      0.0019480841 = sum of:
        0.0019480841 = product of:
          0.005844252 = sum of:
            0.005844252 = weight(_text_:online in 3102) [ClassicSimilarity], result of:
              0.005844252 = score(doc=3102,freq=2.0), product of:
                0.08714639 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.028714733 = queryNorm
                0.067062475 = fieldWeight in 3102, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.015625 = fieldNorm(doc=3102)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Footnote
Rez. in: JASIST 55(2004) no.14, S.1304-1305 (Z. Holbrooks):"The eXtensible Markup Language (XML) and its family of enabling technologies (XPath, XPointer, XLink, XSLT, et al.) were the new "new thing" only a couple of years ago. Happily, XML is now a W3C standard, and its enabling technologies are rapidly proliferating and maturing. Together, they are changing the way data is handled on the Web and how legacy data is accessed and leveraged in corporate archives, and they offer the Semantic Web community a powerful toolset. Library and information professionals need a basic understanding of what XML is and what its impacts will be on the library community as content vendors and publishers convert to the new standards. Norman Desmarais aims to provide librarians with an overview of XML and some potential library applications. The ABCs of XML contains the useful basic information that most general XML works cover. It is addressed to librarians, as evidenced by the occasional reference to periodical vendors, MARC, and OPACs. However, librarians without SGML, HTML, database, or programming experience may find the work daunting. The snippets of code, most incomplete and unaccompanied by screenshots to illustrate the result of the code's execution, obscure more often than they enlighten. A single code sample (p. 91, a book purchase order) is immediately recognizable and sensible. There are no figures, illustrations, or screenshots. Subsection headings are used conservatively. Readers are confronted with page after page of unbroken technical text, and occasionally oddly formatted text (in some of the code samples). The author concentrates on commercial products and projects. Library and agency initiatives (for example, the National Institutes of Health HL-7 and the U.S. Department of Education's GEM project) are notable for their absence.
The Library of Congress USMARC to SGML effort is discussed in chapter 1, which covers the relationship of XML to its parent SGML, the XML processor, and data type definitions, using MARC as its illustrative example. Chapter 3 addresses the stylesheet options for XML, including DSSSL, CSS, and XSL. The Document Style Semantics and Specification Language (DSSSL) was created for use with SGML, and pruned into DSSSL-Lite and further (DSSSL-online). Cascading Style Sheets (CSS) were created for use with HTML. Extensible Style Language (XSL) is a further revision (and extension) of DSSSL-o specifically for use with XML. Discussion of aural stylesheets and Synchronized Multimedia Integration Language (SMIL) round out the chapter.
  4. Boeuf, P. le: Functional Requirements for Bibliographic Records (FRBR) : hype or cure-all (2005) 0.00
    1.9480841E-4 = product of:
      0.0019480841 = sum of:
        0.0019480841 = product of:
          0.005844252 = sum of:
            0.005844252 = weight(_text_:online in 175) [ClassicSimilarity], result of:
              0.005844252 = score(doc=175,freq=2.0), product of:
                0.08714639 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.028714733 = queryNorm
                0.067062475 = fieldWeight in 175, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.015625 = fieldNorm(doc=175)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Footnote
The third section opens with an emblematic article by Barbara Tillett about the impact that implementation of the FRBR model has on future library catalogues. The novelty brought by the model is expected to influence both cataloguing codes and practice and the design of new library systems. Implementation issues are also treated by Maja Žumer and Gerhard Riesthuis in an article describing the application of the FRBR model to the Slovenian national bibliography. Maja Žumer reports another instance of the implementation of FRBR, namely the European Research Initiative. The author describes the initiative originating from ELAG (European Library Automation Group) and IFLA and proposes an agenda of future research and action. The next experiment, described by Thomas Hickey and Edward O'Neill, brings to our attention an algorithm developed at OCLC that identifies sets of works for collocation purposes. In so doing, the FRBR model is applied to the aggregate of works in OCLC's huge and rapidly growing WorldCat. An application of the FRBR conceptual approach to UNESCO's ISIS retrieval software is presented by Roberto Sturman as his personal experiment. The database structure and the relationships between entities are explained together with their functionalities in three different interfaces. The practical benefits of applying the FRBR model to enhanced displays of bibliographic records in online catalogues are explored in the article by Jacqueline Radebaugh and Corey Keith. The FRBR Display Tool, based on XML technologies, was "developed to transform bibliographic data found in MARC 21 record files into meaningful displays by grouping them into [...] FRBR entities" (p. 271). The last section, by Dick Miller, is dedicated to a rather futuristic view of cataloguing, which the editor calls "a revolutionary alternative to the comparatively conservative and `traditional' approach that FRBR represents" (p. 11).
XOBIS, like the previously mentioned application, uses XML technologies to reorganize bibliographic and authority data elements into an integrated structure.
  5. Design and usability of digital libraries : case studies in the Asia-Pacific (2005) 0.00
    1.9480841E-4 = product of:
      0.0019480841 = sum of:
        0.0019480841 = product of:
          0.005844252 = sum of:
            0.005844252 = weight(_text_:online in 93) [ClassicSimilarity], result of:
              0.005844252 = score(doc=93,freq=2.0), product of:
                0.08714639 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.028714733 = queryNorm
                0.067062475 = fieldWeight in 93, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.015625 = fieldNorm(doc=93)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Footnote
    Even though each chapter is short, the entire book covers a vast amount of information. This book is meant to provide an introductory sampling of issues discovered through various case studies, not provide an in-depth report on each of them. The references included at the end of each chapter are particularly helpful because they lead to more information about issues that the particular case study raises. By including a list of references at the end of each chapter, the authors want to encourage interested readers to pursue more about the topics presented. This book clearly offers many opportunities to explore issues on the same topics further. The appendix at the end of the book also contains additional useful information that readers might want to consult if they are interested in finding out more about digital libraries. Selected resources are provided in the form of a list that includes such topics as journal special issues, digital library conference proceedings, and online databases. A key issue that this book brings up is how to include different cultural materials in digital libraries. For example, in chapter 16, the concerns and issues surrounding Maori heritage materials are introduced. The terms and concepts used when classifying Maori resources are so delicate that the meaning behind them can completely change with even a slight variation. Preserving other cultures correctly is important, and researchers need to consider the consequences of any errors made during digitization of resources. Another example illustrating the importance of including information about different cultures is presented in chapter 9. The authors talk about the various different languages used in the world and suggest ways to integrate them into information retrieval systems. As all digital library researchers know, the ideal system would allow all users to retrieve results in their own languages. 
The authors go on to discuss a few approaches that can be taken to assist with overcoming this challenge.
  6. Introducing information management : an information research reader (2005) 0.00
    1.9480841E-4 = product of:
      0.0019480841 = sum of:
        0.0019480841 = product of:
          0.005844252 = sum of:
            0.005844252 = weight(_text_:online in 440) [ClassicSimilarity], result of:
              0.005844252 = score(doc=440,freq=2.0), product of:
                0.08714639 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.028714733 = queryNorm
                0.067062475 = fieldWeight in 440, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.015625 = fieldNorm(doc=440)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Footnote
    Rez. in: JASIST 58(2007) no.4, S.607-608 (A.D. Petrou): "One small example of a tension in the book's chapters can be expressed as: What exactly falls under information management (IM) as a domain of study? Is it content and research about a traditional life cycle of information, or is it the latter and also any other important issue in information research, such as culture, virtual reality, and online behavior, and communities of practice? In chapter 13, T.D. Wilson states, "Information management is the management of the life cycle to the point of delivery to the information user" (p. 164), yet as he also recognizes, other aspects of information are now included as IM's study matter. On p. 163 of the same chapter, Wilson offers Figure 12.2, titled "The extended life cycle of information." The life cycle in this case includes the following information stages: acquisition, organization, storage, retrieval, access and lending, and dissemination. All of these six stages Wilson labels, inside the circle, as IM. The rest of the extended information life cycle is information use, which includes use, sharing, and application. Chapter 3's author, Gunilla Widen-Wulff, quoting Davenport (1994), states "effective IM is about helping people make effective use of the information, rather than the machines" (p. 31). Widen-Wulff, however, addresses IM from an information culture perspective. To review the book's critical content, IM definitions and research methodology and methods reported in chapters are critically summarized next. This will provide basic information for anyone interested in using the book as an information research reader.
  7. Nuovo soggettario : guida al sistema italiano di indicizzazione per soggetto, prototipo del thesaurus (2007) 0.00
    1.9480841E-4 = product of:
      0.0019480841 = sum of:
        0.0019480841 = product of:
          0.005844252 = sum of:
            0.005844252 = weight(_text_:online in 664) [ClassicSimilarity], result of:
              0.005844252 = score(doc=664,freq=2.0), product of:
                0.08714639 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.028714733 = queryNorm
                0.067062475 = fieldWeight in 664, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.015625 = fieldNorm(doc=664)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Theme
Verbal indexing languages in online retrieval
  8. Broughton, V.: Essential classification (2004) 0.00
    1.9480841E-4 = product of:
      0.0019480841 = sum of:
        0.0019480841 = product of:
          0.005844252 = sum of:
            0.005844252 = weight(_text_:online in 2824) [ClassicSimilarity], result of:
              0.005844252 = score(doc=2824,freq=2.0), product of:
                0.08714639 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.028714733 = queryNorm
                0.067062475 = fieldWeight in 2824, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.015625 = fieldNorm(doc=2824)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Footnote
Essential Classification is also an exercise book. Indeed, it contains a number of practical exercises and activities in every chapter, along with suggested answers. Unfortunately, the answers are too often provided without the justifications and explanations that students would no doubt demand. The author has taken great care to explain all technical terms in her text, but formal definitions are also gathered in an extensive 172-term Glossary; appropriately, these terms appear in bold type the first time they are used in the text. A short, very short, annotated bibliography of standard classification textbooks and of manuals for the use of major classification schemes is provided. A detailed 11-page index completes the set of learning aids, which will be useful to an audience of students in their effort to grasp the basic concepts of the theory and practice of document classification in a traditional environment. Essential Classification is a fine textbook. However, this reviewer deplores the fact that it presents only a very "traditional" view of classification, without much reference to newer environments such as the Internet, where classification also manifests itself in various forms. In Essential Classification, books are always used as examples, and we have to take the author's word that traditional classification practices and tools can also be applied to other types of documents and elsewhere than in the traditional library. Vanda Broughton writes, for example, that "Subject headings can't be used for physical arrangement" (p. 101), but this is not entirely true. Subject headings can be used for the physical arrangement of vertical files, for example, with each folder bearing a simple or complex heading which is then used for internal organization. And if it is true that subject headings cannot be reproduced on the spine of [physical] books (p. 93), the situation is certainly different on the World Wide Web, where subject headings as metadata can be most useful in ordering a collection of hot links. The emphasis is also on the traditional paper-based, rather than on the electronic, version of classification schemes, with excellent justifications of course. The reality is, however, that supporting organizations (LC, OCLC, etc.) are now providing high-quality services online, and that updates are now available only in electronic format and no longer on paper. E-based versions of classification schemes could be safely ignored in a theoretical text, but they have to be described and explained in a textbook published in 2005. One last comment: Professor Broughton tends to use the same term, "classification," to represent both the process (as in classification is grouping) and the tool (as in constructing a classification, using a classification, etc.). Even in the Glossary, where classification is first well defined as a process, and classification scheme as "a set of classes ...", the definition of classification scheme continues: "the classification consists of a vocabulary (...) and syntax..." (p. 296-297). Such an ambiguous use of the term classification seems unfortunate and unnecessarily confusing in an otherwise very good basic textbook on the categorization of concepts and subjects, document organization, and subject representation."
  9. Bruce, H.: ¬The user's view of the Internet (2002) 0.00
    1.945225E-4 = product of:
      0.0019452249 = sum of:
        0.0019452249 = product of:
          0.0058356747 = sum of:
            0.0058356747 = weight(_text_:22 in 4344) [ClassicSimilarity], result of:
              0.0058356747 = score(doc=4344,freq=2.0), product of:
                0.1005541 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028714733 = queryNorm
                0.058035173 = fieldWeight in 4344, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.01171875 = fieldNorm(doc=4344)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
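This entry matches on the term "22" rather than "online", so the tree above prints a different idf: ClassicSimilarity computes idf = 1 + ln(maxDocs / (docFreq + 1)), and a rarer term (lower docFreq) gets a higher idf. A quick sketch checking both idf values printed on this page (the function name is mine):

```python
import math

def idf(doc_freq, max_docs):
    """ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))."""
    return 1.0 + math.log(max_docs / (doc_freq + 1))

# The two query terms scored on this page, with the idf values the
# explain() trees print for them:
idf_online = idf(doc_freq=5778, max_docs=44218)  # printed: 3.0349014
idf_22 = idf(doc_freq=3622, max_docs=44218)      # printed: 3.5018296
print(idf_online, idf_22)
```

The "+1" in the denominator keeps the logarithm finite even for a term that appears in no document.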
    
    Footnote
Chapter 2 (Technology and People) focuses on several theories of technological acceptance and diffusion. Unfortunately, Bruce's presentation is somewhat confusing as he moves from one theory to the next, never quite connecting them into a logical sequence or coherent whole. Two theories are of particular interest to Bruce: the Theory of Diffusion of Innovations and the Theory of Planned Behavior. The Theory of Diffusion of Innovations is an "information-centric view of technology acceptance" in which technology adopters are placed in the information flows of society, from which they learn about innovations and "drive innovation adoption decisions" (p. 20). The Theory of Planned Behavior maintains that the "performance of a behavior is a joint function of intentions and perceived behavioral control" (i.e., how much control a person thinks they have) (pp. 22-23). Bruce combines these two theories to form the basis for the Technology Acceptance Model. This model posits that "an individual's acceptance of information technology is based on beliefs, attitudes, intentions, and behaviors" (p. 24). In all these theories and models echoes a recurring theme: "individual perceptions of the innovation or technology are critical" in terms of both its characteristics and its use (pp. 24-25). From these, in turn, Bruce derives a predictive theory of the role personal perceptions play in technology adoption: Personal Innovativeness of Information Technology Adoption (PIITA). Personal innovativeness is defined as "the willingness of an individual to try out any new information technology" (p. 26). In general, the PIITA theory predicts that information technology will be adopted by individuals who have a greater exposure to mass media, rely less on the evaluation of information technology by others, exhibit a greater ability to cope with uncertainty and take risks, and require a less positive perception of an information technology prior to its adoption.
Chapter 3 (A Focus on Usings) introduces the User-Centered Paradigm (UCP). The UCP is characteristic of the shift of emphasis from technology to users as the driving force behind technology and research agendas for Internet development [for a dissenting view, see Andrew Dillon's (2003) challenge to the utility of user-centeredness for design guidance]. It entails the "broad acceptance of the user-oriented perspective across a range of disciplines and professional fields," such as business, education, cognitive engineering, and information science (p. 34).
  10. Chowdhury, G.G.; Chowdhury, S.: Introduction to digital libraries (2003) 0.00
    1.7045737E-4 = product of:
      0.0017045736 = sum of:
        0.0017045736 = product of:
          0.005113721 = sum of:
            0.005113721 = weight(_text_:online in 6119) [ClassicSimilarity], result of:
              0.005113721 = score(doc=6119,freq=2.0), product of:
                0.08714639 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.028714733 = queryNorm
                0.058679666 = fieldWeight in 6119, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.013671875 = fieldNorm(doc=6119)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Footnote
Another weakness of the book is its favoritism towards the authors' own works. To a large extent, this bias is natural, as the authors know their own works best. However, in an introductory text, it is critical to reference the most appropriate sources and give a balanced view of the field. In this respect, I feel the book could be more objective in its selection of references and research projects. Introduction to Digital Libraries is definitely a book written for a purpose. LIS undergraduates and "practicing professionals who need to know about recent developments in the field of digital libraries" (p. xiv) will find this book a fine introduction, as it is clearly written and accessible to laymen, giving explanations without delving into terminology and math. As it surveys a large number of projects, it is also an ideal starting point for students to pick and investigate particular DL research projects. However, graduate LIS students who already have a solid understanding of library fundamentals, as well as computer science students, may find this volume lacking in details. Alternative texts such as Lesk (1999) and Arms (2000) are possibly more suitable for those who need to investigate topics in depth. For the experienced practitioner or researcher delving into the DL field for the first time, the recent 2002 ARIST chapter by Fox and Urs may also be a suitable alternative. In their introduction, the authors ask, "What are digital libraries? How do they differ from online databases and search services? Will they replace print libraries? What impact will they have on people and society?" (p. 3). To answer these questions, Chowdhury and Chowdhury offer a multitude of case studies to let the audience draw their own conclusions. To this end, it is my opinion that Introduction to Digital Libraries serves a useful purpose as a supplemental text in the digital library curriculum but misses the mark of being an authoritative textbook."

Types

  • a 16427
  • m 2761
  • el 1292
  • s 818
  • x 643
  • i 200
  • r 173
  • b 101
  • ? 89
  • n 67
  • p 51
  • l 29
  • d 26
  • h 25
  • u 19
  • fi 12
  • z 4
  • v 2
  • A 1
  • EL 1
  • au 1
  • ms 1
