Search (9485 results, page 474 of 475)

  • language_ss:"e"
  1. Bertolucci, K.: Happiness is taxonomy : four structures for Snoopy - libraries' method of categorizing and classification (2003) 0.00
    0.0017307586 = product of:
      0.0051922756 = sum of:
        0.0051922756 = product of:
          0.015576826 = sum of:
            0.015576826 = weight(_text_:online in 1212) [ClassicSimilarity], result of:
              0.015576826 = score(doc=1212,freq=2.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.100593716 = fieldWeight in 1212, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1212)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
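    The explanation tree above is standard Lucene ClassicSimilarity output: tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), fieldWeight = tf × idf × fieldNorm, queryWeight = idf × queryNorm, and the leaf score queryWeight × fieldWeight is then scaled by the two coord(1/3) factors because only one of three query clauses matched. As a cross-check, a minimal Python sketch (not the engine's code) reproduces the numbers:

```python
import math

# Factors copied from the explanation tree for doc 1212, term "online".
freq, doc_freq, max_docs = 2.0, 5778, 44218
query_norm = 0.051022716   # supplied by the engine; depends on the full query
field_norm = 0.0234375     # index-time length normalization for this field

tf = math.sqrt(freq)                                # 1.4142135
idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))   # 3.0349014
query_weight = idf * query_norm                     # 0.1548489
field_weight = tf * idf * field_norm                # 0.100593716
leaf = query_weight * field_weight                  # 0.015576826

# Two nested coord(1/3) factors: 1 of 3 query clauses matched at each level.
total = leaf / 3.0 / 3.0
print(f"leaf={leaf:.9f}  total={total:.10f}")       # ~0.0017307586
```

    The same arithmetic, with freq=4.0 (so tf=2.0) and the smaller fieldNorm values, reproduces every other score tree on this page.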
    
    Abstract
    Dewey and the Library of Congress
    The late 19th and early 20th centuries were a hotbed of intellectual activity for library categorizers. First Melvil Dewey developed his decimal system. Then the Library of Congress (LC) adapted Charles Ammi Cutter's alphanumeric system for its collection. Dewey, the only librarian popularly known for librarianship, had a healthy ego and placed information science at the very beginning of his classifications. The librarians at LC followed Cutter and relegated their profession to the back of their own bus, in the Zs. These two systems became the primary classifications accepted by the library community. I was once chastised at an SLA meeting for daring to design my own systems, and library schools that mainly train people for public and academic institutions reinforce this idea. In addition, LC provides cataloging and call numbers for almost every book commercially published in the United States and quite a few international publications. This is a seductive strategy for libraries that have little money and little time.
    These two systems contain drawbacks for special libraries. Let's see how they treat Snoopy. I'll be using Dewey for this exercise. Dewey has an index, which facilitates classification analysis. In addition, LC is a larger system, and we have space considerations here. However, other than length, call number building, and self-esteem, there is not much difference in the two theories. Figure 2 shows selected Dewey classifications for Snoopy, beagles, dogs, and animals (Melvil Dewey. Dewey Decimal Classification and Relative Index. 21st ed. Edited by Joan S. Mitchell, et al. Albany, NY: OCLC Online Computer Library Center, 1996). The call numbers are removed to emphasize hierarchy rather than notation. There are 234 categories.
    Both Dewey and LC are designed to describe the whole of human knowledge. For historic reasons, they do this from the perspective of an educated white male in 19th century America. This perspective presents some problems if your specialty is Snoopy. In "Generalities," newspaper cartoon strips are filed away under "Miscellaneous information, advice, amusement." However, a collection of Charles Schulz cartoons would be shelved way over in "The Arts → Drawing and decorative arts," thereby separating two almost equal subjects by a very wide distance. The generic vocabulary required to describe all of human knowledge is also problematic for specialists. In "The Arts → Standard subdivisions of fine and decorative arts and iconography," there are five synonyms for miscellaneous before we get to a real subject. Then it's another six facets to get to the dogs.
  2. Stahl, G.: Group cognition : computer support for building collaborative knowledge (2006) 0.00
    0.0017307586 = product of:
      0.0051922756 = sum of:
        0.0051922756 = product of:
          0.015576826 = sum of:
            0.015576826 = weight(_text_:online in 2391) [ClassicSimilarity], result of:
              0.015576826 = score(doc=2391,freq=2.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.100593716 = fieldWeight in 2391, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=2391)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    This book explores the software design, social practices, and collaboration theory that would be needed to support group cognition - collective knowledge that is constructed by small groups online. Innovative uses of global and local networks of linked computers make new ways of collaborative working, learning, and acting possible. In "Group Cognition", Gerry Stahl explores the technological and social reconfigurations that are needed to achieve computer-supported collaborative knowledge building - group cognition that transcends the limits of individual cognition. Computers can provide active media for social group cognition where ideas grow through the interactions within groups of people; software functionality can manage group discourse that results in shared understandings, new meanings, and collaborative learning. Stahl offers software design prototypes, analyses empirical instances of collaboration, and elaborates a theory of collaboration that takes the group, rather than the individual, as the unit of analysis. Stahl's design studies concentrate on mechanisms to support group formation, multiple interpretive perspectives, and the negotiation of group knowledge in applications as varied as collaborative curriculum development by teachers, writing summaries by students, and designing space voyages by NASA engineers. His empirical analysis shows how, in small-group collaborations, the group constructs intersubjective knowledge that emerges from and appears in the discourse itself. This discovery of group meaning becomes the springboard for Stahl's outline of a social theory of collaborative knowing. Stahl also discusses such related issues as the distinction between meaning making at the group level and interpretation at the individual level, appropriate research methodology, philosophical directions for group cognition theory, and suggestions for further empirical work.
  3. Conceptual structures : logical, linguistic, and computational issues. 8th International Conference on Conceptual Structures, ICCS 2000, Darmstadt, Germany, August 14-18, 2000 (2000) 0.00
    0.0017193872 = product of:
      0.0051581617 = sum of:
        0.0051581617 = product of:
          0.015474485 = sum of:
            0.015474485 = weight(_text_:retrieval in 691) [ClassicSimilarity], result of:
              0.015474485 = score(doc=691,freq=2.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.10026272 = fieldWeight in 691, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=691)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Content
    Concepts and Language: The Role of Conceptual Structure in Human Evolution (Keith Devlin) - Concepts in Linguistics - Concepts in Natural Language (Gisela Harras) - Patterns, Schemata, and Types: Author Support through Formalized Experience (Felix H. Gatzemeier) - Conventions and Notations for Knowledge Representation and Retrieval (Philippe Martin) - Conceptual Ontology: Ontology, Metadata, and Semiotics (John F. Sowa) - Pragmatically Yours (Mary Keeler) - Conceptual Modeling for Distributed Ontology Environments (Deborah L. McGuinness) - Discovery of Class Relations in Exception Structured Knowledge Bases (Hendra Suryanto, Paul Compton) - Conceptual Graphs: Perspectives: CGs Applications: Where Are We 7 Years after the First ICCS? (Michel Chein, David Genest) - The Engineering of a CG-Based System: Fundamental Issues (Guy W. Mineau) - Conceptual Graphs, Metamodeling, and Notation of Concepts (Olivier Gerbé, Guy W. Mineau, Rudolf K. Keller) - Knowledge Representation and Reasonings Based on Graph Homomorphism (Marie-Laure Mugnier) - User Modeling Using Conceptual Graphs for Intelligent Agents (James F. Baldwin, Trevor P. Martin, Aimilia Tzanavari) - Towards a Unified Querying System of Both Structured and Semi-structured Imprecise Data Using Fuzzy View (Patrice Buche, Ollivier Haemmerlé) - Formal Semantics of Conceptual Structures: The Extensional Semantics of the Conceptual Graph Formalism (Guy W. Mineau) - Semantics of Attribute Relations in Conceptual Graphs (Pavel Kocura) - Nested Concept Graphs and Triadic Power Context Families (Susanne Prediger) - Negations in Simple Concept Graphs (Frithjof Dau) - Extending the CG Model by Simulations (Jean-François Baget) - Contextual Logic and Formal Concept Analysis: Building and Structuring Description Logic Knowledge Bases: Using Least Common Subsumers and Concept Analysis (Franz Baader, Ralf Molitor) - On the Contextual Logic of Ordinal Data (Silke Pollandt, Rudolf Wille) - Boolean Concept Logic (Rudolf Wille) - Lattices of Triadic Concept Graphs (Bernd Groh, Rudolf Wille) - Formalizing Hypotheses with Concepts (Bernhard Ganter, Sergei O. Kuznetsov) - Generalized Formal Concept Analysis (Laurent Chaudron, Nicolas Maille) - A Logical Generalization of Formal Concept Analysis (Sébastien Ferré, Olivier Ridoux) - On the Treatment of Incomplete Knowledge in Formal Concept Analysis (Peter Burmeister, Richard Holzer) - Conceptual Structures in Practice: Logic-Based Networks: Concept Graphs and Conceptual Structures (Peter W. Eklund) - Conceptual Knowledge Discovery and Data Analysis (Joachim Hereth, Gerd Stumme, Rudolf Wille, Uta Wille) - CEM - A Conceptual Email Manager (Richard Cole, Gerd Stumme) - A Contextual-Logic Extension of TOSCANA (Peter Eklund, Bernd Groh, Gerd Stumme, Rudolf Wille) - A Conceptual Graph Model for W3C Resource Description Framework (Olivier Corby, Rose Dieng, Cédric Hébert) - Computational Aspects of Conceptual Structures: Computing with Conceptual Structures (Bernhard Ganter) - Symmetry and the Computation of Conceptual Structures (Robert Levinson) - An Introduction to SNePS 3 (Stuart C. Shapiro) - Composition Norm Dynamics Calculation with Conceptual Graphs (Aldo de Moor) - From PROLOG++ to PROLOG+CG: A CG Object-Oriented Logic Programming Language (Adil Kabbaj, Martin Janta-Polczynski) - A Cost-Bounded Algorithm to Control Events Generalization (Gaël de Chalendar, Brigitte Grau, Olivier Ferret)
  4. Gonzalez, L.: What is FRBR? (2005) 0.00
    0.0016317749 = product of:
      0.004895325 = sum of:
        0.004895325 = product of:
          0.0146859735 = sum of:
            0.0146859735 = weight(_text_:online in 3401) [ClassicSimilarity], result of:
              0.0146859735 = score(doc=3401,freq=4.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.09484067 = fieldWeight in 3401, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.015625 = fieldNorm(doc=3401)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Content
    "Catalogers, catalog managers, and others in library technical services have become increasingly interested in, worried over, and excited about FRBR (the acronym for Functional Requirements of Bibliographic Records). Staff outside of the management of the library's bibliographic database may wonder what the fuss is about (FERBER? FURBUR?), assuming that FRBR is just another addition to the stable of acronyms that catalogers bandy about, a mate or sibling to MARC and AACR2. FRBR, however, has the potential to inspire dramatic changes in library catalogs, and those changes will greatly impact how reference and resource sharing staff and patrons use this core tool. FRBR is a conceptual model for how bibliographic databases might be structured, considering what functions bibliographic records should fulfill in an era when card catalogs are databases with unique possibilities. In some ways FRBR clarifies certain cataloging practices that librarians have been using for over 160 years, since Sir Anthony Panizzi, Keeper of the Printed Books at the British Museum, introduced a set of 91 rules to catalog the print collections of the museum. Sir Anthony believed that patrons should be able to find a particular work by looking in the catalog, that all of an author's works should be retrievable, and that all editions of a work should be assembled together. In other ways, FRBR extends upon past practice to take advantage fully of the capabilities of digital technology to associate bibliographic records in ways a card catalog cannot. FRBR was prepared by a study group assembled by IFLA (International Federation of Library Associations and Institutions) that included staff of the Library of Congress (LC). The final report of the group, "Functional Requirements for Bibliographic Records," is available online. The group began by asking how an online library catalog might better meet users' needs to find, identify, select, and obtain the resources they want.
  5. Janes, J.: Introduction to reference work in the digital age (2003) 0.00
    0.0016317749 = product of:
      0.004895325 = sum of:
        0.004895325 = product of:
          0.0146859735 = sum of:
            0.0146859735 = weight(_text_:online in 3993) [ClassicSimilarity], result of:
              0.0146859735 = score(doc=3993,freq=4.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.09484067 = fieldWeight in 3993, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.015625 = fieldNorm(doc=3993)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Footnote
    The discussion of modes for digital reference would be incomplete without focusing on the technologies that support this activity. E-mail, Web forms, chat, instant messaging, and videoconferencing, as well as call-center-based software, are now being adapted for use in libraries. The book discusses the technologies currently available and on the horizon to support digital reference services. While these sections of the book may not age well, they will provide us with a historical glimpse of the nascent development of such tools and how they were used at the beginning of the digital reference age. True to the emphasis on decision-making, the chapter on technology includes a list of functions that reference librarians would want in software to support digital reference. While no current applications have all of these features, this list provides librarians with some ideas concerning possible features that can be prioritized to aid in a selection process. Despite the emphasis on technology, Janes contextualizes this discussion with several significant issues relating to its implementation. These include infrastructure, collaborative service standards, service design, user authentication, and user expectations. The sections on collaborative service models and service design are particularly interesting since they are both in their infancy. Readers wanting an answer or the "best" design of either institutional or collaborative digital reference service will be disappointed. However, raising these considerations is important, and Janes points out how crucial these issues will be as online reference service matures. User authentication in the context of reference service is especially tricky, since tensions can emerge between license agreements and the range of people who may or may not be covered by these contracts querying reference librarians. Finally, no discussion of digital reference is complete without a discussion of the possibility of 24/7 reference service and the ensuing user expectations. While Janes has no answers to the dilemmas these raise, he does alert libraries providing digital reference services to some of the realities. One is that libraries will get a broader range of questions, which could impact staff time and collection development to support these questions, and necessitate either a confirmation of priorities or a reprioritization of activities. Another reality is that the users of digital reference services may never have partaken of their services before. In fact, for libraries funded to serve a particular constituency (public libraries, academic libraries) this influx of users raises questions about levels of service, funding, and policy. Finally, in keeping with the underlying theme of values that pervades the book, Janes points out the deeper issues related to technology, such as the increasing ability to track users on the web. While he realizes that anonymous information about those who ask reference questions would provide reference librarians with a great deal of information to hone services and better serve constituencies, he is well aware of the dangers involved in collecting patron information in electronic form.
    Given that the Web is constantly changing, Janes turns his focus to the future of digital reference. Topics include changes in reference practice, restructuring resource utilization, and the evolving reference interview. These are crucial dimensions of digital reference practice that require attention. The most intriguing of these is the changing nature of the interaction with the patron. The majority of digital reference takes place without physical, aural, or visual cues to gauge understanding or to sense conclusion of the interaction. While Janes provides some guidelines for both digital reference interviewing and Web forms, he honestly admits that reference interviewing in the technologically mediated environment requires additional study in both the asynchronous and particularly synchronous communication modalities. As previously noted, Janes is as concerned about developing the infrastructure for digital reference as he is about the service itself. By infrastructure, Janes means not only the technological infrastructure, but also the people and the institution. In discussing the need for institutionalization of digital reference, he discusses (re)training reference staff, staffing models, and institutionalizing the service. The section on institutionalizing the service itself is particularly strong and presents a 10-step planning process for libraries to follow as they consider developing online services. The book ends with some final thoughts and exhortations to the readers. The author, as in the rest of the book, encourages experimentation, innovation, and risk taking. These are not characteristics that are automatically associated with librarians, but these qualities are not alien to readers either. The theme of planning and the value of connecting people with information pervade this chapter. In this closing, Janes subtly tells readers that his guidelines and proposals are just that - there is no magic bullet here. But he does argue that there has been good work done and some models that can be adopted, adapted, and improved (and then hopefully shared with others). In the end, Janes leaves readers with a feeling that there is a place for library reference service in the digital realm. Furthermore, he is convinced that the knowledge and skills of reference librarians are translatable into this arena. By focusing on the institutionalization of digital reference services, Janes is trying to get libraries to better position themselves in the virtual world, alongside the commercial services and the plethora of Web-based information competing for the patrons' attention."
  6. Theorizing digital cultural heritage : a critical discourse (2005) 0.00
    0.0016317749 = product of:
      0.004895325 = sum of:
        0.004895325 = product of:
          0.0146859735 = sum of:
            0.0146859735 = weight(_text_:online in 1929) [ClassicSimilarity], result of:
              0.0146859735 = score(doc=1929,freq=4.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.09484067 = fieldWeight in 1929, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1929)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Content
    Includes the contributions: Rise and fall of the post-photographic museum : technology and the transformation of art / Peter Walsh -- Materiality of virtual technologies : a new approach to thinking about the impact of multimedia in museums / Andrea Witcomb -- Beyond the cult of the replicant : museums and historical digital objects : traditional concerns, new discourses / Fiona Cameron -- Te Ahu Hiko : cultural heritage and indigenous objects, people, and environments / Deidre Brown -- Redefining digital art : disrupting borders / Beryl Graham -- Online activity and offline community : cultural institutions and new media art / Sarah Cook -- Crisis of authority : new lamps for old / Susan Hazan -- Digital cultural communication : audience and remediation / Angelina Russo and Jerry Watkins -- Digital knowledgescapes : cultural, theoretical, practical and usage issues facing museum collection databases in a digital epoch / Fiona Cameron and Helena Robinson -- Art is redeemed, mystery is gone : the documentation of contemporary art / Harald Kraemer -- Cultural information standards : political territory and rich rewards / Ingrid Mason -- Finding a future for digital cultural heritage resources using contextual information frameworks / Gavan McCarthy -- Engaged dialogism in virtual space : an exploration of research strategies for virtual museums / Suhas Deshpande, Kati Geber, and Corey Timpson -- Localized, personalized, and constructivist : a space for online museum learning / Ross Parry and Nadia Arbach -- Speaking in Rama : panoramic vision in cultural heritage visualization / Sarah Kenderdine -- Dialing up the past / Erik Champion and Bharat Dave -- Morphology of space in virtual heritage / Bernadette Flynn -- Toward tangible virtualities : tangialities / Slavko Milekic -- Ecological cybernetics, virtual reality, and virtual heritage / Maurizio Forte -- Geo-storytelling : a living archive of spatial culture / Scot T. Refsland, Marc Tuters, and Jim Cooley -- Urban heritage representations in hyperdocuments / Rodrigo Paraizo and José Ripper Kós -- Automatic archaeology : bridging the gap between virtual reality, artificial intelligence, and archaeology / Juan Antonio Barceló.
  7. Exploring artificial intelligence in the new millennium (2003) 0.00
    0.0016210539 = product of:
      0.0048631616 = sum of:
        0.0048631616 = product of:
          0.014589484 = sum of:
            0.014589484 = weight(_text_:retrieval in 2099) [ClassicSimilarity], result of:
              0.014589484 = score(doc=2099,freq=4.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.09452859 = fieldWeight in 2099, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.015625 = fieldNorm(doc=2099)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Footnote
    In Chapter 7, Jeff Rickel and W. Lewis Johnson have created a virtual environment with virtual humans for team training. The system is designed to allow a digital character to replace team members that may not be present. It is also designed to allow students to acquire the skills to occupy a designated role and to help coordinate their activities with their teammates. The paper presents a complex concept in a very manageable fashion. In Chapter 8, Jonathan Yedidia et al. study the initial issues that make up reasoning under uncertainty. This type of reasoning, in which the system takes in facts about a patient's condition and makes predictions about the patient's future condition, is a key issue being looked at by many medical expert system developers. Their research is based on a new form of belief propagation, derived by generalizing existing probabilistic inference methods that are widely used in AI and in numerous other areas such as statistical physics. The ninth chapter, by David McAllester and Robert E. Schapire, looks at the basic problem of learning a language model. This is something that would not be challenging for most people, but it can be quite arduous for a machine. The research focuses on a new technique, the leave-one-out estimator, that was used to investigate why statistical language models have had such success in this area of research. In Chapter 10, Peter Baumgartner looks at extending simplified theorem-proving techniques, which have been applied very effectively in propositional logic, to the first-order case. The author demonstrates how his new technique surpasses existing techniques in this area of AI research. The chapter simplifies a complex subject area, so that almost any reader with a basic background in AI could follow the theorem proving. In Chapter 11, David Cohen et al. analyze complexity issues in constraint satisfaction, a common problem-solving paradigm. The authors lay out how tractable classes of constraints give rise to new classes that are tractable and more expressive than previous ones. This is not a chapter for an inexperienced student or researcher in AI. In Chapter 12, Jaana Kekäläinen and Kalervo Järvelin examine the question of finding the most important documents for any given query in text-based retrieval. The authors put forth two new measures of relevance and attempt to show how expanding user queries based on facets of the domain benefits retrieval. This is a great interdisciplinary chapter for readers who do not have a strong AI background but would like to gain some insights into practical AI research. In Chapter 13, Tony Fountain et al. used machine learning techniques to help lower the cost of functional tests for ICs (integrated circuits) during the manufacturing process. The researchers used a probabilistic model of failure patterns extracted from existing data, which allowed the generation of a decision-theoretic policy used to guide and optimize the testing of ICs. This is another great interdisciplinary chapter for a reader interested in an actual physical example of an AI system, though it requires some AI knowledge.
  8. Current theory in library and information science (2002) 0.00
    0.0016210539 = product of:
      0.0048631616 = sum of:
        0.0048631616 = product of:
          0.014589484 = sum of:
            0.014589484 = weight(_text_:retrieval in 822) [ClassicSimilarity], result of:
              0.014589484 = score(doc=822,freq=4.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.09452859 = fieldWeight in 822, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.015625 = fieldNorm(doc=822)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Footnote
    However, for well over a century, major libraries in developed nations have been engaging in sophisticated measurement of their operations, and thoughtful scholars have been involved along the way; if no "unified theory" has emerged thus far, why would it happen in the near future? What if "libraries" are a historically determined conglomeration of distinct functions, some of which are much less important than others? It is telling that McGrath cites as many studies on brittle paper as he does investigations of reference services among his constellation of measurable services, even while acknowledging that the latter (as an aspect of "circulation") is more "essential." If one were to include in a unified theory similar phenomena outside of libraries - e.g., what happens in bookstores and WWW searches - it can be seen how difficult a coordinated explanation might become. Ultimately the value of McGrath's chapter is not in convincing the reader that a unified theory might emerge, but rather in highlighting the best in recent studies that examine library operations, identifying robust conclusions, and arguing for the necessity of clarifying and coordinating common variables and units of analysis. McGrath's article is one that would be useful for a general course in LIS methodology, and certainly for more specific lectures on the evaluation of libraries. I'm going to focus most of my comments on the remaining articles about theory, rather than the others that offer empirical results about the growth or quality of literature. I'll describe the latter only briefly. The best way to approach this issue is by first reading McKechnie and Pettigrew's thorough survey of the "Use of Theory in LIS research." Earlier results of their extensive content analysis of 1,160 LIS articles have been published in other journals before, but they are especially pertinent here. These authors find that only a third of LIS literature makes overt reference to theory, and that both usage and type of theory are correlated with the specific domain of the research (e.g., historical treatments versus user studies versus information retrieval). Lynne McKechnie and Karen Pettigrew identify four general sources of theory: LIS, the Humanities, Social Sciences and Sciences. This approach makes it obvious that the predominant source of theory is the social sciences (45%), followed by LIS (30%), the sciences (19%) and the humanities (5%) - despite a predominance (almost 60%) of articles with science-related content. The authors discuss interdisciplinarity at some length, noting the great many non-LIS authors and theories which appear in the LIS literature, and the tendency for native LIS theories to go uncited outside of the discipline. Two other articles emphasize the ways in which theory has evolved. The more general of the two is Jack Glazier and Robert Grover's update of their classic 1986 Taxonomy of Theory in LIS. This article describes an elaborated version, called the "Circuits of Theory," offering definitions of a hierarchy of terms ranging from "world view" through "paradigm," "grand theory" and (ultimately) "symbols." Glazier & Grover's one-paragraph example of how theory was applied in their study of city managers is much too brief and is at odds with the emphasis on quantitative indicators of literature found in the rest of the volume.
The second article about the evolution of theory, Richard Smiraglia's "The progress of theory in knowledge organization," restricts itself to the history of thinking about cataloging and indexing. Smiraglia traces the development of theory from a pragmatic concern with "what works," to a reliance on empirical tests, to an emerging flirtation with historicist approaches to knowledge.
    There is only one article in the issue that claims to offer a theory of the scope discussed by McGrath, and I am sorry that it appears in this issue. Bor-Sheng Tsai's "Theory of Information Genetics" is an almost incomprehensible combination of four different "models" with names like "Möbius Twist" and "Clipping-Jointing." Tsai starts by posing the question "What is it that makes the `UNIVERSAL' information generating, representation, and transfer happen?" From this ungrammatical beginning, things get rapidly worse. Tsai makes side trips into the history of defining information, offers three-dimensional plots of citation data, a formula for "bonding relationships," hypothetical data on food consumption, sample pages from a web-based "experts directory," and dozens of citations from works which are peripheral to the discussion. The various sections of the article seem to have little to do with one another. I can't believe that the University of Illinois would publish something so poorly edited. Now I will turn to the dominant, "bibliometric" articles in this issue, in order of their appearance: Judit Bar-Ilan and Bluma Peritz write about "Informetric Theories and Methods for Exploring the Internet." Theirs is a survey of research on patterns of electronic publication, including different ways of sampling, collecting, and analyzing data on the Web. Their contribution to the "theory" theme lies in noting that some existing bibliometric laws apply to the Web. William Hood and Concepción Wilson's article, "Solving Problems ... Using Fuzzy Set Theory," demonstrates the widespread applicability of this mathematical tool for library-related problems, such as making decisions about the binding of documents, or improving document retrieval. Ronald Rousseau's piece on "Journal Evaluation" discusses the strengths and weaknesses of various indicators for determining impact factors and rankings for journals. His is an exceptionally well-written article that has everything to do with measurement but almost nothing to do with theory, to my way of thinking. "The Matthew Effect for Countries" is the topic of Manfred Bonitz's paper on citations to scientific publications, analyzed by nation of origin. His research indicates that publications from certain countries - such as Switzerland, Denmark, the USA, and the UK - receive more than the expected number of citations; correspondingly, some rather large countries like China receive far fewer than might be expected. Bonitz provides an extensive discussion of how the "MEC" measure came about and what it means, relating it to efficiency in scientific research. A bonus is his detour into the origins of the Matthew Effect in the Bible, and the subsequent popularization of the name by the sociologist Robert Merton. Wolfgang Glänzel's "Coauthorship patterns and trends in the sciences (1980-1998)" is, as the title implies, another citation analysis. He compares the number of authors on papers in three fields - biomedical research, chemistry, and mathematics - at six-year intervals. Among other conclusions, Glänzel notes that the percentage of publications with four or more authors has been growing in all three fields, and that multiauthored papers are more likely to be cited.
  9. Bade, D.: ¬The creation and persistence of misinformation in shared library catalogs : language and subject knowledge in a technological era (2002) 0.00
    0.0015361942 = product of:
      0.0046085827 = sum of:
        0.0046085827 = product of:
          0.013825748 = sum of:
            0.013825748 = weight(_text_:22 in 1858) [ClassicSimilarity], result of:
              0.013825748 = score(doc=1858,freq=2.0), product of:
                0.17867287 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051022716 = queryNorm
                0.07738023 = fieldWeight in 1858, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1858)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Date
    22. 9.1997 19:16:05
  10. Henderson, L.; Tallman, J.I.: Stimulated recall and mental models : tools for teaching and learning computer information literacy (2006) 0.00
    0.0014422988 = product of:
      0.0043268963 = sum of:
        0.0043268963 = product of:
          0.012980688 = sum of:
            0.012980688 = weight(_text_:online in 1717) [ClassicSimilarity], result of:
              0.012980688 = score(doc=1717,freq=2.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.08382809 = fieldWeight in 1717, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1717)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Footnote
    Review in: JASIST 58(2007) no.3, pp.456-457 (D. Cook): "In February 2006, the Educational Testing Service (ETS) announced the release of the brand-new core academic assessment of its Information and Communication Technology (ICT) Literacy Assessment. The core assessment is designed to assess the information literacy of high school students transitioning to higher education. Many of us already know ETS for some of its other assessment tools, like the SAT and GRE. But ETS's latest test comes on the heels of its 2005 release of an advanced level of its ICT Literacy Assessment for college students progressing to their junior and senior year of undergraduate studies. Neither test, ETS insists, is designed to be an entrance examination. Rather, they are packaged and promoted as diagnostic assessments. We are in the grips of the Information Age, where information literacy is a prized skill. Knowledge is power. However, information literacy is not merely creating flawless documents or slick PowerPoint presentations on a home PC. It is more than being able to send photos and text messages via cell phone. Instead, information literacy is gauged by one's ability to skillfully seek, access, and retrieve valid information from credible and reliable sources and to use that information appropriately. It involves strong online search strategies and advanced critical thinking skills. And, although it is not clear whether they seized the opportunity or inherited it by default, librarians are in the vanguard of teaching information literacy to the next generation of would-be power brokers.
  11. Willinsky, J.: ¬The access principle : the case for open access to research and scholarship (2006) 0.00
    0.0014422988 = product of:
      0.0043268963 = sum of:
        0.0043268963 = product of:
          0.012980688 = sum of:
            0.012980688 = weight(_text_:online in 298) [ClassicSimilarity], result of:
              0.012980688 = score(doc=298,freq=2.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.08382809 = fieldWeight in 298, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=298)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    An argument for extending the circulation of knowledge with new publishing technologies considers scholarly, economic, philosophical, and practical issues. Questions about access to scholarship go back farther than recent debates over subscription prices, rights, and electronic archives suggest. The great libraries of the past - from the fabled collection at Alexandria to the early public libraries of nineteenth-century America - stood as arguments for increasing access. In The Access Principle, John Willinsky describes the latest chapter in this ongoing story - online open access publishing by scholarly journals - and makes a case for open access as a public good. A commitment to scholarly work, writes Willinsky, carries with it a responsibility to circulate that work as widely as possible: this is the access principle. In the digital age, that responsibility includes exploring new publishing technologies and economic models to improve access to scholarly work. Wide circulation adds value to published work; it is a significant aspect of its claim to be knowledge. The right to know and the right to be known are inextricably mixed. Open access, argues Willinsky, can benefit both a researcher-author working in the best-equipped lab at a leading research university and a teacher struggling to find resources in an impoverished high school. Willinsky describes different types of access - the New England Journal of Medicine, for example, grants open access to issues six months after initial publication, and First Monday forgoes a print edition and makes its contents immediately accessible at no cost. He discusses the contradictions of copyright law, the reading of research, and the economic viability of open access. He also considers broader themes of public access to knowledge, human rights issues, lessons from publishing history, and "epistemological vanities." The debate over open access, writes Willinsky, raises crucial questions about the place of scholarly work in a larger world - and about the future of knowledge.
  12. Shaping the network society : the new role of civil society in cyberspace (2004) 0.00
    0.0014422988 = product of:
      0.0043268963 = sum of:
        0.0043268963 = product of:
          0.012980688 = sum of:
            0.012980688 = weight(_text_:online in 441) [ClassicSimilarity], result of:
              0.012980688 = score(doc=441,freq=2.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.08382809 = fieldWeight in 441, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=441)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Footnote
    Geert Lovink and Patrice Riemens explore the digital culture of Amsterdam to show how, despite the techno-social idealism of the early years of the public-sphere Digital City project, the culture ran into problems. Susan Finquelievich studies the practices of civic networks in Buenos Aires and Montevideo to demonstrate how local sociohistorical conditions have shaped the technology's development. Veran Matic focuses on the role of media in defending human rights in a hostile environment (former Yugoslavia). Media, she notes, need not necessarily be (or become) a tool of fascist forces, but can be used to generate resistance and to forge a democratic public sphere. Scott Robinson looks at Mexico's telecenter movement to argue that these cybercafes are likely to become an institution for the new Second World of immigrants and refugees, through socially relevant functions. Fiorella de Cindio looks at one of the world's most significant community networks, that of Milan. She demonstrates how local citizens have used information and communication technologies to build a viable, and potentially empowering, participatory public sphere in academia, computer-supported cooperative work, participatory design, and civil engagement (what she calls genes). The third section, "Building a New Public Sphere in Cyberspace," provides a series of suggestions and frameworks for the shaping of public space through information and communications technologies. Craig Calhoun argues that a global public sphere is indispensable to the formation of a global democracy. Public discourse can still fight commercialism and violence to form a more democratic civil society. Howard Rheingold, the great enthusiast of virtual worlds, performs an intricate mix of autobiographical reflection and speculation when he writes of the role of the new technologies. Rheingold, despite his fetishistic enthusiasm for technology and online community, is cautious when it comes to crucial issues such as the creation of democratic public spheres, arguing that we require a great deal more serious thinking on matters of ownership and control (over the technology). He argues that if citizens lose their freedom to communicate, then even the powerful potential of the Net to create electronic democracy will be a fatal illusion (p. 275). Nancy Kranich turns to public libraries as the site of a potential democratic society, arguing that, as sites of information dissemination, public libraries can become a commons for the exchange of ideas and social interaction. David Silver compares the Blacksburg Electronic Village (BEV) to the Seattle Community Network, the former funded by corporations and the state, the latter built essentially out of and through volunteer efforts. Silver, in characteristic style, looks at the historical archaeologies of the networks to show how sociohistorical contexts shape certain kinds of public spheres (and public discourse), going on to ask how these networks can overcome these contexts to achieve their original goals. He warns that we need to uncover the histories of such networks because they inform the kinds of interactions of communities that exist within them. Douglas Morris analyzes the Independent Media Centre (IMC) Movement of antiglobalization activists to argue that alternative viewpoints and ideological differences can be aired, debated, and appropriated through the new technologies in order to fight corporate and commercial forces.
  13. Albrechtsen, H.: ISKO news (2006) 0.00
    0.0014422988 = product of:
      0.0043268963 = sum of:
        0.0043268963 = product of:
          0.012980688 = sum of:
            0.012980688 = weight(_text_:online in 690) [ClassicSimilarity], result of:
              0.012980688 = score(doc=690,freq=2.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.08382809 = fieldWeight in 690, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=690)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Content
    German ISKO
    The German ISKO held its 10th conference in Vienna from July 3rd to 5th, 2006, just before the International ISKO Conference. Main themes were Compatibility and Heterogeneity, Ethics, and the Future of Knowledge Organization. The program contained some English-language lectures and a tutorial on Ontologies. The German proceedings (Fortschritte in der Wissensorganisation 10) will be published in 2007 by Ergon, together with some remaining papers of the international conference. The next German conference will be held in November 2007 in Konstanz, with a focus on sustainability. Jörn Sieglerschmidt will be the local organizer in 2007 as well as the new German ISKO treasurer. - H. Peter OHLY
    Extensions and Corrections to the UDC, 28 (2006)
    The next issue of Extensions and Corrections (E&C), to be published by the end of 2006, will bring to the UDC community important revisions and additions to the schedule, notably an extensive revision of parts of the Area Table concerning some countries of east and southeast Asia and Africa, and the expansion of Class 2 for Islam, which provides a very rich structure and vocabulary for one of the main religions of the world, thus enhancing UDC in an important subject area of worldwide application. Through the contribution of VINITI's collaborators, it was also possible to advance revision work in the areas of Mathematics and Physics, also published in this volume. The ongoing work on a proposal for the revision of Class 61 Medicine continued to receive the expert attention of Professor Nancy Williamson, and this year a proposal for the digestive system is included in E&C. Finally, An Extended Table of Common Auxiliaries (Except Place), compiled by G. Robinson, is presented as a special Annex. Although this is not part of the UDC Master Reference File, it is intended as an authoritative source of all that is currently valid in Tables 1a to 1d and 1f to 1k, including details from older editions, at the 'full' level, that have never been cancelled. This follows the same approach as the Extended Place Table (Table 1e) published last year, together with Extensions and Corrections 27 (2005). Additionally, this issue will feature a set of articles of interest to classification experts and users. Topics include an exploration of mapping the UDC to DDC, interfaces to classification and UDC application in online catalogs, and information on a new editorial support system being developed for UDC.
  14. Markoff, J.: Researchers announce advance in image-recognition software (2014) 0.00
    0.0014422988 = product of:
      0.0043268963 = sum of:
        0.0043268963 = product of:
          0.012980688 = sum of:
            0.012980688 = weight(_text_:online in 1875) [ClassicSimilarity], result of:
              0.012980688 = score(doc=1875,freq=2.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.08382809 = fieldWeight in 1875, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1875)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Content
    "Until now, so-called computer vision has largely been limited to recognizing individual objects. The new software, described on Monday by researchers at Google and at Stanford University, teaches itself to identify entire scenes: a group of young men playing Frisbee, for example, or a herd of elephants marching on a grassy plain. The software then writes a caption in English describing the picture. Compared with human observations, the researchers found, the computer-written descriptions are surprisingly accurate. The advances may make it possible to better catalog and search for the billions of images and hours of video available online, which are often poorly described and archived. At the moment, search engines like Google rely largely on written language accompanying an image or video to ascertain what it contains. "I consider the pixel data in images and video to be the dark matter of the Internet," said Fei-Fei Li, director of the Stanford Artificial Intelligence Laboratory, who led the research with Andrej Karpathy, a graduate student. "We are now starting to illuminate it." Dr. Li and Mr. Karpathy published their research as a Stanford University technical report. The Google team published their paper on arXiv.org, an open source site hosted by Cornell University.
  15. Metadata : a cataloger's primer (2005) 0.00
    0.0014328228 = product of:
      0.004298468 = sum of:
        0.004298468 = product of:
          0.012895404 = sum of:
            0.012895404 = weight(_text_:retrieval in 133) [ClassicSimilarity], result of:
              0.012895404 = score(doc=133,freq=2.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.08355226 = fieldWeight in 133, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=133)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Footnote
    - Caplan, Priscilla. 2003. Metadata fundamentals for all librarians. Chicago: ALA Editions. - Gorman, G.E. and Daniel G. Dorner, eds. 2004. Metadata applications and management. International yearbook of library and information management 2003/2004. Lanham, Md.: Scarecrow Press. - Intner, Sheila S., Susan S. Lazinger and Jean Weihs. 2006. Metadata and its impact on libraries. Westport, Conn.: Libraries Unlimited. - Haynes, David. 2004. Metadata for information management and retrieval. London: Facet. - Hillmann, Diane I. and Elaine L. Westbrooks, eds. 2004. Metadata in practice. Chicago: American Library Association. Metadata: A Cataloger's Primer compares favorably with these texts, and like them has its own special focus and contribution to make to the introductory-level literature on metadata. Although the focus, purpose, and nature of the contents are different, this volume bears a similarity to the Hillmann and Westbrooks text insofar as it consists of a collection of papers written by various authors tied together by a general, common theme. In conclusion, this volume makes a significant contribution to the handful of books that attempt to present introductory-level information about metadata to catalog librarians and students. Although it does not serve fully satisfactorily as a stand-alone textbook for an LIS course or as a single unified and comprehensive introduction for catalogers, it, like the others mentioned above, could serve as an excellent supplementary LIS course text, and it is highly worthwhile reading for working catalogers who want to learn more about metadata, as well as librarians and instructors already well-versed in metadata topics."
  16. ¬La interdisciplinariedad y la transdisciplinariedad en la organización del conocimiento científico : actas del VIII Congreso ISKO-España, León, 18, 19 y 20 de Abril de 2007 : Interdisciplinarity and transdisciplinarity in the organization of scientific knowledge (2007) 0.00
    0.0014328228 = product of:
      0.004298468 = sum of:
        0.004298468 = product of:
          0.012895404 = sum of:
            0.012895404 = weight(_text_:retrieval in 1150) [ClassicSimilarity], result of:
              0.012895404 = score(doc=1150,freq=2.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.08355226 = fieldWeight in 1150, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1150)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Content
    Diez Diez, Á., Blanes Peiro, J.J., Rodríguez Sedano, F.J.: Diseño e implementación de un sistema de gestión de la actividad docente; Simon, J.: Probing concepts: knowledge and information as boundary objects in interdisciplinary discourse; Santiago Bufrem, L., Breda, S.M., Viana Sorribas, T.: The presence of logic in the domain of knowledge organization: interdisciplinary aspects of college curricula; Pluzhenskaia, M.: Research collaboration of Library and Information Science (LIS) schools' faculty members with LIS and non-LIS advanced degrees: multidisciplinary and interdisciplinary trends; Agrasso Neto, M., França de Abreu, A.: Modelo de servicio de referencia e información para portal de conocimiento de grupos de investigación; Ayuso García, M.D., Martínez Navarro, V.: Alfabetización informacional y servicios de referencia virtual; Barrionuevo Almuzara, L., Marsá Vila, M.: La biblioteca universitaria de Léon: pasos hacia la convergencia Europea; Lúcia Terra, A., Sá, S.: La recuperación de la información en la biblioteca escolar: la necesidad de competencias transdisciplinares; Vangari, V.M.: User-centred systems of information retrieval in the digital era; De Fátima Loureiro, M.: Information organization and visualization in cyberspace: interdisciplinary study based on concept maps
  17. Slavic, A.: Mapping intricacies : UDC to DDC (2010) 0.00
    0.0014328228 = product of:
      0.004298468 = sum of:
        0.004298468 = product of:
          0.012895404 = sum of:
            0.012895404 = weight(_text_:retrieval in 3370) [ClassicSimilarity], result of:
              0.012895404 = score(doc=3370,freq=2.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.08355226 = fieldWeight in 3370, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=3370)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Content
    Another challenge appears when, e.g., mapping Dewey class 890 Literatures of other specific languages and language families, which does not make sense in UDC, in which all languages and literatures have equal status. Standard UDC schedules do not distinguish a selection of preferred literatures from other literatures. In principle, UDC does not allow classes entitled 'others' that have no defined semantic content. If entities are subdivided and there is no provision for an item outside the listed subclasses, this item is subsumed under a top class or a broader class where all unspecified or general members of that class may be expected. If specification is needed, it can be achieved by adding an alphabetical extension to the broader class. Here we have to find and list in the UDC Summary all literatures that are 'unpreferred', i.e. lumped into the 890 classes, and map them again as many-to-one specific-to-broader matches. The example below illustrates another interesting case. Dewey class 061 and UDC class 06 cover roughly the same semantic field, but in the subdivisions the Dewey Summaries list combinations of subject and place and, as an enumerative classification, provide ready-made numbers for the combinations of place most common in an average (American?) library. This is a frequent approach in schemes created with the physical arrangement of books, i.e. library shelves, in mind. UDC, designed as an indexing language for information retrieval, keeps subject and place in separate tables and allows any concept of place, e.g. (7) North America, to be used in combination with any subject, as these may coincide in documents. Thus combinations such as Newspapers in North America or Organizations in North America would not be offered as ready-made numbers. There is no selection of 'preferred' or 'most needed' countries, languages, or cultures in the standard UDC edition: <Table>
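    The mapping logic just described lends itself to a compact sketch. The Python fragment below is purely illustrative and not taken from the actual UDC Summary mapping data: the notations, captions, and SKOS-style match values are chosen by way of example. It records a few many-to-one specific-to-broader matches of the 890 kind and shows how UDC synthesizes a subject-place combination that Dewey enumerates as a ready-made number.

      # Illustrative UDC-to-DDC crosswalk entries (hypothetical selection).
      from dataclasses import dataclass

      @dataclass
      class Mapping:
          udc: str    # source UDC notation
          ddc: str    # target DDC notation
          match: str  # SKOS-style match type: "exact", "broader", "narrower"

      crosswalk = [
          Mapping("821.511.111", "890", "broader"),  # Finnish literature
          Mapping("821.521",     "890", "broader"),  # Japanese literature
          Mapping("821.111",     "820", "exact"),    # English literature
      ]

      def ddc_targets(udc_notation):
          """All DDC targets recorded for one UDC notation."""
          return [(m.ddc, m.match) for m in crosswalk if m.udc == udc_notation]

      # UDC builds 'organizations in North America' synthetically from
      # subject 06 plus the place auxiliary (7); Dewey 061 enumerates it.
      udc_number = "06" + "(7)"

      print(ddc_targets("821.521"))  # [('890', 'broader')]
      print(udc_number)              # 06(7)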
  18. Vocht, L. De: Exploring semantic relationships in the Web of Data : Semantische relaties verkennen in data op het web (2017) 0.00
    0.0014328228 = product of:
      0.004298468 = sum of:
        0.004298468 = product of:
          0.012895404 = sum of:
            0.012895404 = weight(_text_:retrieval in 4232) [ClassicSimilarity], result of:
              0.012895404 = score(doc=4232,freq=2.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.08355226 = fieldWeight in 4232, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=4232)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    This PhD thesis describes how to effectively explore linked data on the Web. The main focus is on scenarios where users want to discover relationships between resources rather than finding out more about something specific. Searching for a specific document or piece of information fits in the theoretical framework of information retrieval; discovering relationships is associated with exploratory search. Exploratory search goes beyond 'looking up something' when users are seeking more detailed understanding, further investigation, or navigation of the initial search results. The ideas behind exploratory search and querying linked data merge when it comes to the way knowledge is represented and indexed by machines - how data is structured and stored for optimal searchability. Queries and information should be aligned so that searches also reveal connections between results. This implies that they take into account the same semantic entities, relevant at that moment. To realize this, we research three techniques that are evaluated one by one in an experimental set-up to assess how well they succeed in their goals. In the end, the techniques are applied to a practical use case that focuses on forming a bridge between the Web and the use of digital libraries in scientific research. Our first technique focuses on the interactive visualization of search results. Linked data resources can be brought in relation with each other at will. This leads to complex and diverse graph structures. Our technique facilitates navigation and supports a workflow that starts from a broad overview of the data, allows narrowing down to the desired level of detail, and then broadening out again. To validate the flow, two visualizations were implemented and presented to test users. The users judged the usability of the visualizations, how the visualizations fit in the workflow, and to what degree their features seemed useful for the exploration of linked data.
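    The relationship discovery that the abstract describes can be sketched in a few lines. The fragment below is a minimal illustration and not code from the thesis: it assumes the rdflib and networkx libraries and a handful of invented example resources, and it finds a path connecting two resources rather than looking either of them up.

      # Minimal sketch: explore how two linked-data resources are related.
      import networkx as nx
      import rdflib

      TTL = """
      @prefix ex: <http://example.org/> .
      ex:Gent    ex:locatedIn      ex:Belgium .
      ex:UGent   ex:locatedIn      ex:Gent .
      ex:DeVocht ex:affiliatedWith ex:UGent .
      ex:Thesis  ex:author         ex:DeVocht .
      """

      g = rdflib.Graph()
      g.parse(data=TTL, format="turtle")

      # Treat every triple as an undirected edge labelled with its predicate.
      G = nx.Graph()
      for s, p, o in g:
          G.add_edge(str(s), str(o), predicate=str(p))

      # Discover how the thesis and Belgium are connected.
      path = nx.shortest_path(G, "http://example.org/Thesis",
                                 "http://example.org/Belgium")
      print(" -> ".join(path))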
  19. XML in libraries (2002) 0.00
    0.001153839 = product of:
      0.003461517 = sum of:
        0.003461517 = product of:
          0.01038455 = sum of:
            0.01038455 = weight(_text_:online in 3100) [ClassicSimilarity], result of:
              0.01038455 = score(doc=3100,freq=2.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.067062475 = fieldWeight in 3100, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.015625 = fieldNorm(doc=3100)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Footnote
    Rez. in: JASIST 55(2004) no.14, S.1304-1305 (Z. Holbrooks): "The eXtensible Markup Language (XML) and its family of enabling technologies (XPath, XPointer, XLink, XSLT, et al.) were the new "new thing" only a couple of years ago. Happily, XML is now a W3C standard, and its enabling technologies are rapidly proliferating and maturing. Together, they are changing the way data is handled on the Web, how legacy data is accessed and leveraged in corporate archives, and offering the Semantic Web community a powerful toolset. Library and information professionals need a basic understanding of what XML is and what its impacts will be on the library community as content vendors and publishers convert to the new standards. Norman Desmarais aims to provide librarians with an overview of XML and some potential library applications. The ABCs of XML contains the useful basic information that most general XML works cover. It is addressed to librarians, as evidenced by the occasional reference to periodical vendors, MARC, and OPACs. However, librarians without SGML, HTML, database, or programming experience may find the work daunting. The snippets of code, most of them incomplete and unaccompanied by screenshots illustrating the result of the code's execution, obscure more often than they enlighten. A single code sample (p. 91, a book purchase order) is immediately recognizable and sensible. There are no figures, illustrations, or screenshots. Subsection headings are used conservatively. Readers are confronted with page after page of unbroken technical text and occasionally oddly formatted text (in some of the code samples). The author concentrates on commercial products and projects. Library and agency initiatives, for example the National Institutes of Health HL-7 and the U.S. Department of Education's GEM project, are notable for their absence. The Library of Congress USMARC to SGML effort is discussed in chapter 1, which covers the relationship of XML to its parent SGML, the XML processor, and document type definitions (DTDs), using MARC as its illustrative example. Chapter 3 addresses the stylesheet options for XML, including DSSSL, CSS, and XSL. The Document Style Semantics and Specification Language (DSSSL) was created for use with SGML and pruned into DSSSL-Lite and further (DSSSL-online). Cascading Style Sheets (CSS) were created for use with HTML. Extensible Stylesheet Language (XSL) is a further revision (and extension) of DSSSL-online, specifically for use with XML. Discussion of aural stylesheets and Synchronized Multimedia Integration Language (SMIL) rounds out the chapter.
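    The stylesheet mechanics that chapter 3 surveys are easy to demonstrate. The sketch below is not from the book under review: it uses the lxml library and a hypothetical order record, echoing the purchase-order sample the reviewer mentions, to show an XSLT stylesheet turning an XML record into simple HTML.

      # Minimal sketch: apply an XSLT stylesheet to an XML record with lxml.
      from lxml import etree

      record = etree.fromstring(
          b"<order><title>The ABCs of XML</title><qty>2</qty></order>")

      stylesheet = etree.fromstring(b"""
      <xsl:stylesheet version="1.0"
          xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
        <xsl:template match="/order">
          <p><xsl:value-of select="title"/> (copies:
             <xsl:value-of select="qty"/>)</p>
        </xsl:template>
      </xsl:stylesheet>""")

      transform = etree.XSLT(stylesheet)
      print(str(transform(record)))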
  20. Learning XML (2003) 0.00
    0.001153839 = product of:
      0.003461517 = sum of:
        0.003461517 = product of:
          0.01038455 = sum of:
            0.01038455 = weight(_text_:online in 3101) [ClassicSimilarity], result of:
              0.01038455 = score(doc=3101,freq=2.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.067062475 = fieldWeight in 3101, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.015625 = fieldNorm(doc=3101)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Footnote
    Rez. in: JASIST 55(2004) no.14, S.1304-1305 (Z. Holbrooks); the text of this review is identical to the one reproduced under no.19, XML in libraries.
