Search (48 results, page 1 of 3)

  • language_ss:"e"
  • type_ss:"m"
  • type_ss:"s"
  • year_i:[2000 TO 2010}
  1. Innovations and advanced techniques in systems, computing sciences and software engineering (2008) 0.02
    0.017135125 = product of:
      0.0685405 = sum of:
        0.0685405 = product of:
          0.137081 = sum of:
            0.137081 = weight(_text_:software in 4319) [ClassicSimilarity], result of:
              0.137081 = score(doc=4319,freq=24.0), product of:
                0.18056466 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.045514934 = queryNorm
                0.75917953 = fieldWeight in 4319, product of:
                  4.8989797 = tf(freq=24.0), with freq of:
                    24.0 = termFreq=24.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4319)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
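The nested figures above are Lucene's "explain" output for ClassicSimilarity (TF-IDF) scoring. A minimal Python sketch, using only the components reported in the tree, reproduces this entry's score of 0.02 (0.017135125 before rounding):

```python
import math

# Constants copied from the explain tree for entry 1 (doc 4319).
FREQ = 24.0              # termFreq of "software" in the field
IDF = 3.9671519          # idf(docFreq=2274, maxDocs=44218)
QUERY_NORM = 0.045514934
FIELD_NORM = 0.0390625   # per-field length normalization

tf = math.sqrt(FREQ)                        # ClassicSimilarity tf = sqrt(freq)
field_weight = tf * IDF * FIELD_NORM        # -> 0.75917953
query_weight = IDF * QUERY_NORM             # -> 0.18056466
term_score = query_weight * field_weight    # -> 0.137081

# coord(1/2) and coord(1/4): only one of two inner clauses and one of
# four outer clauses matched, so the term score is scaled by 0.5 * 0.25.
score = term_score * 0.5 * 0.25
print(score)   # ~0.017135125, shown rounded as 0.02 in the result list
```

Every number in the sketch comes from the tree itself; only the variable names are ours.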
    
    Abstract
    Innovations and Advanced Techniques in Systems, Computing Sciences and Software Engineering includes a set of rigorously reviewed world-class manuscripts addressing and detailing state-of-the-art research projects in the areas of Computer Science, Software Engineering, Computer Engineering, and Systems Engineering and Sciences. Innovations and Advanced Techniques in Systems, Computing Sciences and Software Engineering includes selected papers from the conference proceedings of the International Conference on Systems, Computing Sciences and Software Engineering (SCSS 2007), which was part of the International Joint Conferences on Computer, Information and Systems Sciences and Engineering (CISSE 2007).
    Content
    Contents: Image and Pattern Recognition: Compression, Image processing, Signal Processing Architectures, Signal Processing for Communication, Signal Processing Implementation, Speech Compression, and Video Coding Architectures. Languages and Systems: Algorithms, Databases, Embedded Systems and Applications, File Systems and I/O, Geographical Information Systems, Kernel and OS Structures, Knowledge Based Systems, Modeling and Simulation, Object Based Software Engineering, Programming Languages, and Programming Models and Tools. Parallel Processing: Distributed Scheduling, Multiprocessing, Real-time Systems, Simulation Modeling and Development, and Web Applications. New trends in computing: Computers for People of Special Needs, Fuzzy Inference, Human Computer Interaction, Incremental Learning, Internet-based Computing Models, Machine Intelligence, Natural Language Processing, Neural Networks, and Online Decision Support System
    LCSH
    Software Engineering/Programming and Operating Systems
    Software engineering
    RSWK
    Computerarchitektur / Software Engineering / Telekommunikation / Online-Publikation
    Subject
    Computerarchitektur / Software Engineering / Telekommunikation / Online-Publikation
    Software Engineering/Programming and Operating Systems
    Software engineering
  2. Software for Indexing (2003) 0.01
    0.01399077 = product of:
      0.05596308 = sum of:
        0.05596308 = product of:
          0.11192616 = sum of:
            0.11192616 = weight(_text_:software in 2294) [ClassicSimilarity], result of:
              0.11192616 = score(doc=2294,freq=64.0), product of:
                0.18056466 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.045514934 = queryNorm
                0.6198675 = fieldWeight in 2294, product of:
                  8.0 = tf(freq=64.0), with freq of:
                    64.0 = termFreq=64.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=2294)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
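The tf and idf figures in these trees follow ClassicSimilarity's standard formulas, tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)). A quick check against the reported values (the function names are ours):

```python
import math

# ClassicSimilarity's two core statistics, checked against the values
# reported in these explain trees.
def tf(freq: float) -> float:
    return math.sqrt(freq)  # sublinear: 64 occurrences carry only 8x the weight

def idf(doc_freq: int, max_docs: int) -> float:
    return 1.0 + math.log(max_docs / (doc_freq + 1))

print(tf(24.0))            # 4.8989797... (entry 1)
print(tf(64.0))            # 8.0          (entry 2)
print(idf(2274, 44218))    # 3.9671519... ("software")
print(idf(3622, 44218))    # 3.5018296... ("22")
```

The sqrt dampening explains why entry 2, with 64 occurrences of "software", still scores below entry 1: its much smaller fieldNorm (a longer field) outweighs the higher raw frequency.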
    
    Footnote
    Rev. in: Knowledge organization 30(2003) no.2, pp.115-116 (C. Jacobs): "This collection of articles by indexing practitioners, software designers and vendors is divided into five sections: Dedicated Software, Embedded Software, Online and Web Indexing Software, Database and Image Software, and Voice-activated, Automatic, and Machine-aided Software. This diversity is its strength. Part 1 is introduced by two chapters on choosing dedicated software, highlighting the issues involved and providing tips on evaluating requirements. The second chapter includes a fourteen-page chart that analyzes the attributes of Authex Plus, three versions of CINDEX 1.5, MACREX 7, two versions of SKY Index (5.1 and 6.0) and wINDEX. The lasting value in this chart is its utility in making the prospective user aware of the various attributes/capabilities that are possible and that should be considered. The following chapters consist of 16 testimonials for these software packages, completed by a final chapter on specialized/customized software. The point is made that if a particular software function could increase your efficiency, it can probably be created. The chapters in Part 2, Embedded Software, go into a great deal more detail about how the programs work, and are less reviews than illustrations of functionality. Perhaps this is because they are not really stand-alones, but are functions within, or add-ons used with, larger word processing or publishing programs. The software considered are Microsoft Word, FrameMaker, PageMaker, IndexTension 3.1.5 that is used with QuarkXPress, and Index Tools Professional and IXgen that are used with FrameMaker. The advantages and disadvantages of embedded indexing are made very clear, but the actual illustrations are difficult to follow if one has not worked at all with embedded software. Nonetheless, the section is valuable as it highlights issues and provides pointers and solutions to embedded indexing problems.
    Part 3, Online and Web Indexing Software, opens with a chapter in which the functionalities of HTML/Prep, HTML Indexer, and RoboHELP HTML Edition are compared. The following three chapters look at them individually. This section helps clarify the basic types of non-database web indexing - that used for back-of-the-book style indexes, and that used for online help indexes. The first chapter of Part 4, Database and Image Software, begins with a good discussion of what database indexing is, but fails to carry through with any listing of general characteristics, problems and attributes that should be considered when choosing database indexing software. It does include the results of an informal survey on the Yahoogroups database indexing site, as well as three short case studies on database indexing projects. The survey provides interesting information about freelancing, but it is not very useful if you are trying to gather information about different software. For example, the most common type of software used by those surveyed turns out to be word-processing software. This seems an odd/awkward choice, and it would have been helpful to know how and why the non-specialized software is being used. The survey serves as a snapshot of a particular segment of database indexing practice, but is not helpful if you are thinking about purchasing, adapting, or commissioning software. The three case studies give an idea of the complexity of database indexing, and there is a helpful bibliography.
    A chapter on image indexing starts with a useful discussion of the elements of bibliographic description needed for visual materials and of the variations in the functioning and naming of functions in different software packages. Sample features are discussed in light of four different software systems: MAVIS, Convera Screening Room, CONTENTdm, and Virage speech and pattern recognition programs. The chapter concludes with an overview of what one has to consider when choosing a system. The last chapter in this section is an oddball one on creating a back-of-the-book index using Microsoft Excel. The author warns: "It is not pretty, and it is not recommended" (p.209). A curiosity, but it should have been included as a counterpoint in the first part, not as part of the database indexing section. The final section begins with an excellent article on voice recognition software (Dragon Naturally Speaking Preferred), followed by a look at "automatic indexing" through a critique of Sonar Bookends Automatic Indexing Generator. The final two chapters deal with Data Harmony's Machine Aided Indexer; one of them refers specifically to a news content indexing system. In terms of scope, this reviewer would have liked to see thesaurus management software included, since thesaurus management and the integration of thesauri with database indexing software are common and time-consuming concerns. There are also a few editorial glitches, such as the placement of the oddball article and inconsistent uses of fonts and caps (e.g., VIRAGE and Virage), but achieving consistency with this many authors is, indeed, a difficult task. More serious is the fact that the index is inconsistent. It reads as if authors submitted their own keywords which were then harmonized, so that the level of indexing varies by chapter. For example, there is an entry for "controlled vocabulary" (p.265) (singular) with one locator, no cross-references.
There is an entry for "thesaurus software" (p.274) with two locators, plus a separate one for "Thesaurus Master" (p.274) with three locators. There are also references to thesauri/controlled vocabularies/taxonomies that are not mentioned in the index (e.g., the section Thesaurus management on p.204). This is sad. All too often indexing texts have poor indexes, I suppose because we are as prone to having to work under time pressures as the rest of the authors and editors in the world. But a good index that meets basic criteria should be a highlight in any book related to indexing. Overall this is a useful, if uneven, collection of articles written over the past few years. Because of the great variation between articles both in subject and in approach, there is something for everyone. The collection will be interesting to anyone who wants to be aware of how indexing software works and what it can do. I also definitely recommend it for information science teaching collections, since the explanations of the software carry implicit in them descriptions of how the indexing process itself is approached. However, the book's utility as a guide to purchasing choices is limited because of the unevenness; the vendor-written articles and testimonials are interesting and can certainly be helpful, but there are not nearly enough objective reviews. This is not a straight listing and comparison of software packages, but it deserves wide circulation since it presents an overall picture of the state of indexing software used by freelancers."
  3. Information science in transition (2009) 0.01
    0.010849538 = product of:
      0.043398153 = sum of:
        0.043398153 = sum of:
          0.02798154 = weight(_text_:software in 634) [ClassicSimilarity], result of:
            0.02798154 = score(doc=634,freq=4.0), product of:
              0.18056466 = queryWeight, product of:
                3.9671519 = idf(docFreq=2274, maxDocs=44218)
                0.045514934 = queryNorm
              0.15496688 = fieldWeight in 634, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.9671519 = idf(docFreq=2274, maxDocs=44218)
                0.01953125 = fieldNorm(doc=634)
          0.015416614 = weight(_text_:22 in 634) [ClassicSimilarity], result of:
            0.015416614 = score(doc=634,freq=2.0), product of:
              0.15938555 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045514934 = queryNorm
              0.09672529 = fieldWeight in 634, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.01953125 = fieldNorm(doc=634)
      0.25 = coord(1/4)
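Unlike the two entries above, this tree accumulates two matching query clauses (`_text_:software` and `_text_:22`) before the outer coord(1/4) factor is applied. A sketch of that accumulation, with all constants taken from the tree (the helper name is ours):

```python
import math

QUERY_NORM = 0.045514934  # shared across the whole query

def term_score(freq, idf, field_norm):
    # ClassicSimilarity: (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)
    return (idf * QUERY_NORM) * (math.sqrt(freq) * idf * field_norm)

# Both matching clauses for entry 3 (doc 634), constants from the tree.
software = term_score(4.0, 3.9671519, 0.01953125)  # -> 0.02798154
year22   = term_score(2.0, 3.5018296, 0.01953125)  # -> 0.015416614

# The matched clauses are summed, then the outer coord(1/4) applies.
score = (software + year22) * 0.25
print(score)   # ~0.010849538, shown rounded as 0.01 in the result list
```

Note that idf enters the product twice, once via queryWeight and once via fieldWeight, which is why rarer terms dominate these scores so strongly.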
    
    Abstract
    Are we at a turning point in digital information? The expansion of the internet was unprecedented; search engines dealt with it in the only way possible - scan as much as they could and throw it all into an inverted index. But now search engines are beginning to experiment with deep web searching and attention to taxonomies, and the semantic web is demonstrating how much more can be done with a computer if you give it knowledge. What does this mean for the skills and focus of the information science (or sciences) community? Should information designers and information managers work more closely to create computer based information systems for more effective retrieval? Will information science become part of computer science and does the rise of the term informatics demonstrate the convergence of information science and information technology - a convergence that must surely develop in the years to come? Issues and questions such as these are reflected in this monograph, a collection of essays written by some of the most pre-eminent contributors to the discipline. These peer reviewed perspectives capture insights into advances in, and facets of, information science, a profession in transition. 
With an introduction from Jack Meadows, the key papers are: Meeting the challenge, by Brian Vickery; The developing foundations of information science, by David Bawden; The last 50 years of knowledge organization, by Stella G Dextre Clarke; On the history of evaluation in IR, by Stephen Robertson; The information user, by Tom Wilson; The sociological turn in information science, by Blaise Cronin; From chemical documentation to chemoinformatics, by Peter Willett; Health informatics, by Peter A Bath; Social informatics and sociotechnical research, by Elisabeth Davenport; The evolution of visual information retrieval, by Peter Enser; Information policies, by Elizabeth Orna; Disparity in professional qualifications and progress in information handling, by Barry Mahon; Electronic scholarly publishing and open access, by Charles Oppenheim; Social software: fun and games, or business tools? by Wendy A Warr; and, Bibliometrics to webometrics, by Mike Thelwall. This monograph previously appeared as a special issue of the "Journal of Information Science", published by Sage. Reproduced here as a monograph, this important collection of perspectives on a skill set in transition from a prestigious line-up of authors will now be available to information studies students worldwide and to all those working in the information science field.
    Content
    Contents: Fifty years of UK research in information science - Jack Meadows / Smoother pebbles and the shoulders of giants: the developing foundations of information science - David Bawden / The last 50 years of knowledge organization: a journey through my personal archives - Stella G. Dextre Clarke / On the history of evaluation in IR - Stephen Robertson / The information user: past, present and future - Tom Wilson / The sociological turn in information science - Blaise Cronin / From chemical documentation to chemoinformatics: 50 years of chemical information science - Peter Willett / Health informatics: current issues and challenges - Peter A. Bath / Social informatics and sociotechnical research - a view from the UK - Elisabeth Davenport / The evolution of visual information retrieval - Peter Enser / Information policies: yesterday, today, tomorrow - Elizabeth Orna / The disparity in professional qualifications and progress in information handling: a European perspective - Barry Mahon / Electronic scholarly publishing and Open Access - Charles Oppenheim / Social software: fun and games, or business tools? - Wendy A. Warr / Bibliometrics to webometrics - Mike Thelwall / How I learned to love the Brits - Eugene Garfield
    Date
    22. 2.2013 11:35:35
  4. International yearbook of library and information management : 2001/2002 information services in an electronic environment (2001) 0.01
    0.01079163 = product of:
      0.04316652 = sum of:
        0.04316652 = product of:
          0.08633304 = sum of:
            0.08633304 = weight(_text_:22 in 1381) [ClassicSimilarity], result of:
              0.08633304 = score(doc=1381,freq=2.0), product of:
                0.15938555 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045514934 = queryNorm
                0.5416616 = fieldWeight in 1381, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1381)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    25. 3.2003 13:22:23
  5. Between data science and applied data analysis : Proceedings of the 26th Annual Conference of the Gesellschaft für Klassifikation e.V., University of Mannheim, July 22-24, 2002 (2003) 0.01
    0.0092499675 = product of:
      0.03699987 = sum of:
        0.03699987 = product of:
          0.07399974 = sum of:
            0.07399974 = weight(_text_:22 in 4606) [ClassicSimilarity], result of:
              0.07399974 = score(doc=4606,freq=2.0), product of:
                0.15938555 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045514934 = queryNorm
                0.46428138 = fieldWeight in 4606, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4606)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
  6. Survey of text mining : clustering, classification, and retrieval (2004) 0.01
    0.008567562 = product of:
      0.03427025 = sum of:
        0.03427025 = product of:
          0.0685405 = sum of:
            0.0685405 = weight(_text_:software in 804) [ClassicSimilarity], result of:
              0.0685405 = score(doc=804,freq=6.0), product of:
                0.18056466 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.045514934 = queryNorm
                0.37958977 = fieldWeight in 804, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=804)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
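The fieldNorm values that recur through these trees (0.109375, 0.046875, 0.0390625, 0.01953125, ...) are Lucene's length normalization, lengthNorm = 1/sqrt(numTerms), stored in a single byte at index time; every value reported here is an exact multiple of 1/256. Inverting the norm gives a rough field length, approximate because of the byte quantization:

```python
# All fieldNorm values observed in the explain trees on this page.
norms = (0.109375, 0.09375, 0.078125, 0.046875, 0.0390625,
         0.03125, 0.02734375, 0.01953125)

for norm in norms:
    # Each value is an exact multiple of 1/256 (byte-encoded norm).
    assert (norm * 256).is_integer()
    # Invert lengthNorm = 1/sqrt(numTerms) for a rough field length.
    approx_terms = (1.0 / norm) ** 2
    print(f"fieldNorm={norm} ({norm * 256:.0f}/256) -> ~{approx_terms:.0f} terms")
```

So entry 2's low fieldNorm of 0.01953125 corresponds to a field of roughly 2,600 terms, which is why its 64 occurrences of "software" still rank it below the much shorter entry 1.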
    
    Abstract
    Extracting content from text continues to be an important research problem for information processing and management. Approaches to capture the semantics of text-based document collections may be based on Bayesian models, probability theory, vector space models, statistical models, or even graph theory. As the volume of digitized textual media continues to grow, so does the need for designing robust, scalable indexing and search strategies (software) to meet a variety of user needs. Knowledge extraction or creation from text requires systematic yet reliable processing that can be codified and adapted for changing needs and environments. This book will draw upon experts in both academia and industry to recommend practical approaches to the purification, indexing, and mining of textual information. It will address document identification, clustering and categorizing documents, cleaning text, and visualizing semantic models of text.
    Classification
    ST 270 Informatik / Monographien / Software und -entwicklung / Datenbanken, Datenbanksysteme, Data base management, Informationssysteme
    RVK
    ST 270 Informatik / Monographien / Software und -entwicklung / Datenbanken, Datenbanksysteme, Data base management, Informationssysteme
  7. ¬The Semantic Web : research and applications ; second European Semantic Web Conference, ESWC 2005, Heraklion, Crete, Greece, May 29 - June 1, 2005 ; proceedings (2005) 0.01
    0.008394462 = product of:
      0.03357785 = sum of:
        0.03357785 = product of:
          0.0671557 = sum of:
            0.0671557 = weight(_text_:software in 439) [ClassicSimilarity], result of:
              0.0671557 = score(doc=439,freq=4.0), product of:
                0.18056466 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.045514934 = queryNorm
                0.3719205 = fieldWeight in 439, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.046875 = fieldNorm(doc=439)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    LCSH
    Software engineering
    Subject
    Software engineering
  8. Creating Web-accessible databases : case studies for libraries, museums, and other nonprofits (2001) 0.01
    0.007708307 = product of:
      0.030833228 = sum of:
        0.030833228 = product of:
          0.061666455 = sum of:
            0.061666455 = weight(_text_:22 in 4806) [ClassicSimilarity], result of:
              0.061666455 = score(doc=4806,freq=2.0), product of:
                0.15938555 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045514934 = queryNorm
                0.38690117 = fieldWeight in 4806, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4806)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 3.2008 12:21:28
  9. Seminario FRBR : Functional Requirements for Bibliographic Records: requisiti funzionali per record bibliografici, Florence, 27-28 January 2000, Proceedings (2000) 0.01
    0.007708307 = product of:
      0.030833228 = sum of:
        0.030833228 = product of:
          0.061666455 = sum of:
            0.061666455 = weight(_text_:22 in 3948) [ClassicSimilarity], result of:
              0.061666455 = score(doc=3948,freq=2.0), product of:
                0.15938555 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045514934 = queryNorm
                0.38690117 = fieldWeight in 3948, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3948)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    29. 8.2005 12:54:22
  10. Computational information retrieval (2001) 0.01
    0.0059357807 = product of:
      0.023743123 = sum of:
        0.023743123 = product of:
          0.047486246 = sum of:
            0.047486246 = weight(_text_:software in 4167) [ClassicSimilarity], result of:
              0.047486246 = score(doc=4167,freq=2.0), product of:
                0.18056466 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.045514934 = queryNorm
                0.2629875 = fieldWeight in 4167, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4167)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    This volume contains selected papers that focus on the use of linear algebra, computational statistics, and computer science in the development of algorithms and software systems for text retrieval. Experts in information modeling and retrieval share their perspectives on the design of scalable but precise text retrieval systems, revealing many of the challenges and obstacles that mathematical and statistical models must overcome to be viable for automated text processing. This very useful proceedings is an excellent companion for courses in information retrieval, applied linear algebra, and applied statistics. Computational Information Retrieval provides background material on vector space models for text retrieval that applied mathematicians, statisticians, and computer scientists may not be familiar with. For graduate students in these areas, several research questions in information modeling are exposed. In addition, several case studies concerning the efficacy of the popular Latent Semantic Analysis (or Indexing) approach are provided.
  11. Semantic Web services challenge : results from the first year (2009) 0.01
    0.0059357807 = product of:
      0.023743123 = sum of:
        0.023743123 = product of:
          0.047486246 = sum of:
            0.047486246 = weight(_text_:software in 2479) [ClassicSimilarity], result of:
              0.047486246 = score(doc=2479,freq=2.0), product of:
                0.18056466 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.045514934 = queryNorm
                0.2629875 = fieldWeight in 2479, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2479)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Service-Oriented Computing is one of the most promising software engineering trends for future distributed systems. Currently there are many different approaches to semantic web service descriptions and many frameworks built around them. Yet a common understanding, evaluation scheme, and test bed to compare and classify these frameworks in terms of their abilities and shortcomings, is still missing. "Semantic Web Services Challenge" is an edited volume that develops this common understanding of the various technologies intended to facilitate the automation of mediation, choreography and discovery for Web Services using semantic annotations. "Semantic Web Services Challenge" is designed for a professional audience composed of practitioners and researchers in industry. Professionals can use this book to evaluate SWS technology for their potential practical use. The book is also suitable for advanced-level students in computer science.
  12. Working with conceptual structures : contributions to ICCS 2000. 8th International Conference on Conceptual Structures: Logical, Linguistic, and Computational Issues. Darmstadt, August 14-18, 2000 (2000) 0.00
    0.0048967693 = product of:
      0.019587077 = sum of:
        0.019587077 = product of:
          0.039174154 = sum of:
            0.039174154 = weight(_text_:software in 5089) [ClassicSimilarity], result of:
              0.039174154 = score(doc=5089,freq=4.0), product of:
                0.18056466 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.045514934 = queryNorm
                0.21695362 = fieldWeight in 5089, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=5089)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Content
    Concepts & Language: Knowledge organization by procedures of natural language processing. A case study using the method GABEK (J. Zelger, J. Gadner) - Computer aided narrative analysis using conceptual graphs (H. Schärfe, P. Øhrstrøm) - Pragmatic representation of argumentative text: a challenge for the conceptual graph approach (H. Irandoust, B. Moulin) - Conceptual graphs as a knowledge representation core in a complex language learning environment (G. Angelova, A. Nenkova, S. Boycheva, T. Nikolov) - Conceptual Modeling and Ontologies: Relationships and actions in conceptual categories (Ch. Landauer, K.L. Bellman) - Concept approximations for formal concept analysis (J. Saquer, J.S. Deogun) - Faceted information representation (U. Priß) - Simple concept graphs with universal quantifiers (J. Tappe) - A framework for comparing methods for using or reusing multiple ontologies in an application (J. van Zyl, D. Corbett) - Designing task/method knowledge-based systems with conceptual graphs (M. Leclère, F. Trichet, Ch. Choquet) - A logical ontology (J. Farkas, J. Sarbo) - Algorithms and Tools: Fast concept analysis (Ch. Lindig) - A framework for conceptual graph unification (D. Corbett) - Visual CP representation of knowledge (H.D. Pfeiffer, R.T. Hartley) - Maximal isojoin for representing software textual specifications and detecting semantic anomalies (Th. Charnois) - Troika: using grids, lattices and graphs in knowledge acquisition (H.S. Delugach, B.E. Lampkin) - Open world theorem prover for conceptual graphs (J.E. Heaton, P. Kocura) - NetCare: a practical conceptual graphs software tool (S. Polovina, D. Strang) - CGWorld - a web based workbench for conceptual graphs management and applications (P. Dobrev, K. Toutanova) - Position papers: The edition project: Peirce's existential graphs (R. Müller) - Mining association rules using formal concept analysis (N. Pasquier) - Contextual logic summary (R. Wille) - Information channels and conceptual scaling (K.E. Wolff) - Spatial concepts - a rule exploration (S. Rudolph) - The TEXT-TO-ONTO learning environment (A. Mädche, St. Staab) - Controlling the semantics of metadata on audio-visual documents using ontologies (Th. Dechilly, B. Bachimont) - Building the ontological foundations of a terminology from natural language to conceptual graphs with Ribosome, a knowledge extraction system (Ch. Jacquelinet, A. Burgun) - CharGer: some lessons learned and new directions (H.S. Delugach) - Knowledge management using conceptual graphs (W.K. Pun)
  13. Understanding knowledge as a commons : from theory to practice (2007) 0.00
    0.0039571873 = product of:
      0.01582875 = sum of:
        0.01582875 = product of:
          0.0316575 = sum of:
            0.0316575 = weight(_text_:software in 1362) [ClassicSimilarity], result of:
              0.0316575 = score(doc=1362,freq=2.0), product of:
                0.18056466 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.045514934 = queryNorm
                0.17532499 = fieldWeight in 1362, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1362)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Content
    Inhalt: Introduction : an overview of the knowledge commons / Charlotte Hess and Elinor Ostrom The growth of the commons paradigm / David Bollier A framework for analyzing the knowledge commons / Elinor Ostrom and Charlotte Hess Countering enclosure : reclaiming the knowledge commons / Nancy Kranich Mertonianism unbound? : imagining free, decentralized access to most cultural and scientific material / James Boyle Preserving the knowledge commons / Donald J. Waters Creating an intellectual commons through open access / Peter Suber How to build a commons : is intellectual property constrictive, facilitating, or irrelevant? / Shubha Ghosh Collective action, civic engagement, and the knowledge commons / Peter Levine Free/open-source software as a framework for establishing commons in science / Charles M. Schweik Scholarly communication and libraries unbound : the opportunity of the commons / Wendy Pradt Lougee EconPort : creating and maintaining a knowledge commons / James C. Cox and J. Todd Swarthout
  14. Computational linguistics for the new millennium : divergence or synergy? Proceedings of the International Symposium held at the Ruprecht-Karls Universität Heidelberg, 21-22 July 2000. Festschrift in honour of Peter Hellwig on the occasion of his 60th birthday (2002) 0.00
    0.0038541534 = product of:
      0.015416614 = sum of:
        0.015416614 = product of:
          0.030833228 = sum of:
            0.030833228 = weight(_text_:22 in 4900) [ClassicSimilarity], result of:
              0.030833228 = score(doc=4900,freq=2.0), product of:
                0.15938555 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045514934 = queryNorm
                0.19345059 = fieldWeight in 4900, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4900)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
  15. New directions in cognitive information retrieval (2005) 0.00
    0.0034976925 = product of:
      0.01399077 = sum of:
        0.01399077 = product of:
          0.02798154 = sum of:
            0.02798154 = weight(_text_:software in 338) [ClassicSimilarity], result of:
              0.02798154 = score(doc=338,freq=4.0), product of:
                0.18056466 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.045514934 = queryNorm
                0.15496688 = fieldWeight in 338, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=338)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Classification
    ST 270 [Informatik # Monographien # Software und -entwicklung # Datenbanken, Datenbanksysteme, Data base management, Informationssysteme]
    RVK
    ST 270 [Informatik # Monographien # Software und -entwicklung # Datenbanken, Datenbanksysteme, Data base management, Informationssysteme]
  16. Culture and identity in knowledge organization : Proceedings of the Tenth International ISKO Conference 5-8 August 2008, Montreal, Canada (2008) 0.00
    0.0034976925 = product of:
      0.01399077 = sum of:
        0.01399077 = product of:
          0.02798154 = sum of:
            0.02798154 = weight(_text_:software in 2494) [ClassicSimilarity], result of:
              0.02798154 = score(doc=2494,freq=4.0), product of:
                0.18056466 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.045514934 = queryNorm
                0.15496688 = fieldWeight in 2494, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=2494)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Content
    KNOWLEDGE ORGANIZATION FOR INFORMATION MANAGEMENT AND RETRIEVAL Sabine Mas, L'Hedi Zäher and Manuel Zacklad. Design and Evaluation of Multi-viewed Knowledge System for Administrative Electronic Document Organization. - Xu Chen. The Influence of Existing Consistency Measures on the Relationship Between Indexing Consistency and Exhaustivity. - Michael Buckland and Ryan Shaw. 4W Vocabulary Mapping Across Diverse Reference Genres. - Abdus Sattar Chaudhry and Christopher S. G. Khoo. A Survey of the Top-level Categories in the Structure of Corporate Websites. - Nicolas L. George, Elin K. Jacob, Lijiang Guo, Lala Hajibayova and M Yasser Chuttur. A Case Study of Tagging Patterns in del.icio.us. - Kwan Yi and Lois Mai Chan. A Visualization Software Tool for Library of Congress Subject Headings. - Gercina Angela Borem Oliveira Lima. Hypertext Model - HTXM: A Model for Hypertext Organization of Documents. - Ali Shiri and Thane Chambers. Information Retrieval from Digital Libraries: Assessing the Potential Utility of Thesauri in Supporting Users' Search Behaviour in an Interdisciplinary Domain. - Verónica Vargas and Catalina Naumis. Water-related Language Analysis: The Need for a Thesaurus of Mexican Terminology. - Amanda Hill. What's in a Name?: Prototyping a Name Authority Service for UK Repositories. - Rick Szostak and Claudio Gnoli. Classifying by Phenomena, Theories and Methods: Examples with Focused Social Science Theories.
    EPISTEMOLOGICAL FOUNDATIONS OF KNOWLEDGE ORGANIZATION H. Peter Ohly. Knowledge Organization Pro and Retrospective. - Judith Simon. Knowledge and Trust in Epistemology and Social Software/Knowledge Technologies. - D. Grant Campbell. Derrida, Logocentrism, and the Concept of Warrant on the Semantic Web. - Jian Qin. Controlled Semantics Versus Social Semantics: An Epistemological Analysis. - Hope A. Olson. Wind and Rain and Dark of Night: Classification in Scientific Discourse Communities. - Thomas M. Dousa. Empirical Observation, Rational Structures, and Pragmatist Aims: Epistemology and Method in Julius Otto Kaiser's Theory of Systematic Indexing. - Richard P. Smiraglia. Noesis: Perception and Every Day Classification. - Birger Hjørland. Deliberate Bias in Knowledge Organization? - Joseph T. Tennis and Elin K. Jacob. Toward a Theory of Structure in Information Organization Frameworks. - Jack Andersen. Knowledge Organization as a Cultural Form: From Knowledge Organization to Knowledge Design. - Hur-Li Lee. Origins of the Main Classes in the First Chinese Bibliographic Classification. NON-TEXTUAL MATERIALS Abby Goodrum, Ellen Hibbard, Deborah Fels and Kathryn Woodcock. The Creation of Keysigns American Sign Language Metadata. - Ulrika Kjellman. Visual Knowledge Organization: Towards an International Standard or a Local Institutional Practice?
  17. Social capital and information technology (2004) 0.00
    0.0034625386 = product of:
      0.013850154 = sum of:
        0.013850154 = product of:
          0.027700309 = sum of:
            0.027700309 = weight(_text_:software in 5055) [ClassicSimilarity], result of:
              0.027700309 = score(doc=5055,freq=2.0), product of:
                0.18056466 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.045514934 = queryNorm
                0.15340936 = fieldWeight in 5055, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=5055)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Footnote
    Rez. in: JASIST 57(2006) no.5, S.723-724 (P. Galloway): "This collection consists of 14 chapters that bring together the two universes of discourse named in the title: Social Capital and Information Technology is edited by a sociologist (Marleen Huysman) and a computer scientist (Volker Wulf), who had both begun to see the importance of social ties to the success of knowledge management/knowledge sharing systems when they met and shared their interests. Its aim is chiefly to introduce the concept of social capital to information scientists and to demonstrate through a series of case studies how it can serve to explain the success or failure of information and communication technology systems, and even to assist in the building or improvement of such systems. Case studies range across many fields: Karelian Bear Dog breeders' databases, multiple-sport athletes' newsgroups, a network supporting Iranian NGOs, B2B software for geographical business clusters, and after-school computer labs for children. Of the papers gathered here, most were presented at an Amsterdam workshop in 2002 focused on knowledge management and social capital, whereas a few others, concentrating more directly on societal issues, were invited by the editors to leaven the mix. The result is a readable collection that marks a promising hybrid direction in information research, still characterized by what the editors term an "absolute lack of closure." The influence of knowledge management and informal learning threads is dominant, because the unit of analysis in all the studies is a definable user community. Examples all assume networked environments and computer-mediated communication, though they do not always prove that such technologies are the best way to solve problems. The network, however, is the bridging metaphor between the social and the technological.
  18. Spinning the Semantic Web : bringing the World Wide Web to its full potential (2003) 0.00
    0.0034625386 = product of:
      0.013850154 = sum of:
        0.013850154 = product of:
          0.027700309 = sum of:
            0.027700309 = weight(_text_:software in 1981) [ClassicSimilarity], result of:
              0.027700309 = score(doc=1981,freq=2.0), product of:
                0.18056466 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.045514934 = queryNorm
                0.15340936 = fieldWeight in 1981, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1981)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    As the World Wide Web continues to expand, it becomes increasingly difficult for users to obtain information efficiently. Because most search engines read format languages such as HTML or SGML, search results reflect formatting tags more than actual page content, which is expressed in natural language. Spinning the Semantic Web describes an exciting new type of hierarchy and standardization that will replace the current "Web of links" with a "Web of meaning." Using a flexible set of languages and tools, the Semantic Web will make all available information - display elements, metadata, services, images, and especially content - accessible. The result will be an immense repository of information accessible for a wide range of new applications. This first handbook for the Semantic Web covers, among other topics, software agents that can negotiate and collect information, markup languages that can tag many more types of information in a document, and knowledge systems that enable machines to read Web pages and determine their reliability. The truly interdisciplinary Semantic Web combines aspects of artificial intelligence, markup languages, natural language processing, information retrieval, knowledge representation, intelligent agents, and databases.
  19. XML data management : native XML and XML-enabled database systems (2003) 0.00
    0.0034270247 = product of:
      0.013708099 = sum of:
        0.013708099 = product of:
          0.027416198 = sum of:
            0.027416198 = weight(_text_:software in 2073) [ClassicSimilarity], result of:
              0.027416198 = score(doc=2073,freq=6.0), product of:
                0.18056466 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.045514934 = queryNorm
                0.1518359 = fieldWeight in 2073, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.015625 = fieldNorm(doc=2073)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Footnote
    There is some debate over what exactly constitutes a native XML database. Bourret (2003) favors the wider definition; other authors such as the Butler Group (2002) restrict the use of the term to database systems designed and built solely for storage and manipulation of XML. Two examples of the latter (Tamino and eXist) are covered in detailed chapters here, but also included in this section is the embedded XML database system, Berkeley DB XML, considered by makers Sleepycat Software to be "native" in that it is capable of storing XML natively but built on top of the Berkeley DB engine. To the uninitiated, the revelation that schemas and DTDs are not required by either Tamino or eXist might seem a little strange. Tamino implements "loose coupling" where the validation behavior can be set to "strict," "lax" (i.e., apply only to parts of a document) or "skip" (no checking); in eXist, schemas are simply optional. Many DTDs and schemas evolve as the XML documents are acquired, so these documents may adhere to slightly different schemas, and thus the database should support queries on similar documents that do not share the same structure. In fact, because of the difficulties in mappings between XML and database (especially relational) schemas, native XML databases are very useful for storage of semi-structured data, a point not made in either chapter. The chapter on embedded databases represents a "third way," being neither native nor of the XML-enabled relational type. These databases run inside purpose-written applications and are accessed via an API or similar, meaning that the application developer does not need to access database files at the operating system level but can rely on supplied routines to, for example, fetch and update database records. Thus, end-users do not use the databases directly; the applications do not usually include ad hoc end-user query tools.
This property renders embedded databases unsuitable for a large number of situations and they have become very much a niche market, but this market is growing rapidly. Embedded databases share an address space with the application, so the overhead of calls to the server is reduced; they also confer advantages in that they are easier to deploy, manage and administer compared to a conventional client-server solution. This chapter is a very good introduction to the subject; primers on generic embedded databases and embedded XML databases are helpfully provided before the author moves to an overview of the Open Source Berkeley system. Building an embedded database application makes far greater demands on the software developer and the remainder of the chapter is devoted to consideration of these programming issues.
    After several detailed examples of XML, Direen and Jones discuss sequence comparisons. The ability to create scored comparisons by such techniques as sequence alignment is fundamental to bioinformatics. For example, the function of a gene product may be inferred from similarity with a gene of known function but originating from a different organism, and any information modeling method must facilitate such comparisons. One such comparison tool, BLAST, which utilizes a heuristic method, has been the tool of choice for many years and is integrated into the NeoCore XMS (XML Management System) described herein. Any set of sequences that can be identified using an XPath query may thus become the targets of an embedded search. Again examples are given, though a BLASTp (protein) search is labeled as being BLASTn (nucleotide sequence) in one of them. Some variants of BLAST are computationally intensive, e.g., tBLASTx, where a nucleotide sequence is dynamically translated in all six reading frames and compared against similarly translated database sequences. Though these variants are implemented in NeoCore XMS, it would be interesting to see runtimes for such comparisons. Obviously the utility of this and the other four quite specific examples will depend on your interest in the application area, but two that are more research-oriented and general follow them. These chapters (on using XML with inductive databases and on XML warehouses) are both readable critical reviews of their respective subject areas. For those involved in the implementation of performance-critical applications an examination of benchmark results is mandatory; however, very few would examine the benchmark tests themselves. The picture that emerges from this section is that no single set is comprehensive and that some functionalities are not addressed by any available benchmark. As always, there is no substitute for an intimate knowledge of your data and how it is used.
In a direct comparison of an XML-enabled and a native XML database system (unfortunately neither is named), the authors conclude that though the native system has the edge in handling large documents, this comes at the expense of increasing index and data file size. The need to use legacy data and software will certainly favor the all-pervasive XML-enabled RDBMS such as Oracle 9i and IBM's DB2. Of more general utility is the chapter by Schmauch and Fellhauer comparing the approaches used by database systems for storing XML documents. Many of the limitations of current XML-handling systems may be traced to problems caused by the semi-structured nature of the documents, and while the authors have no panacea, the chapter forms a useful discussion of the issues and even raises the ugly prospect that a return to the drawing board may be unavoidable. The book concludes with an appraisal of the current status of XML by the editors that perhaps focuses a little too little on the database side, but overall I believe this book to be very useful indeed. Some of the indexing is a little idiosyncratic; for example, some tags used in the examples are indexed (perhaps a separate examples index would be better) and Ron Bourret's excellent web site might be better placed under "Bourret" rather than under "Ron", but this doesn't really detract from the book's qualities. The broad spectrum and careful balance of theory and practice is a combination that both database and XML professionals will find valuable."
  20. Multimedia content and the Semantic Web : methods, standards, and tools (2005) 0.00
    0.003337795 = product of:
      0.01335118 = sum of:
        0.01335118 = product of:
          0.02670236 = sum of:
            0.02670236 = weight(_text_:22 in 150) [ClassicSimilarity], result of:
              0.02670236 = score(doc=150,freq=6.0), product of:
                0.15938555 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045514934 = queryNorm
                0.16753313 = fieldWeight in 150, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=150)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Classification
    006.7 22
    Date
    7. 3.2007 19:30:22
    DDC
    006.7 22
