Search (5926 results, page 2 of 297)

  • Filter: language_ss:"e"
  • Filter: year_i:[2000 TO 2010}
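For readers unfamiliar with the syntax of the active filters above: these are Solr filter queries, and `year_i:[2000 TO 2010}` uses Solr's mixed-bracket range syntax, where `[` makes the lower bound inclusive and `}` makes the upper bound exclusive, so the filter matches 2000 <= year_i < 2010. A minimal sketch of how such a filtered, paginated request could be assembled; the host, port, and core name (`localhost:8983`, `catalog`) are invented for illustration:

```python
from urllib.parse import urlencode

# Each active facet becomes one fq (filter query) parameter; fq filters
# restrict the result set without affecting relevance scoring.
params = [
    ("q", "*:*"),
    ("fq", 'language_ss:"e"'),
    ("fq", "year_i:[2000 TO 2010}"),  # [ = inclusive lower, } = exclusive upper
    ("rows", "20"),
    ("start", "20"),  # page 2 at 20 results per page (offset = (page-1) * rows)
]
url = "http://localhost:8983/solr/catalog/select?" + urlencode(params)
```

The rows/start arithmetic matches the header: 5926 results at 20 per page gives 297 pages, and page 2 starts at offset 20.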
  1. Catarino, M.E.; Baptista, A.A.: Relating folksonomies with Dublin Core (2008) 0.07
    0.066988476 = product of:
      0.15072407 = sum of:
        0.08389453 = weight(_text_:applications in 2652) [ClassicSimilarity], result of:
          0.08389453 = score(doc=2652,freq=8.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.4864132 = fieldWeight in 2652, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2652)
        0.0140020205 = weight(_text_:of in 2652) [ClassicSimilarity], result of:
          0.0140020205 = score(doc=2652,freq=14.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.22855641 = fieldWeight in 2652, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2652)
        0.034061253 = weight(_text_:software in 2652) [ClassicSimilarity], result of:
          0.034061253 = score(doc=2652,freq=2.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.21915624 = fieldWeight in 2652, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2652)
        0.018766273 = product of:
          0.037532546 = sum of:
            0.037532546 = weight(_text_:22 in 2652) [ClassicSimilarity], result of:
              0.037532546 = score(doc=2652,freq=4.0), product of:
                0.13719016 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03917671 = queryNorm
                0.27358043 = fieldWeight in 2652, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2652)
          0.5 = coord(1/2)
      0.44444445 = coord(4/9)
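The explain tree above is standard Lucene ClassicSimilarity (TF-IDF) output, and its arithmetic can be reproduced directly. The sketch below recomputes the first result's score from the quantities shown in the dump; queryNorm and fieldNorm are read off the dump rather than derived, since they are normalization factors fixed at query time and index time respectively:

```python
import math

def tf(freq):
    # ClassicSimilarity term-frequency factor: sqrt(freq)
    return math.sqrt(freq)

def idf(doc_freq, max_docs):
    # ClassicSimilarity inverse document frequency: 1 + ln(N / (df + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

query_norm = 0.03917671  # query-time normalization, from the dump
field_norm = 0.0390625   # index-time length norm for this field, from the dump

# Term "applications" in doc 2652, freq=8:
idf_applications = idf(1471, 44218)                      # ~ 4.4025097
query_weight = idf_applications * query_norm             # ~ 0.17247584
field_weight = tf(8.0) * idf_applications * field_norm   # ~ 0.4864132
weight = query_weight * field_weight                     # ~ 0.08389453

# Document score: sum of the four matching clause weights from the dump,
# scaled by the coordination factor (matching clauses / total clauses) = 4/9.
term_weights = [0.08389453, 0.0140020205, 0.034061253, 0.018766273]
score = sum(term_weights) * (4 / 9)                      # ~ 0.066988476
```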
    
    Abstract
    Folksonomy is the result of describing Web resources with tags created by Web users. Although it has become a popular approach to describing resources, folksonomies are generally not being conveniently integrated into metadata. However, if the appropriate metadata elements are identified, then further work may be conducted to automatically assign tags to these elements (RDF properties) and use them in Semantic Web applications. This article presents research carried out to continue the project Kinds of Tags, which intends to identify the elements required for metadata originating from folksonomies and to propose an application profile for DC Social Tagging. The work provides information that may be used by software applications to assign tags to metadata elements and, therefore, a means for tags to be conveniently gathered by metadata interoperability tools. Despite the unquestionably high value of DC and the significance of the already existing properties in DC Terms, the pilot study revealed a significant number of tags for which no corresponding properties yet existed. A need for new properties, such as Action, Depth, Rate, and Utility, was determined. These potential new properties will have to be validated at a later stage by the DC Social Tagging Community.
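The mapping the abstract describes, routing tags to existing DC Terms properties and falling back to the proposed new ones, could be sketched as follows; both the example tags and the mapping rules here are invented for illustration and are not the study's actual data:

```python
# Hypothetical illustration of assigning folksonomy tags to Dublin Core
# properties. Tags with an existing DC Terms property are mapped directly;
# tags expressing user intent or judgment fall into the new properties the
# study proposes (Action, Depth, Rate, Utility); everything else is unmapped.
EXISTING = {
    "photography": "dcterms:subject",
    "2008": "dcterms:date",
    "berlin": "dcterms:coverage",
}
PROPOSED = {
    "toread": "Action",    # what the user intends to do with the resource
    "in-depth": "Depth",   # how thoroughly the resource treats its topic
    "5stars": "Rate",      # the user's rating
    "useful": "Utility",   # the user's judgment of usefulness
}

def assign(tag):
    tag = tag.lower()
    return EXISTING.get(tag) or PROPOSED.get(tag, "unmapped")
```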
    Pages
    S.14-22
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  2. Software for Indexing (2003) 0.07
    0.06672908 = product of:
      0.15014043 = sum of:
        0.017750802 = weight(_text_:of in 2294) [ClassicSimilarity], result of:
          0.017750802 = score(doc=2294,freq=90.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.28974813 = fieldWeight in 2294, product of:
              9.486833 = tf(freq=90.0), with freq of:
                90.0 = termFreq=90.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2294)
        0.010219917 = weight(_text_:systems in 2294) [ClassicSimilarity], result of:
          0.010219917 = score(doc=2294,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.08488525 = fieldWeight in 2294, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2294)
        0.096339785 = weight(_text_:software in 2294) [ClassicSimilarity], result of:
          0.096339785 = score(doc=2294,freq=64.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.6198675 = fieldWeight in 2294, product of:
              8.0 = tf(freq=64.0), with freq of:
                64.0 = termFreq=64.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2294)
        0.025829926 = product of:
          0.051659852 = sum of:
            0.051659852 = weight(_text_:packages in 2294) [ClassicSimilarity], result of:
              0.051659852 = score(doc=2294,freq=2.0), product of:
                0.2706874 = queryWeight, product of:
                  6.9093957 = idf(docFreq=119, maxDocs=44218)
                  0.03917671 = queryNorm
                0.1908469 = fieldWeight in 2294, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.9093957 = idf(docFreq=119, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=2294)
          0.5 = coord(1/2)
      0.44444445 = coord(4/9)
    
    Footnote
    Rez. in: Knowledge organization 30(2003) no.2, S.115-116 (C. Jacobs): "This collection of articles by indexing practitioners, software designers and vendors is divided into five sections: Dedicated Software, Embedded Software, Online and Web Indexing Software, Database and Image Software, and Voice-activated, Automatic, and Machine-aided Software. This diversity is its strength. Part 1 is introduced by two chapters on choosing dedicated software, highlighting the issues involved and providing tips on evaluating requirements. The second chapter includes a fourteen-page chart that analyzes the attributes of Authex Plus, three versions of CINDEX 1.5, MACREX 7, two versions of SKY Index (5.1 and 6.0) and wINDEX. The lasting value in this chart is its utility in making the prospective user aware of the various attributes/capabilities that are possible and that should be considered. The following chapters consist of 16 testimonials for these software packages, completed by a final chapter on specialized/customized software. The point is made that if a particular software function could increase your efficiency, it can probably be created. The chapters in Part 2, Embedded Software, go into a great deal more detail about how the programs work, and are less reviews than illustrations of functionality. Perhaps this is because they are not really stand-alones, but are functions within, or add-ons used with, larger word processing or publishing programs. The software considered are Microsoft Word, FrameMaker, PageMaker, IndexTension 3.1.5 that is used with QuarkXPress, and Index Tools Professional and IXgen that are used with FrameMaker. The advantages and disadvantages of embedded indexing are made very clear, but the actual illustrations are difficult to follow if one has not worked at all with embedded software. Nonetheless, the section is valuable as it highlights issues and provides pointers and solutions to embedded indexing problems.
    Part 3, Online and Web Indexing Software, opens with a chapter in which the functionalities of HTML/Prep, HTML Indexer, and RoboHELP HTML Edition are compared. The following three chapters look at them individually. This section helps clarify the basic types of non-database web indexing - that used for back-of-the-book style indexes, and that used for online help indexes. The first chapter of Part 4, Database and image software, begins with a good discussion of what database indexing is, but fails to carry through with any listing of general characteristics, problems and attributes that should be considered when choosing database indexing software. It does include the results of an informal survey on the Yahoogroups database indexing site, as well as three short case studies on database indexing projects. The survey provides interesting information about freelancing, but it is not very useful if you are trying to gather information about different software. For example, the most common type of software used by those surveyed turns out to be word-processing software. This seems an odd/awkward choice, and it would have been helpful to know how and why the non-specialized software is being used. The survey serves as a snapshot of a particular segment of database indexing practice, but is not helpful if you are thinking about purchasing, adapting, or commissioning software. The three case studies give an idea of the complexity of database indexing and there is a helpful bibliography.
    A chapter on image indexing starts with a useful discussion of the elements of bibliographic description needed for visual materials and of the variations in the functioning and naming of functions in different software packages. Sample features are discussed in light of four different software systems: MAVIS, Convera Screening Room, CONTENTdm, and Virage speech and pattern recognition programs. The chapter concludes with an overview of what one has to consider when choosing a system. The last chapter in this section is an oddball one on creating a back-of-the-book index using Microsoft Excel. The author warns: "It is not pretty, and it is not recommended" (p.209). A curiosity, but it should have been included as a counterpoint in the first part, not as part of the database indexing section. The final section begins with an excellent article on voice recognition software (Dragon Naturally Speaking Preferred), followed by a look at "automatic indexing" through a critique of Sonar Bookends Automatic Indexing Generator. The final two chapters deal with Data Harmony's Machine Aided Indexer; one of them refers specifically to a news content indexing system. In terms of scope, this reviewer would have liked to see thesaurus management software included, since thesaurus management and the integration of thesauri with database indexing software are common and time-consuming concerns. There are also a few editorial glitches, such as the placement of the oddball article and inconsistent uses of fonts and caps (e.g. VIRAGE and Virage), but achieving consistency with this many authors is, indeed, a difficult task. More serious is the fact that the index is inconsistent. It reads as if authors submitted their own keywords which were then harmonized, so that the level of indexing varies by chapter. For example, there is an entry for "controlled vocabulary" (p.265) (singular) with one locator, no cross-references.
There is an entry for "thesaurus software" (p.274) with two locators, plus a separate one for "Thesaurus Master" (p.274) with three locators. There are also references to thesauri/controlled vocabularies/taxonomies that are not mentioned in the index (e.g., the section Thesaurus management on p.204). This is sad. All too often indexing texts have poor indexes, I suppose because we are as prone to having to work under time pressures as the rest of the authors and editors in the world. But a good index that meets basic criteria should be a highlight in any book related to indexing. Overall this is a useful, if uneven, collection of articles written over the past few years. Because of the great variation between articles both in subject and in approach, there is something for everyone. The collection will be interesting to anyone who wants to be aware of how indexing software works and what it can do. I also definitely recommend it for information science teaching collections, since the explanations of the software carry implicit in them descriptions of how the indexing process itself is approached. However, the book's utility as a guide to purchasing choices is limited because of the unevenness; the vendor-written articles and testimonials are interesting and can certainly be helpful, but there are not nearly enough objective reviews. This is not a straight listing and comparison of software packages, but it deserves wide circulation since it presents an overall picture of the state of indexing software used by freelancers."
    Imprint
    Medford, NJ : Information Today, in association with the American Society of Indexers
  3. Olsen, K.A.: ¬The Internet, the Web, and eBusiness : formalizing applications for the real world (2005) 0.07
    0.06514208 = product of:
      0.11725573 = sum of:
        0.06278092 = weight(_text_:applications in 149) [ClassicSimilarity], result of:
          0.06278092 = score(doc=149,freq=28.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.3639983 = fieldWeight in 149, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.015625 = fieldNorm(doc=149)
        0.017837372 = weight(_text_:of in 149) [ClassicSimilarity], result of:
          0.017837372 = score(doc=149,freq=142.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.29116124 = fieldWeight in 149, product of:
              11.916375 = tf(freq=142.0), with freq of:
                142.0 = termFreq=142.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.015625 = fieldNorm(doc=149)
        0.008175933 = weight(_text_:systems in 149) [ClassicSimilarity], result of:
          0.008175933 = score(doc=149,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.0679082 = fieldWeight in 149, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.015625 = fieldNorm(doc=149)
        0.019267956 = weight(_text_:software in 149) [ClassicSimilarity], result of:
          0.019267956 = score(doc=149,freq=4.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.123973496 = fieldWeight in 149, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.015625 = fieldNorm(doc=149)
        0.009193558 = product of:
          0.018387116 = sum of:
            0.018387116 = weight(_text_:22 in 149) [ClassicSimilarity], result of:
              0.018387116 = score(doc=149,freq=6.0), product of:
                0.13719016 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03917671 = queryNorm
                0.1340265 = fieldWeight in 149, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=149)
          0.5 = coord(1/2)
      0.5555556 = coord(5/9)
    
    Classification
    004.678 22
    DDC
    004.678 22
    Footnote
    Rez. in: JASIST 57(2006) no.14, S.1979-1980 (J.G. Williams): "The Introduction and Part I of this book present the world of computing with a historical and philosophical overview of computers, computer applications, networks, the World Wide Web, and eBusiness, based on the notion that the real world places constraints on the application of these technologies and that, without a formalized approach, the benefits of these technologies cannot be realized. The concepts of real world constraints and the need for formalization are used as the cornerstones for a building-block approach for helping the reader understand computing, networking, the World Wide Web, and the applications that use these technologies as well as all the possibilities that these technologies hold for the future. The author's building block approach to understanding computing, networking and application building makes the book useful for science, business, and engineering students taking an introductory computing course and for social science students who want to understand more about the social impact of computers, the Internet, and Web technology. It is useful as well for managers and designers of Web and ebusiness applications, and for the general public who are interested in understanding how these technologies may impact their lives, their jobs, and the social context in which they live and work. The book does assume some experience and terminology in using PCs and the Internet but is not intended for computer science students, although they could benefit from the philosophical basis and the diverse viewpoints presented. The author uses numerous analogies from domains outside the area of computing to illustrate concepts and points of view that make the content understandable as well as interesting to individuals without any in-depth knowledge of computing, networking, software engineering, system design, ebusiness, and Web design.
These analogies include interesting real-world events ranging from the beginning of railroads, to Henry Ford's mass produced automobile, to the European Space Agency's loss of the 7 billion dollar Ariane rocket, to travel agency booking, to medical systems, to banking, to expanding democracy. The book gives the pros and cons of the possibilities offered by the Internet and the Web by presenting numerous examples and an analysis of the pros and cons of these technologies for the examples provided. The author shows, in an interesting manner, how the new economy based on the Internet and the Web affects society and business life on a worldwide basis now and how it will affect the future, and how society can take advantage of the opportunities that the Internet and the Web offer.
    The book is organized into six sections or parts with several chapters within each part. Part 1 does a good job of building an understanding of some of the historical aspects of computing and why formalization is important for building computer-based applications. A distinction is made between formalized and unformalized data, processes, and procedures, which the author cleverly uses to show how the level of formalization of data, processes, and procedures determines the functionality of computer applications. Part 1 also discusses the types of data that can be represented in symbolic form, which is crucial to using computer and networking technology in a virtual environment. This part also discusses the technical and cultural constraints upon computing, networking, and web technologies with many interesting examples. The cultural constraints discussed range from copyright to privacy issues. Part 1 is critical to understanding the author's point of view and discussions in other sections of the book. The discussion on machine intelligence and natural language processing is particularly well done. Part 2 discusses the fundamental concepts and standards of the Internet and Web. Part 3 introduces the need for formalization to construct ebusiness applications in the business-to-consumer category (B2C). There are many good and interesting examples of these B2C applications and the associated analyses of them using the concepts introduced in Parts 1 and 2 of the book. Part 4 examines the formalization of business-to-business (B2B) applications and discusses the standards that are needed to transmit data with a high level of formalization. Part 5 is a rather fascinating discussion of future possibilities and Part 6 presents a concise summary and conclusion.
The book covers a wide array of subjects in the computing, networking, and Web areas and although all of them are presented in an interesting style, some subjects may be more relevant and useful to individuals depending on their background or academic discipline. Part 1 is relevant to all potential readers no matter what their background or academic discipline, but Part 2 is a little more technical, although most people with an information technology or computer science background will not find much new here with the exception of the chapters on "Dynamic Web Pages" and "Embedded Scripts." Other readers will find this section informative and useful for understanding other parts of the book. Part 3 does not offer individuals with a background in computing, networking, or information science much in addition to what they should already know, but the chapters on "Searching" and "Web Presence" may be useful because they present some interesting notions about using the Web. Part 3 gives an overview of B2C applications and is where the author provides examples of the difference between services that are completely symbolic and services that have both a symbolic portion and a physical portion. Part 4 of the book discusses B2B technology once again with many good examples. The chapter on "XML" in Part 4 is not appropriate for readers without a technical background. Part 5 is a teacher's dream because it offers a number of situations that can be used for classroom discussions or case studies independent of background or academic discipline.
    Each chapter provides suggestions for exercises and discussions, which makes the book useful as a textbook. The suggestions in the exercise and discussion section at the end of each chapter are simply delightful to read and provide a basis for some lively discussion and fun exercises by students. These exercises appear to be well thought out and are intended to highlight the content of the chapter. The notes at the end of chapters provide valuable data that help the reader to understand a topic or a reference to an entity that the reader may not know. Chapter 1 on "formalism," chapter 2 on "symbolic data," chapter 3 on "constraints on technology," and chapter 4 on "cultural constraints" are extremely well presented and every reader needs to read these chapters because they lay the foundation for most of the chapters that follow. The analogies, examples, and points of view presented make for some really interesting reading and lively debate and discussion. These chapters comprise Part 1 of the book and not only provide a foundation for the rest of the book but could be used alone as the basis of a social science course on computing, networking, and the Web. Chapters 5 and 6 on Internet protocols and the development of Web protocols may be more detailed and filled with more acronyms than the average person wants to deal with, but the content is presented with analogies and examples that make it easier to digest. Chapter 7 will capture most readers' attention because it discusses how e-mail works and many of the issues with e-mail, which a majority of people in developed countries have dealt with. Chapter 8 is also one that most people will be interested in reading because it shows how Internet browsers work and the many issues such as security associated with these software entities.
Chapter 9 discusses the what, why, and how of the World Wide Web, which is a lead-in to chapter 10 on "Searching the Web" and chapter 11 on "Organizing the Web-Portals," two chapters that even technically oriented people should read, since they provide information that most people outside of information and library science are not likely to know.
    Chapter 12 on "Web Presence" is a useful discussion of what it means to have a Web site that is indexed by a spider from a major Web search engine. Chapter 13 on "Mobile Computing" is very well done and gives the reader a solid basis of what is involved with mobile computing without overwhelming them with technical details. Chapter 14 discusses the difference between pull technologies and push technologies using the Web that is understandable to almost anyone who has ever used the Web. Chapters 15, 16, and 17 are for the technically stout at heart; they cover "Dynamic Web Pages," " Embedded Scripts," and "Peer-to-Peer Computing." These three chapters will tend to dampen the spirits of anyone who does not come from a technical background. Chapter 18 on "Symbolic Services-Information Providers" and chapter 19 on "OnLine Symbolic Services-Case Studies" are ideal for class discussion and students assignments as is chapter 20, "Online Retail Shopping-Physical Items." Chapter 21 presents a number of case studies on the "Technical Constraints" discussed in chapter 3 and chapter 22 presents case studies on the "Cultural Constraints" discussed in chapter 4. These case studies are not only presented in an interesting manner they focus on situations that most Web users have encountered but never really given much thought to. Chapter 24 "A Better Model?" discusses a combined "formalized/unformalized" model that might make Web applications such as banking and booking travel work better than the current models. This chapter will cause readers to think about the role of formalization and the unformalized processes that are involved in any application. Chapters 24, 25, 26, and 27 which discuss the role of "Data Exchange," "Formalized Data Exchange," "Electronic Data Interchange-EDI," and "XML" in business-to-business applications on the Web may stress the limits of the nontechnically oriented reader even though it is presented in a very understandable manner. 
Chapters 28, 29, 30, and 31 discuss Web services, the automated value chain, electronic market places, and outsourcing, which are of high interest to business students, businessmen, and designers of Web applications and can be skimmed by others who want to understand ebusiness but are not interested in the details. In Part 5, chapters 32, 33, and 34 on "Interfacing with the Web of the Future," "A Disruptive Technology," "Virtual Businesses," and "Semantic Web" were, for me, as someone who teaches courses in IT and develops ebusiness applications, the most interesting chapters in the book because they provided some useful insights about what is likely to happen in the future. The summary in Part 6 of the book is quite well done and I wish I had read it before I started reading the other parts of the book.
    The book is quite large, with over 400 pages, and covers a myriad of topics, which is probably more than any one course could cover, but an instructor could pick and choose the chapters most appropriate to the course content. The book could be used for multiple courses by selecting the relevant topics. I enjoyed the first-person, rather down-to-earth writing style and the number of examples and analogies the author presented. I believe most people could relate to the examples and situations presented by the author. As a teacher in Information Technology, I find the discussion questions at the end of the chapters and the case studies a valuable resource, as are the end-of-chapter notes. I highly recommend this book for an introductory course that combines computing, networking, the Web, and ebusiness for Business and Social Science students, as well as for an introductory course for students in Information Science, Library Science, and Computer Science. Likewise, I believe IT managers and Web page designers could benefit from selected chapters in the book."
  4. Hooland, S. van; Bontemps, Y.; Kaufman, S.: Answering the call for more accountability : applying data profiling to museum metadata (2008) 0.06
    0.064865164 = product of:
      0.14594662 = sum of:
        0.07118686 = weight(_text_:applications in 2644) [ClassicSimilarity], result of:
          0.07118686 = score(doc=2644,freq=4.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.41273528 = fieldWeight in 2644, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.046875 = fieldNorm(doc=2644)
        0.017962547 = weight(_text_:of in 2644) [ClassicSimilarity], result of:
          0.017962547 = score(doc=2644,freq=16.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.2932045 = fieldWeight in 2644, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=2644)
        0.040873505 = weight(_text_:software in 2644) [ClassicSimilarity], result of:
          0.040873505 = score(doc=2644,freq=2.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.2629875 = fieldWeight in 2644, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.046875 = fieldNorm(doc=2644)
        0.015923709 = product of:
          0.031847417 = sum of:
            0.031847417 = weight(_text_:22 in 2644) [ClassicSimilarity], result of:
              0.031847417 = score(doc=2644,freq=2.0), product of:
                0.13719016 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03917671 = queryNorm
                0.23214069 = fieldWeight in 2644, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2644)
          0.5 = coord(1/2)
      0.44444445 = coord(4/9)
    
    Abstract
    Although the issue of metadata quality is recognized as an important topic within the metadata research community, the cultural heritage sector has been slow to develop methodologies, guidelines and tools for addressing this topic in practice. This paper concentrates on metadata quality specifically within the museum sector and describes the potential of data-profiling techniques for metadata quality evaluation. A case study illustrates the application of a generalpurpose data-profiling tool on a large collection of metadata records from an ethnographic collection. After an analysis of the results of the case-study the paper reviews further steps in our research and presents the implementation of a metadata quality tool within an open-source collection management software.
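The field-level profiling the abstract describes can be illustrated with a small sketch (not the authors' actual tool): for each metadata field, measure completeness and value diversity across a set of records. The example records are invented:

```python
from collections import Counter

# Invented sample of museum metadata records; empty strings model
# missing or unfilled fields.
records = [
    {"title": "Mask", "creator": "unknown", "date": "1900"},
    {"title": "Drum", "creator": "", "date": "1900"},
    {"title": "Bowl", "creator": "unknown", "date": ""},
]

def profile(records, field):
    # Basic data-profiling statistics for one metadata field.
    values = [r.get(field, "") for r in records]
    filled = [v for v in values if v]
    return {
        "fill_rate": len(filled) / len(records),  # completeness
        "distinct": len(set(filled)),             # value diversity
        "top": Counter(filled).most_common(1),    # dominant value
    }

for field in ("title", "creator", "date"):
    print(field, profile(records, field))
```

A low fill rate or a single dominant placeholder value (such as "unknown") is exactly the kind of quality problem such profiling surfaces.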
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
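The nested score breakdowns throughout this result list follow Lucene's ClassicSimilarity (TF-IDF) scoring: each term leaf multiplies a queryWeight (idf × queryNorm) by a fieldWeight (tf × idf × fieldNorm), with tf = sqrt(termFreq); the per-document score then sums the leaves and applies the coord factor. As a minimal sketch (not part of the bibliographic records themselves), one leaf can be recomputed from the numbers shown:

```python
import math

def classic_similarity_term_score(term_freq, idf, query_norm, field_norm):
    """Reproduce one term leaf of a Lucene ClassicSimilarity explain tree."""
    tf = math.sqrt(term_freq)             # tf(freq) = sqrt(freq)
    query_weight = idf * query_norm       # queryWeight = idf * queryNorm
    field_weight = tf * idf * field_norm  # fieldWeight = tf * idf * fieldNorm
    return query_weight * field_weight

# Values taken from the weight(_text_:software in 2644) leaf above
score = classic_similarity_term_score(
    term_freq=2.0, idf=3.9671519, query_norm=0.03917671, field_norm=0.046875)
print(score)  # agrees with the reported 0.040873505 within float rounding
```

The same recomputation applies to every leaf in the listing; only freq, idf, and fieldNorm vary per term and document.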
  5. Carvalho, J.: ¬An XML representation of the UNIMARC manual : a working prototype (2005) 0.06
    0.06458748 = product of:
      0.14532183 = sum of:
        0.041947264 = weight(_text_:applications in 4355) [ClassicSimilarity], result of:
          0.041947264 = score(doc=4355,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.2432066 = fieldWeight in 4355, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4355)
        0.019801848 = weight(_text_:of in 4355) [ClassicSimilarity], result of:
          0.019801848 = score(doc=4355,freq=28.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.32322758 = fieldWeight in 4355, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4355)
        0.03540283 = weight(_text_:systems in 4355) [ClassicSimilarity], result of:
          0.03540283 = score(doc=4355,freq=6.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.29405114 = fieldWeight in 4355, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4355)
        0.048169892 = weight(_text_:software in 4355) [ClassicSimilarity], result of:
          0.048169892 = score(doc=4355,freq=4.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.30993375 = fieldWeight in 4355, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4355)
      0.44444445 = coord(4/9)
    
    Abstract
    The UNIMARC manual defines a standard for the formal representation of bibliographic information. For that purpose the UNIMARC manual contains different types of information: structural rules, which define that records are composed of a leader, a set of control fields and a set of data fields, with certain syntactic characteristics; content rules, which define required fields and acceptable values for various components of the record; and, finally, examples, explanatory notes and cross-references to other points of the manual. Much of this information must find its way into computer systems, where it will be used to validate records, produce indexes, adequately format records for display and, in some cases, provide human-readable help. Providing the UNIMARC manual in XML greatly simplifies the full implementation of the format in computer systems. Our goal was to produce a formal representation of the UNIMARC format, so that the standard can be incorporated into software systems in a transparent way. The outcome is an XML representation of the UNIMARC manual, which can be processed automatically by applications that need to enforce the format rules, provide help information or vocabularies. We developed a schema for the UNIMARC manual and a set of software tools that demonstrate its usage.
    Footnote
    Paper presented at the World Library and Information Congress: 71st IFLA General Conference and Council "Libraries - A voyage of discovery", August 14th - 18th 2005, Oslo, Norway.
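The abstract above describes validating records against rules encoded in an XML version of the UNIMARC manual. As a rough sketch of that idea, with an invented element vocabulary (the authors' actual XML format is not reproduced here), a mandatory-subfield check might look like:

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment in the spirit of the XML manual described above;
# element and attribute names are illustrative, not the authors' actual format.
MANUAL_XML = """
<field tag="200" mandatory="true" repeatable="false">
  <subfield code="a" mandatory="true"/>
  <subfield code="f" mandatory="false"/>
</field>
"""

def validate_field(field_rule_xml, subfields_present):
    """Check a record's subfield codes against one field definition."""
    rule = ET.fromstring(field_rule_xml)
    errors = []
    for sf in rule.findall("subfield"):
        if sf.get("mandatory") == "true" and sf.get("code") not in subfields_present:
            errors.append(f"missing mandatory subfield ${sf.get('code')}")
    return errors

print(validate_field(MANUAL_XML, {"f"}))  # reports the missing mandatory $a
```

Because the rules live in data rather than code, the same validator serves any field definition the manual supplies, which is the transparency the abstract argues for.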
  6. Doszkocs, T.E.; Zamora, A.: Dictionary services and spelling aids for Web searching (2004) 0.06
    0.06319208 = product of:
      0.14218217 = sum of:
        0.08389453 = weight(_text_:applications in 2541) [ClassicSimilarity], result of:
          0.08389453 = score(doc=2541,freq=8.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.4864132 = fieldWeight in 2541, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2541)
        0.019081537 = weight(_text_:of in 2541) [ClassicSimilarity], result of:
          0.019081537 = score(doc=2541,freq=26.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.31146988 = fieldWeight in 2541, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2541)
        0.020439833 = weight(_text_:systems in 2541) [ClassicSimilarity], result of:
          0.020439833 = score(doc=2541,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.1697705 = fieldWeight in 2541, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2541)
        0.018766273 = product of:
          0.037532546 = sum of:
            0.037532546 = weight(_text_:22 in 2541) [ClassicSimilarity], result of:
              0.037532546 = score(doc=2541,freq=4.0), product of:
                0.13719016 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03917671 = queryNorm
                0.27358043 = fieldWeight in 2541, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2541)
          0.5 = coord(1/2)
      0.44444445 = coord(4/9)
    
    Abstract
    The Specialized Information Services Division (SIS) of the National Library of Medicine (NLM) provides Web access to more than a dozen scientific databases on toxicology and the environment on TOXNET. Search queries on TOXNET often include misspelled or variant English words, medical and scientific jargon and chemical names. Following the example of search engines like Google and ClinicalTrials.gov, we set out to develop a spelling "suggestion" system for increased recall and precision in TOXNET searching. This paper describes the development of dictionary technology that can be used in a variety of applications such as orthographic verification, writing aid, natural language processing, and information storage and retrieval. The design of the technology allows building complex applications using the components developed in the earlier phases of the work in a modular fashion without extensive rewriting of computer code. Since many of the potential applications envisioned for this work have on-line or web-based interfaces, the dictionaries and other computer components must have fast response, and must be adaptable to open-ended database vocabularies, including chemical nomenclature. The dictionary vocabulary for this work was derived from SIS and other databases and specialized resources, such as NLM's Unified Medical Language System (UMLS). The resulting technology, A-Z Dictionary (AZdict), has three major constituents: 1) the vocabulary list, 2) the word attributes that define part of speech and morphological relationships between words in the list, and 3) a set of programs that implements the retrieval of words and their attributes, and determines similarity between words (ChemSpell). These three components can be used in various applications such as spelling verification, spelling aid, part-of-speech tagging, paraphrasing, and many other natural language processing functions.
    Date
    14. 8.2004 17:22:56
    Source
    Online. 28(2004) no.3, S.22-29
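The abstract does not spell out ChemSpell's similarity measure in implementable detail, but the general idea of ranking vocabulary words by closeness to a misspelled query can be illustrated with approximate string matching from Python's standard library (the toy vocabulary below stands in for the A-Z Dictionary word list):

```python
from difflib import get_close_matches

# Illustrative stand-in for a chemical-name vocabulary
vocabulary = ["toluene", "benzene", "phenol", "aniline", "acetone"]

def suggest(word, n=3, cutoff=0.6):
    """Return up to n vocabulary words ranked by similarity to the query."""
    return get_close_matches(word.lower(), vocabulary, n=n, cutoff=cutoff)

print(suggest("tolune"))  # best suggestion for the misspelling comes first
```

A production system like the one described would add morphological attributes and chemical-nomenclature handling on top of such a ranking step.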
  7. Almeida, M.B.; Barbosa, R.R.: Ontologies in knowledge management support : a case study (2009) 0.06
    0.06226595 = product of:
      0.1400984 = sum of:
        0.050336715 = weight(_text_:applications in 3117) [ClassicSimilarity], result of:
          0.050336715 = score(doc=3117,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.2918479 = fieldWeight in 3117, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.046875 = fieldNorm(doc=3117)
        0.014200641 = weight(_text_:of in 3117) [ClassicSimilarity], result of:
          0.014200641 = score(doc=3117,freq=10.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.23179851 = fieldWeight in 3117, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=3117)
        0.034687545 = weight(_text_:systems in 3117) [ClassicSimilarity], result of:
          0.034687545 = score(doc=3117,freq=4.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.28811008 = fieldWeight in 3117, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.046875 = fieldNorm(doc=3117)
        0.040873505 = weight(_text_:software in 3117) [ClassicSimilarity], result of:
          0.040873505 = score(doc=3117,freq=2.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.2629875 = fieldWeight in 3117, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.046875 = fieldNorm(doc=3117)
      0.44444445 = coord(4/9)
    
    Abstract
    Information and knowledge are true assets in modern organizations. In order to cope with the need to manage these assets, corporations have invested in a set of practices that are conventionally called knowledge management. This article presents a case study on the development and the evaluation of ontologies that was conducted within the scope of a knowledge management project undertaken by the second largest Brazilian energy utility. Ontologies have different applications and can be used in knowledge management, in information retrieval, and in information systems, to mention but a few. Within the information systems realm, ontologies are generally used as system models, but their usage has not been restricted to software development. We advocate that, once its content has been assessed, an ontology may provide benefits to corporate communication and, therefore, provide support to knowledge management initiatives. We expect to further contribute by describing possibilities for the application of ontologies within organizational environments.
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.10, S.2032-2047
  8. Chein, M.; Genest, D.: CGs applications : where are we 7 years after the first ICCS? (2000) 0.06
    0.06213688 = product of:
      0.18641064 = sum of:
        0.13131571 = weight(_text_:applications in 5075) [ClassicSimilarity], result of:
          0.13131571 = score(doc=5075,freq=10.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.76135707 = fieldWeight in 5075, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5075)
        0.0074091726 = weight(_text_:of in 5075) [ClassicSimilarity], result of:
          0.0074091726 = score(doc=5075,freq=2.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.120940685 = fieldWeight in 5075, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5075)
        0.047685754 = weight(_text_:software in 5075) [ClassicSimilarity], result of:
          0.047685754 = score(doc=5075,freq=2.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.30681872 = fieldWeight in 5075, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5075)
      0.33333334 = coord(3/9)
    
    Abstract
    The traditional distinction between theories (developed to tackle theoretical problems) and applications (based on theories and realized to help a user) is blurred in computer science. To test their theories, computer scientists often write programs. This paper focuses on the features that make such a program an application (also in its software engineering meaning). The discussion is more specifically aimed at artificial intelligence applications and especially conceptual graphs applications presented in ICCS papers, and the importance of applications for a scientific domain.
  9. Devedzic, V.: Semantic Web and education (2006) 0.06
    0.060800433 = product of:
      0.13680097 = sum of:
        0.050336715 = weight(_text_:applications in 5995) [ClassicSimilarity], result of:
          0.050336715 = score(doc=5995,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.2918479 = fieldWeight in 5995, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.046875 = fieldNorm(doc=5995)
        0.021062955 = weight(_text_:of in 5995) [ClassicSimilarity], result of:
          0.021062955 = score(doc=5995,freq=22.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.34381276 = fieldWeight in 5995, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=5995)
        0.0245278 = weight(_text_:systems in 5995) [ClassicSimilarity], result of:
          0.0245278 = score(doc=5995,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.2037246 = fieldWeight in 5995, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.046875 = fieldNorm(doc=5995)
        0.040873505 = weight(_text_:software in 5995) [ClassicSimilarity], result of:
          0.040873505 = score(doc=5995,freq=2.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.2629875 = fieldWeight in 5995, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.046875 = fieldNorm(doc=5995)
      0.44444445 = coord(4/9)
    
    Abstract
    The first section of "Semantic Web and Education" surveys the basic aspects and features of the Semantic Web. After this basic review, the book turns its focus to its primary topic of how Semantic Web developments can be used to build attractive and more successful education applications. The book analytically discusses the technical areas of architecture, metadata, learning objects, software engineering trends, and more. Integrated with these technical topics are examinations of learning-oriented topics such as learner modeling, collaborative learning, learning management, learning communities, ontological engineering of web-based learning, and related topics. The result is a thorough and highly useful presentation on the confluence of the technical aspects of the Semantic Web and the field of Education, or the art of teaching. The book will be of considerable interest to researchers and students in the fields of Information Systems, Computer Science, and Education.
  10. Renear, A.H.; Wickett, K.M.; Urban, R.J.; Dubin, D.; Shreeves, S.L.: Collection/item metadata relationships (2008) 0.06
    0.060443893 = product of:
      0.13599876 = sum of:
        0.07118686 = weight(_text_:applications in 2623) [ClassicSimilarity], result of:
          0.07118686 = score(doc=2623,freq=4.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.41273528 = fieldWeight in 2623, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.046875 = fieldNorm(doc=2623)
        0.014200641 = weight(_text_:of in 2623) [ClassicSimilarity], result of:
          0.014200641 = score(doc=2623,freq=10.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.23179851 = fieldWeight in 2623, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=2623)
        0.034687545 = weight(_text_:systems in 2623) [ClassicSimilarity], result of:
          0.034687545 = score(doc=2623,freq=4.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.28811008 = fieldWeight in 2623, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.046875 = fieldNorm(doc=2623)
        0.015923709 = product of:
          0.031847417 = sum of:
            0.031847417 = weight(_text_:22 in 2623) [ClassicSimilarity], result of:
              0.031847417 = score(doc=2623,freq=2.0), product of:
                0.13719016 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03917671 = queryNorm
                0.23214069 = fieldWeight in 2623, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2623)
          0.5 = coord(1/2)
      0.44444445 = coord(4/9)
    
    Abstract
    Contemporary retrieval systems, which search across collections, usually ignore collection-level metadata. Alternative approaches, exploiting collection-level information, will require an understanding of the various kinds of relationships that can obtain between collection-level and item-level metadata. This paper outlines the problem and describes a project that is developing a logic-based framework for classifying collection/item metadata relationships. This framework will support (i) metadata specification developers defining metadata elements, (ii) metadata creators describing objects, and (iii) system designers implementing systems that take advantage of collection-level metadata. We present three examples of collection/item metadata relationship categories (attribute/value-propagation, value-propagation, and value-constraint) and show that even in these simple cases a precise formulation requires modal notions in addition to first-order logic. These formulations are related to recent work in information retrieval and ontology evaluation.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
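The paper's value-propagation category can be hinted at with a deliberately simplified sketch (the data model and field names below are invented for illustration and elide the modal logic the authors argue is actually needed): a value recorded once at collection level is taken to apply to each item that does not override it.

```python
# Invented, minimal data model: a collection record with nested item records
collection = {
    "id": "coll-1",
    "subject": "ethnography",  # collection-level value to propagate
    "items": [{"id": "item-1"},
              {"id": "item-2", "subject": "weaving"}],  # item override
}

def propagate(coll, attribute):
    """Copy a collection-level value to items lacking their own value."""
    for item in coll["items"]:
        item.setdefault(attribute, coll[attribute])
    return coll["items"]

items = propagate(collection, "subject")
print([i["subject"] for i in items])  # item-level values are preserved
```

Even this toy version shows why the relationship categories matter to system designers: a retrieval system that ignores the collection record would miss the subject of item-1 entirely.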
  11. Neelameghan, A.; Iyer, H.: Information organization to assist knowledge discovery : case studies with non-bibliographic databases (2003) 0.06
    0.05935433 = product of:
      0.13354725 = sum of:
        0.041947264 = weight(_text_:applications in 5522) [ClassicSimilarity], result of:
          0.041947264 = score(doc=5522,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.2432066 = fieldWeight in 5522, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5522)
        0.011833867 = weight(_text_:of in 5522) [ClassicSimilarity], result of:
          0.011833867 = score(doc=5522,freq=10.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.19316542 = fieldWeight in 5522, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5522)
        0.045704857 = weight(_text_:systems in 5522) [ClassicSimilarity], result of:
          0.045704857 = score(doc=5522,freq=10.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.37961838 = fieldWeight in 5522, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5522)
        0.034061253 = weight(_text_:software in 5522) [ClassicSimilarity], result of:
          0.034061253 = score(doc=5522,freq=2.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.21915624 = fieldWeight in 5522, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5522)
      0.44444445 = coord(4/9)
    
    Abstract
    Enumerates various paths that may lead to knowledge discovery (KD). Most of these paths begin from knowing what exists. To know what exists about an entity requires comprehensively assembling relevant data and information, in-depth analysis of the information, and identifying relations among the concepts in the related and even apparently unrelated subjects. Provision has to be made to reorganize and synthesize the information retrieved and/or that obtained through observation, experiment, survey, etc. Information and communication technologies (ICT) have considerably augmented the capabilities of information systems. Such ICT applications may range from simple to sophisticated computerized systems, with data gathered using aerial photography, remote sensing, satellite imagery, large radar and planetary telescopes and many other instrument records of phenomena, as well as downloading via the Internet. While classification helps in data prospecting and data mining, for it to assist the KD process effectively, it has to be supplemented with good indexes, hypertext links, access to statistical and modeling techniques, etc. Computer software assists text analysis, complex data manipulation, computation, statistical analysis, concept mapping, etc. But manual information systems can also assist KD. Enumerates several prerequisites to KD and relevant tools and techniques to be incorporated into information support systems. Presents case studies of information systems and services that assisted KD.
  12. Mineau, G.W.: ¬The engineering of a CG-based system : fundamental issues (2000) 0.06
    0.058542717 = product of:
      0.13172111 = sum of:
        0.07118686 = weight(_text_:applications in 5077) [ClassicSimilarity], result of:
          0.07118686 = score(doc=5077,freq=4.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.41273528 = fieldWeight in 5077, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.046875 = fieldNorm(doc=5077)
        0.020082738 = weight(_text_:of in 5077) [ClassicSimilarity], result of:
          0.020082738 = score(doc=5077,freq=20.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.32781258 = fieldWeight in 5077, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=5077)
        0.0245278 = weight(_text_:systems in 5077) [ClassicSimilarity], result of:
          0.0245278 = score(doc=5077,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.2037246 = fieldWeight in 5077, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.046875 = fieldNorm(doc=5077)
        0.015923709 = product of:
          0.031847417 = sum of:
            0.031847417 = weight(_text_:22 in 5077) [ClassicSimilarity], result of:
              0.031847417 = score(doc=5077,freq=2.0), product of:
                0.13719016 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03917671 = queryNorm
                0.23214069 = fieldWeight in 5077, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5077)
          0.5 = coord(1/2)
      0.44444445 = coord(4/9)
    
    Abstract
    This paper presents some important issues that a knowledge engineer must consider when developing a conceptual graph (CG) based system, particularly in the context of a deductive data/knowledge base. These issues entail fundamental representation choices that must be made prior to the development of the system. Of course, inevitable consequences follow and delimit the scope of the system in terms of its representational and inferential capabilities. However, for industrial development, a less ambitious but realistic approach is often more suited to the kind of constraints usually imposed by the market place: feasibility, scalability, simplicity, interpretability, portability, partial knowledge, time to market, etc. This paper presents and discusses some of these issues and sets the stage for an in-depth discussion pertaining to the development of CG-based systems for industrial applications, particularly for applications where a CG system provides the conceptual level of knowledge organization functionalities required by an information system
    Date
    3. 9.2000 15:22:33
  13. Will, L.: Thesaurus management software (2009) 0.06
    0.05790017 = product of:
      0.17370051 = sum of:
        0.05872617 = weight(_text_:applications in 3892) [ClassicSimilarity], result of:
          0.05872617 = score(doc=3892,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.34048924 = fieldWeight in 3892, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3892)
        0.01960283 = weight(_text_:of in 3892) [ClassicSimilarity], result of:
          0.01960283 = score(doc=3892,freq=14.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.31997898 = fieldWeight in 3892, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3892)
        0.09537151 = weight(_text_:software in 3892) [ClassicSimilarity], result of:
          0.09537151 = score(doc=3892,freq=8.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.61363745 = fieldWeight in 3892, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3892)
      0.33333334 = coord(3/9)
    
    Abstract
    Thesaurus data structures and exchange formats (ways of tagging and encoding thesauri for transfer between computer applications) are discussed. Single- and multiple-user thesaurus software is functionally similar, apart from scale. Several lists of requirements for such software have been published, and important aspects are summarized here, including input, editing, output, and the interfaces used by indexers and searchers. The way in which thesaurus software may be extended to other types of controlled vocabularies is discussed briefly, followed by issues that arise in the management and updating of thesauri, including changes to collections of documents indexed by previous versions and the mapping and merging of thesauri to provide a common search interface.
    Source
    Encyclopedia of library and information sciences. 3rd ed. Ed.: M.J. Bates
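One recurring requirement in the lists the article summarizes is that reciprocal relationships (broader term/narrower term) stay consistent during editing. A toy structure, not any particular product's design, can illustrate the idea:

```python
from collections import defaultdict

class Thesaurus:
    """Toy thesaurus that keeps broader/narrower term links reciprocal."""

    def __init__(self):
        self.broader = defaultdict(set)   # term -> its broader terms (BT)
        self.narrower = defaultdict(set)  # term -> its narrower terms (NT)

    def add_nt(self, term, narrower_term):
        # Entering an NT link implies the reciprocal BT link
        self.narrower[term].add(narrower_term)
        self.broader[narrower_term].add(term)

t = Thesaurus()
t.add_nt("vocabularies", "thesauri")
print(t.broader["thesauri"])  # the reciprocal BT link was added automatically
```

Real thesaurus managers extend the same bookkeeping to related terms (RT), use/used-for pairs, and the exchange formats the article describes.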
  14. Corbett, L.E.: Serials: review of the literature 2000-2003 (2006) 0.06
    0.055950508 = product of:
      0.16785152 = sum of:
        0.017552461 = weight(_text_:of in 1088) [ClassicSimilarity], result of:
          0.017552461 = score(doc=1088,freq=22.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.28651062 = fieldWeight in 1088, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1088)
        0.020439833 = weight(_text_:systems in 1088) [ClassicSimilarity], result of:
          0.020439833 = score(doc=1088,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.1697705 = fieldWeight in 1088, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1088)
        0.12985922 = sum of:
          0.103319705 = weight(_text_:packages in 1088) [ClassicSimilarity], result of:
            0.103319705 = score(doc=1088,freq=2.0), product of:
              0.2706874 = queryWeight, product of:
                6.9093957 = idf(docFreq=119, maxDocs=44218)
                0.03917671 = queryNorm
              0.3816938 = fieldWeight in 1088, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.9093957 = idf(docFreq=119, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1088)
          0.026539518 = weight(_text_:22 in 1088) [ClassicSimilarity], result of:
            0.026539518 = score(doc=1088,freq=2.0), product of:
              0.13719016 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03917671 = queryNorm
              0.19345059 = fieldWeight in 1088, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1088)
      0.33333334 = coord(3/9)
    
    Abstract
    The topic of electronic journals (e-journals) dominated the serials literature from 2000 to 2003. This review is limited to the events and issues within the broad topics of cost, management, and archiving. Coverage of cost includes such initiatives as PEAK, JACC, BioMed Central, SPARC, open access, the "Big Deal," and "going e-only." Librarians combated the continued price increase trend for journals, fueled in part by publisher mergers, with the economies found with bundled packages and consortial subscriptions. Serials management topics include usage statistics; core title lists; staffing needs; the "A-Z list" and other services from such companies as Serials Solutions; "deep linking"; link resolvers such as SFX; development of standards or guidelines, such as COUNTER and ERMI; tracking of license terms; vendor mergers; and the demise of integrated library systems and a subscription agent's bankruptcy. Librarians archived print volumes in storage facilities due to space shortages. Librarians and publishers struggled with electronic archiving concepts, discussing questions of who, where, and how. Projects such as LOCKSS tested potential solutions, but missing online content due to the Tasini court case and retractions posed more archiving difficulties. The serials literature captured much of the upheaval resulting from the rapid pace of changes, many linked to the advent of e-journals.
    Date
    10. 9.2000 17:38:22
  15. Tudhope, D.: New Applications of Knowledge Organization Systems : introduction to a special issue (2004) 0.06
    0.05589719 = product of:
      0.16769157 = sum of:
        0.10067343 = weight(_text_:applications in 2344) [ClassicSimilarity], result of:
          0.10067343 = score(doc=2344,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.5836958 = fieldWeight in 2344, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.09375 = fieldNorm(doc=2344)
        0.017962547 = weight(_text_:of in 2344) [ClassicSimilarity], result of:
          0.017962547 = score(doc=2344,freq=4.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.2932045 = fieldWeight in 2344, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.09375 = fieldNorm(doc=2344)
        0.0490556 = weight(_text_:systems in 2344) [ClassicSimilarity], result of:
          0.0490556 = score(doc=2344,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.4074492 = fieldWeight in 2344, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.09375 = fieldNorm(doc=2344)
      0.33333334 = coord(3/9)
    
    Footnote
    Journal of digital information. 4(2004) no.4.
  16. Wolfram, D.: Applied informetrics for information retrieval research (2003) 0.06
    0.05589719 = product of:
      0.16769157 = sum of:
        0.10067343 = weight(_text_:applications in 4589) [ClassicSimilarity], result of:
          0.10067343 = score(doc=4589,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.5836958 = fieldWeight in 4589, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.09375 = fieldNorm(doc=4589)
        0.017962547 = weight(_text_:of in 4589) [ClassicSimilarity], result of:
          0.017962547 = score(doc=4589,freq=4.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.2932045 = fieldWeight in 4589, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.09375 = fieldNorm(doc=4589)
        0.0490556 = weight(_text_:systems in 4589) [ClassicSimilarity], result of:
          0.0490556 = score(doc=4589,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.4074492 = fieldWeight in 4589, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.09375 = fieldNorm(doc=4589)
      0.33333334 = coord(3/9)
    
    Abstract
    The author demonstrates how informetric analysis of information retrieval system content and use provides valuable insights that have applications for the modelling, design, and evaluation of information retrieval systems.
  17. Resource Description Framework (RDF) (2004) 0.06
    0.055449694 = product of:
      0.16634908 = sum of:
        0.09491582 = weight(_text_:applications in 3063) [ClassicSimilarity], result of:
          0.09491582 = score(doc=3063,freq=4.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.5503137 = fieldWeight in 3063, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0625 = fieldNorm(doc=3063)
        0.016935252 = weight(_text_:of in 3063) [ClassicSimilarity], result of:
          0.016935252 = score(doc=3063,freq=8.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.27643585 = fieldWeight in 3063, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=3063)
        0.054498006 = weight(_text_:software in 3063) [ClassicSimilarity], result of:
          0.054498006 = score(doc=3063,freq=2.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.35064998 = fieldWeight in 3063, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0625 = fieldNorm(doc=3063)
      0.33333334 = coord(3/9)
    
    Abstract
    The Resource Description Framework (RDF) integrates a variety of applications from library catalogs and world-wide directories to syndication and aggregation of news, software, and content to personal collections of music, photos, and events using XML as an interchange syntax. The RDF specifications provide a lightweight ontology system to support the exchange of knowledge on the Web. The W3C Semantic Web Activity Statement explains W3C's plans for RDF, including the RDF Core WG, Web Ontology and the RDF Interest Group.
    Content
    Specifications - Bookmarks (Intro * Articles) - Projects and Applications - Developer tools - Schemas - Related Technologies - Timeline
  18. Lauw, H.W.; Lim, E.-P.: Web social mining (2009) 0.05
    0.054078512 = product of:
      0.16223553 = sum of:
        0.10171671 = weight(_text_:applications in 3905) [ClassicSimilarity], result of:
          0.10171671 = score(doc=3905,freq=6.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.5897447 = fieldWeight in 3905, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3905)
        0.0128330635 = weight(_text_:of in 3905) [ClassicSimilarity], result of:
          0.0128330635 = score(doc=3905,freq=6.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.20947541 = fieldWeight in 3905, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3905)
        0.047685754 = weight(_text_:software in 3905) [ClassicSimilarity], result of:
          0.047685754 = score(doc=3905,freq=2.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.30681872 = fieldWeight in 3905, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3905)
      0.33333334 = coord(3/9)
    
    Abstract
    With increasing user presence in the Web and Web 2.0, Web social mining becomes an important and challenging task that finds a wide range of new applications relevant to e-commerce and social software. In this entry, we describe three Web social mining topics, namely, social network discovery, social network analysis, and social network applications. The essential concepts, models, and techniques of these Web social mining topics will be surveyed so as to establish the basic foundation for developing novel applications and for conducting research.
    Source
    Encyclopedia of library and information sciences. 3rd ed. Ed.: M.J. Bates
  19. Developments in applied artificial intelligence : proceedings / 16th International Conference on Industrial and Engineering Applications of Artificial Intelligence and Expert Systems, IEA/AIE 2003, Loughborough, UK, June 23 - 26, 2003 (2003) 0.05
    0.05399434 = product of:
      0.16198301 = sum of:
        0.10274939 = weight(_text_:applications in 441) [ClassicSimilarity], result of:
          0.10274939 = score(doc=441,freq=12.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.5957321 = fieldWeight in 441, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0390625 = fieldNorm(doc=441)
        0.009166474 = weight(_text_:of in 441) [ClassicSimilarity], result of:
          0.009166474 = score(doc=441,freq=6.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.1496253 = fieldWeight in 441, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=441)
        0.050067157 = weight(_text_:systems in 441) [ClassicSimilarity], result of:
          0.050067157 = score(doc=441,freq=12.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.41585106 = fieldWeight in 441, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0390625 = fieldNorm(doc=441)
      0.33333334 = coord(3/9)
    
    Abstract
    This book constitutes the refereed proceedings of the 16th International Conference on Industrial and Engineering Applications of Artificial Intelligence and Expert Systems, IEA/AIE 2003, held in Loughborough, UK in June 2003. The 81 revised full papers presented were carefully reviewed and selected from more than 140 submissions. Among the topics addressed are soft computing, fuzzy logic, diagnosis, knowledge representation, knowledge management, automated reasoning, machine learning, planning and scheduling, evolutionary computation, computer vision, agent systems, algorithmic learning, tutoring systems, financial analysis, etc.
    LCSH
    Artificial intelligence / Industrial applications / Congresses
    Expert systems (Computer science) / Industrial applications / Congresses
    Subject
    Artificial intelligence / Industrial applications / Congresses
    Expert systems (Computer science) / Industrial applications / Congresses
  20. MacFarlane, A.: On open source IR (2003) 0.05
    0.053804494 = product of:
      0.16141348 = sum of:
        0.018934188 = weight(_text_:of in 2010) [ClassicSimilarity], result of:
          0.018934188 = score(doc=2010,freq=10.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.3090647 = fieldWeight in 2010, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=2010)
        0.06540746 = weight(_text_:systems in 2010) [ClassicSimilarity], result of:
          0.06540746 = score(doc=2010,freq=8.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.5432656 = fieldWeight in 2010, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0625 = fieldNorm(doc=2010)
        0.07707182 = weight(_text_:software in 2010) [ClassicSimilarity], result of:
          0.07707182 = score(doc=2010,freq=4.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.49589399 = fieldWeight in 2010, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0625 = fieldNorm(doc=2010)
      0.33333334 = coord(3/9)
    
    Abstract
    Open source software development is becoming increasingly popular as a way of producing software, due to a number of factors. It is argued in this paper that these factors may have a significant impact on the future of information retrieval (IR) systems, and that it is desirable that these systems are made open to all. Some problems are outlined that may prevent the uptake of open source IR systems and a number of open source IR systems are described.
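    The top-level number on each entry combines its matching clauses with a coordination factor. As a hedged sketch, assuming Lucene's classic coord(overlap, maxOverlap) = overlap / maxOverlap, entry 20's score (3 of 9 query clauses matched, hence coord(3/9)) works out as:

    ```python
    # Entry 20 ("On open source IR"): the three clause weights from the
    # breakdown ("of", "systems", "software"), combined with coord(3/9)
    # as in Lucene's classic similarity (assumed formula).
    clause_weights = [0.018934188, 0.06540746, 0.07707182]
    coord = 3 / 9  # 3 of the 9 query clauses matched this document

    total = coord * sum(clause_weights)
    print(total)
    ```

    This reproduces both the intermediate sum (0.16141348) and the entry score (0.053804494) shown in the listing.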