Search (2034 results, page 1 of 102)

  • × year_i:[2000 TO 2010}
  1. Houston, R.D.; Harmon, E.G.: Re-envisioning the information concept : systematic definitions (2002) 0.17
    0.16634454 = product of:
      0.2495168 = sum of:
        0.20287897 = weight(_text_:systematic in 136) [ClassicSimilarity], result of:
          0.20287897 = score(doc=136,freq=4.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.71443415 = fieldWeight in 136, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0625 = fieldNorm(doc=136)
        0.04663783 = product of:
          0.09327566 = sum of:
            0.09327566 = weight(_text_:22 in 136) [ClassicSimilarity], result of:
              0.09327566 = score(doc=136,freq=6.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.536106 = fieldWeight in 136, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=136)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
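
    How to read these score breakdowns: they are Lucene "explain" trees for the ClassicSimilarity (TF-IDF) ranking model. Below is a minimal sketch, in Python, that recomputes this first entry's numbers from the components shown above. It assumes the ClassicSimilarity defaults tf(freq) = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)); queryNorm and the fieldNorm values are copied from the output rather than derived, the variable names are ours, and the results agree with the tree only up to floating-point rounding (Lucene computes in 32-bit floats).

      import math

      # Sketch: recompute entry 1's explain tree with ClassicSimilarity defaults.
      def tf(freq):
          return math.sqrt(freq)                        # term-frequency factor

      def idf(doc_freq, max_docs):                      # inverse document frequency
          return 1.0 + math.log(max_docs / (doc_freq + 1))

      query_norm = 0.049684696                          # copied from the explain output
      idf_sys = idf(395, 44218)                         # ~5.715473 for "systematic"
      idf_22  = idf(3622, 44218)                        # ~3.5018296 for "22"

      # clause score = queryWeight * fieldWeight
      #              = (idf * queryNorm) * (tf(freq) * idf * fieldNorm)
      w_sys = (idf_sys * query_norm) * (tf(4.0) * idf_sys * 0.0625)       # ~0.20287897
      w_22  = (idf_22 * query_norm) * (tf(6.0) * idf_22 * 0.0625) * 0.5   # nested coord(1/2)

      # document score = coord(matching clauses / total clauses) * sum of clause scores
      score = (2.0 / 3.0) * (w_sys + w_22)              # ~0.16634454
      print(f"{w_sys:.8f}  {w_22:.8f}  {score:.8f}")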
    
    Abstract
    This paper suggests a framework and systematic definitions for 6 words commonly used in the field of information science: data, information, knowledge, wisdom, inspiration, and intelligence. We intend these definitions to lead to a quantification of information science, a quantification that will enable their measurement, manipulation, and prediction.
    Date
    22. 2.2007 18:56:23
    22. 2.2007 19:22:13
  2. Koch, T.: Quality-controlled subject gateways : definitions, typologies, empirical overview (2000) 0.15
    0.15263343 = product of:
      0.22895013 = sum of:
        0.12552495 = weight(_text_:systematic in 631) [ClassicSimilarity], result of:
          0.12552495 = score(doc=631,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.44203353 = fieldWeight in 631, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0546875 = fieldNorm(doc=631)
        0.103425175 = sum of:
          0.05630404 = weight(_text_:indexing in 631) [ClassicSimilarity], result of:
            0.05630404 = score(doc=631,freq=2.0), product of:
              0.19018644 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.049684696 = queryNorm
              0.29604656 = fieldWeight in 631, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.0546875 = fieldNorm(doc=631)
          0.047121134 = weight(_text_:22 in 631) [ClassicSimilarity], result of:
            0.047121134 = score(doc=631,freq=2.0), product of:
              0.17398734 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049684696 = queryNorm
              0.2708308 = fieldWeight in 631, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=631)
      0.6666667 = coord(2/3)
    
    Abstract
    'Quality-controlled subject gateways' are Internet services which apply a rich set of quality measures to support systematic resource discovery. Considerable manual effort is used to secure a selection of resources which meet quality criteria and to display a rich description of these resources with standards-based metadata. Regular checking and updating ensure good collection management. A main goal is to provide a high quality of subject access through indexing resources using controlled vocabularies and by offering a deep classification structure for advanced searching and browsing. This article provides an initial empirical overview of existing services of this kind, their approaches and technologies, based on proposed working definitions and typologies of subject gateways.
    Date
    22. 6.2002 19:37:55
  3. Dousa, T.: Everything Old is New Again : Perspectivism and Polyhierarchy in Julius O. Kaiser's Theory of Systematic Indexing (2007) 0.15
    0.15238476 = product of:
      0.22857714 = sum of:
        0.17932136 = weight(_text_:systematic in 4835) [ClassicSimilarity], result of:
          0.17932136 = score(doc=4835,freq=8.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.6314765 = fieldWeight in 4835, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4835)
        0.049255773 = product of:
          0.09851155 = sum of:
            0.09851155 = weight(_text_:indexing in 4835) [ClassicSimilarity], result of:
              0.09851155 = score(doc=4835,freq=12.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.51797354 = fieldWeight in 4835, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4835)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    In the early years of the 20th century, Julius Otto Kaiser (1868-1927), a special librarian and indexer of technical literature, developed a method of knowledge organization (KO) known as systematic indexing. Certain elements of the method (its stipulation that all indexing terms be divided into the fundamental categories "concretes", "countries", and "processes", which are then to be synthesized into indexing "statements" formulated according to strict rules of citation order) have long been recognized as precursors to key principles of the theory of faceted classification. However, other, less well-known elements of the method may prove no less interesting to practitioners of KO. In particular, two aspects of systematic indexing seem to prefigure current trends in KO: (1) a perspectivist outlook that rejects universal classifications in favor of information organization systems customized to reflect local needs, and (2) the incorporation of index terms extracted from source documents into a polyhierarchical taxonomical structure. Kaiser's perspectivism anticipates postmodern theories of KO, while his principled use of polyhierarchy to organize terms derived from the language of source documents provides a potentially fruitful model that can inform current discussions about harvesting natural-language terms, such as tags, and incorporating them into a flexibly structured controlled vocabulary.
    Object
    Kaiser systematic indexing
  4. Dousa, T.M.: Empirical observation, rational structures, and pragmatist aims : epistemology and method in Julius Otto Kaiser's theory of systematic indexing (2008) 0.15
    0.15210077 = product of:
      0.22815116 = sum of:
        0.18635625 = weight(_text_:systematic in 2508) [ClassicSimilarity], result of:
          0.18635625 = score(doc=2508,freq=6.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.6562497 = fieldWeight in 2508, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.046875 = fieldNorm(doc=2508)
        0.041794907 = product of:
          0.083589815 = sum of:
            0.083589815 = weight(_text_:indexing in 2508) [ClassicSimilarity], result of:
              0.083589815 = score(doc=2508,freq=6.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.4395151 = fieldWeight in 2508, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2508)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Content
    Hjørland's typology of the epistemological positions underlying methods for designing KO systems recognizes four basic epistemological positions: empiricism, rationalism, historicism, and pragmatism. Application of this typology to close analysis of Julius Otto Kaiser's theory of systematic indexing shows that his epistemological and methodological positions were hybrid in nature. Kaiser's epistemology was primarily empiricist and pragmatist in nature, whereas his methodology was pragmatist in aim but rationalist in mechanics. Unexpected synergy between the pragmatist and rationalist elements of Kaiser's methodology is evidenced by his stated motivations for the admission of polyhierarchy into syndetic structure. The application of Hjørland's typology to similar analyses of other KO systems may uncover other cases of epistemological-methodological eclecticism and synergy.
    Object
    Kaiser systematic indexing
  5. Anderson, J.D.; Pérez-Carballo, J.: Library of Congress Subject Headings (LCSH) (2009) 0.13
    0.13082865 = product of:
      0.19624296 = sum of:
        0.10759281 = weight(_text_:systematic in 3837) [ClassicSimilarity], result of:
          0.10759281 = score(doc=3837,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.3788859 = fieldWeight in 3837, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.046875 = fieldNorm(doc=3837)
        0.08865015 = sum of:
          0.048260607 = weight(_text_:indexing in 3837) [ClassicSimilarity], result of:
            0.048260607 = score(doc=3837,freq=2.0), product of:
              0.19018644 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.049684696 = queryNorm
              0.2537542 = fieldWeight in 3837, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.046875 = fieldNorm(doc=3837)
          0.04038954 = weight(_text_:22 in 3837) [ClassicSimilarity], result of:
            0.04038954 = score(doc=3837,freq=2.0), product of:
              0.17398734 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049684696 = queryNorm
              0.23214069 = fieldWeight in 3837, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=3837)
      0.6666667 = coord(2/3)
    
    Abstract
    Library of Congress Subject Headings (LCSH), which celebrated its 100th birthday in 1998, is the largest cataloging and indexing language in the world for indicating the topics and formats of books and similar publications. It consists of a controlled list of main headings, many with subdivisions, with a rich system of cross references. It is supported by the U.S. government and undergoes systematic revision. In recent decades its managers have begun to confront challenges such as biased terminology, complicated syntax (how terms are put together to form headings), and effective displays in electronic media. Many suggestions have been made for its improvement, including moving to a fully faceted system.
    Date
    27. 8.2011 14:22:13
  6. Chen, H.-H.; Lin, W.-C.; Yang, C.; Lin, W.-H.: Translating-transliterating named entities for multilingual information access (2006) 0.10
    0.09939035 = product of:
      0.14908552 = sum of:
        0.12552495 = weight(_text_:systematic in 1080) [ClassicSimilarity], result of:
          0.12552495 = score(doc=1080,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.44203353 = fieldWeight in 1080, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1080)
        0.023560567 = product of:
          0.047121134 = sum of:
            0.047121134 = weight(_text_:22 in 1080) [ClassicSimilarity], result of:
              0.047121134 = score(doc=1080,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.2708308 = fieldWeight in 1080, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1080)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Named entities are major constituents of a document but are usually unknown words. This work proposes a systematic way of dealing with formulation, transformation, translation, and transliteration of multilingual named entities. The rules and similarity matrices for translation and transliteration are learned automatically from parallel named-entity corpora. The results are applied in cross-language access to collections of images with captions. Experimental results demonstrate that the similarity-based transliteration of named entities is effective, and runs in which transliteration is considered outperform the runs in which it is neglected.
    Date
    4. 6.2006 19:52:22
  7. Yoon, Y.; Lee, C.; Lee, G.G.: An effective procedure for constructing a hierarchical text classification system (2006) 0.10
    0.09939035 = product of:
      0.14908552 = sum of:
        0.12552495 = weight(_text_:systematic in 5273) [ClassicSimilarity], result of:
          0.12552495 = score(doc=5273,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.44203353 = fieldWeight in 5273, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5273)
        0.023560567 = product of:
          0.047121134 = sum of:
            0.047121134 = weight(_text_:22 in 5273) [ClassicSimilarity], result of:
              0.047121134 = score(doc=5273,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.2708308 = fieldWeight in 5273, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5273)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    In text categorization tasks, classification over a class hierarchy often gives better results than classification without one. Because large document collections are typically divided into several subgroups within a hierarchy, a hierarchical classification method can be applied appropriately. However, there has been no systematic method for building a hierarchical classification system that performs well on large collections of practical data. In this article, we introduce a new evaluation scheme for internal-node classifiers, which can be used effectively to develop a hierarchical classification system. We also show that our method for constructing the hierarchical classification system is very effective, especially for classifiers applied to a hierarchy tree with many levels.
    Date
    22. 7.2006 16:24:52
  8. Aalberg, T.; Haugen, F.B.; Husby, O.: A Tool for Converting from MARC to FRBR (2006) 0.10
    0.09939035 = product of:
      0.14908552 = sum of:
        0.12552495 = weight(_text_:systematic in 2425) [ClassicSimilarity], result of:
          0.12552495 = score(doc=2425,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.44203353 = fieldWeight in 2425, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2425)
        0.023560567 = product of:
          0.047121134 = sum of:
            0.047121134 = weight(_text_:22 in 2425) [ClassicSimilarity], result of:
              0.047121134 = score(doc=2425,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.2708308 = fieldWeight in 2425, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2425)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The FRBR model is considered by many to be an important contribution to the next generation of bibliographic catalogues, but a major challenge for the library community is how to apply this model to existing MARC-based bibliographic catalogues. This problem requires a solution for the interpretation and conversion of MARC records, and a tool for this kind of conversion has been developed as part of the Norwegian BIBSYS FRBR project. The tool is based on a systematic approach to the interpretation and conversion process and is designed to be adaptable to the rules applied in different catalogues.
    Source
    Research and advanced technology for digital libraries : 10th European conference, proceedings / ECDL 2006, Alicante, Spain, September 17 - 22, 2006
  9. Johnson, E.H.: Distributed thesaurus Web services (2004) 0.09
    0.08781542 = product of:
      0.13172312 = sum of:
        0.10759281 = weight(_text_:systematic in 4863) [ClassicSimilarity], result of:
          0.10759281 = score(doc=4863,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.3788859 = fieldWeight in 4863, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.046875 = fieldNorm(doc=4863)
        0.024130303 = product of:
          0.048260607 = sum of:
            0.048260607 = weight(_text_:indexing in 4863) [ClassicSimilarity], result of:
              0.048260607 = score(doc=4863,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.2537542 = fieldWeight in 4863, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4863)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The World Wide Web and the use of HTML-based information displays have greatly increased access to online information sources, but at the same time limit the ways in which they can be used. By the same token, Web-based indexing and search engines give us access to the full text of online documents, but make it difficult to access them in any kind of organized, systematic way. For years before the advent of the Internet, lexicographers built well-structured subject thesauri to organize large collections of documents. These have since been converted into electronic form and even put online, but in ways that are largely uncoordinated and not useful for searching. This paper describes some of the ways in which XML-based Web services could be used to coordinate subject thesauri and other online vocabulary sources to create a "Thesauro-Web" that could be used by both searchers and indexers to improve subject access on the Internet.
  10. Parent, I.: IFLA Section on Cataloguing: "Why in the World?" (2000) 0.09
    0.08519173 = product of:
      0.12778759 = sum of:
        0.10759281 = weight(_text_:systematic in 188) [ClassicSimilarity], result of:
          0.10759281 = score(doc=188,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.3788859 = fieldWeight in 188, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.046875 = fieldNorm(doc=188)
        0.02019477 = product of:
          0.04038954 = sum of:
            0.04038954 = weight(_text_:22 in 188) [ClassicSimilarity], result of:
              0.04038954 = score(doc=188,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.23214069 = fieldWeight in 188, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=188)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The Bibliographic Control Division of the International Federation of Library Associations and Institutions (IFLA) consists of three sections: bibliography, cataloguing, and classification. The cataloguing section, which focuses on descriptive cataloguing, is one of the oldest within IFLA, having been founded in 1935 as the IFLA Committee on Uniform Cataloguing Rules. It became the Committee on Cataloguing in 1970. The committee played a key role in planning and convening the International Conference on Cataloguing Principles held in Paris in 1961 and the International Meeting of Cataloguing Experts held in Copenhagen in 1969. The Copenhagen conference provided the impetus to develop the International Standard Bibliographic Descriptions (ISBD). The Committee on Cataloguing established a systematic process for the revision of the ISBDs. The cataloguing section focuses on traditional cataloguing standards and on the impact of electronic resources and technology on these standards. The section has initiated several projects at the international level to facilitate access to information.
    Date
    10. 9.2000 17:38:22
  11. Kapterev, A.I.: Governing the professional and intellectual potential of a modern organization : sociologic approach (2006) 0.09
    0.08519173 = product of:
      0.12778759 = sum of:
        0.10759281 = weight(_text_:systematic in 402) [ClassicSimilarity], result of:
          0.10759281 = score(doc=402,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.3788859 = fieldWeight in 402, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.046875 = fieldNorm(doc=402)
        0.02019477 = product of:
          0.04038954 = sum of:
            0.04038954 = weight(_text_:22 in 402) [ClassicSimilarity], result of:
              0.04038954 = score(doc=402,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.23214069 = fieldWeight in 402, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=402)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Governing the professional and intellectual potential is an interdisciplinary field of scientific research that uses a systematic process of developing innovation technologies to transform individual knowledge and specialists' experience so that they can be applied to the processes, services, and products offered by an organization to reach its strategic goals. From the technological standpoint, governing the professional and intellectual potential represents modeling, forming, using, and developing the corporate system of governing the professional and intellectual potential. We consider structuring knowledge using this model to be particularly valuable during the stage of forming the governance system of professional and intellectual potential. Understanding, i.e., explicitly defining, these factors would allow for constant observation of behavioral trends and for organizing activity in a way conducive to favorable change in these factors. In addition, the presence of the critical management factor (CMF) system enables one to check the significance of any activity (i.e., any processes within a company) against these factors.
    Date
    11. 3.2007 14:22:28
  12. Klas, C.-P.; Fuhr, N.; Schaefer, A.: Evaluating strategic support for information access in the DAFFODIL system (2004) 0.09
    0.08519173 = product of:
      0.12778759 = sum of:
        0.10759281 = weight(_text_:systematic in 2419) [ClassicSimilarity], result of:
          0.10759281 = score(doc=2419,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.3788859 = fieldWeight in 2419, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.046875 = fieldNorm(doc=2419)
        0.02019477 = product of:
          0.04038954 = sum of:
            0.04038954 = weight(_text_:22 in 2419) [ClassicSimilarity], result of:
              0.04038954 = score(doc=2419,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.23214069 = fieldWeight in 2419, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2419)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The digital library system Daffodil is targeted at strategic support of users during the information search process. For searching, exploring and managing digital library objects it provides user-customisable information seeking patterns over a federation of heterogeneous digital libraries. In this paper evaluation results with respect to retrieval effectiveness, efficiency and user satisfaction are presented. The analysis focuses on strategic support for the scientific work-flow. Daffodil supports the whole work-flow, from data-source selection through information seeking to the representation, organisation and reuse of information. By embedding high-level search functionality into the scientific work-flow, the user experiences better strategic system support due to a more systematic work process. These ideas have been implemented in Daffodil and evaluated qualitatively. The evaluation was conducted with 28 participants, ranging from information-seeking novices to experts. The results are promising, as they support the chosen model.
    Date
    16.11.2008 16:22:48
  13. White, M.D.; Marsh, E.E.: Content analysis : a flexible methodology (2006) 0.09
    0.08519173 = product of:
      0.12778759 = sum of:
        0.10759281 = weight(_text_:systematic in 5589) [ClassicSimilarity], result of:
          0.10759281 = score(doc=5589,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.3788859 = fieldWeight in 5589, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.046875 = fieldNorm(doc=5589)
        0.02019477 = product of:
          0.04038954 = sum of:
            0.04038954 = weight(_text_:22 in 5589) [ClassicSimilarity], result of:
              0.04038954 = score(doc=5589,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.23214069 = fieldWeight in 5589, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5589)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Content analysis is a highly flexible research method that has been widely used in library and information science (LIS) studies with varying research goals and objectives. The research method is applied in qualitative, quantitative, and sometimes mixed modes of research frameworks and employs a wide range of analytical techniques to generate findings and put them into context. This article characterizes content analysis as a systematic, rigorous approach to analyzing documents obtained or generated in the course of research. It briefly describes the steps involved in content analysis, differentiates between quantitative and qualitative content analysis, and shows that content analysis serves the purposes of both quantitative research and qualitative research. The authors draw on selected LIS studies that have used content analysis to illustrate the concepts addressed in the article. The article also serves as a gateway to methodological books and articles that provide more detail about aspects of content analysis discussed only briefly in the article.
    Source
    Library trends. 55(2006) no.1, S.22-45
  14. Hlava, M.M.K.: Automatic indexing : comparing rule-based and statistics-based indexing systems (2005) 0.08
    0.08449805 = product of:
      0.25349414 = sum of:
        0.25349414 = sum of:
          0.15925187 = weight(_text_:indexing in 6265) [ClassicSimilarity], result of:
            0.15925187 = score(doc=6265,freq=4.0), product of:
              0.19018644 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.049684696 = queryNorm
              0.8373461 = fieldWeight in 6265, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.109375 = fieldNorm(doc=6265)
          0.09424227 = weight(_text_:22 in 6265) [ClassicSimilarity], result of:
            0.09424227 = score(doc=6265,freq=2.0), product of:
              0.17398734 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049684696 = queryNorm
              0.5416616 = fieldWeight in 6265, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.109375 = fieldNorm(doc=6265)
      0.33333334 = coord(1/3)
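
    This entry matches only one of the three top-level query clauses, so the outer coordination factor drops to coord(1/3). Continuing the sketch given after entry 1, with the two clause scores copied from the tree above:

      # coord(m, n) = m/n down-weights documents matching only m of n query clauses
      doc_score = (1.0 / 3.0) * (0.15925187 + 0.09424227)   # "indexing" + "22" clauses
      print(round(doc_score, 8))                            # 0.08449805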
    
    Source
    Information outlook. 9(2005) no.8, S.22-23
  15. Ross, J.: The impact of technology on indexing (2000) 0.08
    0.078800134 = product of:
      0.2364004 = sum of:
        0.2364004 = sum of:
          0.12869495 = weight(_text_:indexing in 263) [ClassicSimilarity], result of:
            0.12869495 = score(doc=263,freq=2.0), product of:
              0.19018644 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.049684696 = queryNorm
              0.6766778 = fieldWeight in 263, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.125 = fieldNorm(doc=263)
          0.10770545 = weight(_text_:22 in 263) [ClassicSimilarity], result of:
            0.10770545 = score(doc=263,freq=2.0), product of:
              0.17398734 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049684696 = queryNorm
              0.61904186 = fieldWeight in 263, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.125 = fieldNorm(doc=263)
      0.33333334 = coord(1/3)
    
    Source
    Indexer. 22(2000) no.1, S.25-26
  16. Nicolaisen, J.: Citation analysis (2007) 0.08
    0.078800134 = product of:
      0.2364004 = sum of:
        0.2364004 = sum of:
          0.12869495 = weight(_text_:indexing in 6091) [ClassicSimilarity], result of:
            0.12869495 = score(doc=6091,freq=2.0), product of:
              0.19018644 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.049684696 = queryNorm
              0.6766778 = fieldWeight in 6091, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.125 = fieldNorm(doc=6091)
          0.10770545 = weight(_text_:22 in 6091) [ClassicSimilarity], result of:
            0.10770545 = score(doc=6091,freq=2.0), product of:
              0.17398734 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049684696 = queryNorm
              0.61904186 = fieldWeight in 6091, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.125 = fieldNorm(doc=6091)
      0.33333334 = coord(1/3)
    
    Date
    13. 7.2008 19:53:22
    Theme
    Citation indexing
  17. Walker, A.: Indexing commonplace books : John Locke's method (2001) 0.08
    0.078800134 = product of:
      0.2364004 = sum of:
        0.2364004 = sum of:
          0.12869495 = weight(_text_:indexing in 13) [ClassicSimilarity], result of:
            0.12869495 = score(doc=13,freq=2.0), product of:
              0.19018644 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.049684696 = queryNorm
              0.6766778 = fieldWeight in 13, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.125 = fieldNorm(doc=13)
          0.10770545 = weight(_text_:22 in 13) [ClassicSimilarity], result of:
            0.10770545 = score(doc=13,freq=2.0), product of:
              0.17398734 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049684696 = queryNorm
              0.61904186 = fieldWeight in 13, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.125 = fieldNorm(doc=13)
      0.33333334 = coord(1/3)
    
    Source
    Indexer. 22(2001) no.3, S.14-18
  18. Matthews, D.: Indexing published letters (2001) 0.08
    0.078800134 = product of:
      0.2364004 = sum of:
        0.2364004 = sum of:
          0.12869495 = weight(_text_:indexing in 4160) [ClassicSimilarity], result of:
            0.12869495 = score(doc=4160,freq=2.0), product of:
              0.19018644 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.049684696 = queryNorm
              0.6766778 = fieldWeight in 4160, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.125 = fieldNorm(doc=4160)
          0.10770545 = weight(_text_:22 in 4160) [ClassicSimilarity], result of:
            0.10770545 = score(doc=4160,freq=2.0), product of:
              0.17398734 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049684696 = queryNorm
              0.61904186 = fieldWeight in 4160, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.125 = fieldNorm(doc=4160)
      0.33333334 = coord(1/3)
    
    Source
    Indexer. 22(2001) no.3, S.135-141
  19. Survey of text mining : clustering, classification, and retrieval (2004) 0.08
    0.07873234 = product of:
      0.11809851 = sum of:
        0.08966068 = weight(_text_:systematic in 804) [ClassicSimilarity], result of:
          0.08966068 = score(doc=804,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.31573826 = fieldWeight in 804, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=804)
        0.028437834 = product of:
          0.05687567 = sum of:
            0.05687567 = weight(_text_:indexing in 804) [ClassicSimilarity], result of:
              0.05687567 = score(doc=804,freq=4.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.29905218 = fieldWeight in 804, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=804)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Extracting content from text continues to be an important research problem for information processing and management. Approaches to capture the semantics of text-based document collections may be based on Bayesian models, probability theory, vector space models, statistical models, or even graph theory. As the volume of digitized textual media continues to grow, so does the need for designing robust, scalable indexing and search strategies (software) to meet a variety of user needs. Knowledge extraction or creation from text requires systematic yet reliable processing that can be codified and adapted for changing needs and environments. This book will draw upon experts in both academia and industry to recommend practical approaches to the purification, indexing, and mining of textual information. It will address document identification, clustering and categorizing documents, cleaning text, and visualizing semantic models of text.
  20. Leide, J.E.; Large, A.; Beheshti, J.; Brooks, M.; Cole, C.: Visualization schemes for domain novices exploring a topic space : the navigation classification scheme (2003) 0.07
    0.07317951 = product of:
      0.10976927 = sum of:
        0.08966068 = weight(_text_:systematic in 1078) [ClassicSimilarity], result of:
          0.08966068 = score(doc=1078,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.31573826 = fieldWeight in 1078, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1078)
        0.020108584 = product of:
          0.04021717 = sum of:
            0.04021717 = weight(_text_:indexing in 1078) [ClassicSimilarity], result of:
              0.04021717 = score(doc=1078,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.21146181 = fieldWeight in 1078, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1078)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    In this article and two other articles which conceptualize a future stage of the research program (Leide, Cole, Large, & Beheshti, submitted for publication; Cole, Leide, Large, Beheshti, & Brooks, in preparation), we map out a domain novice user's encounter with an IR system from beginning to end so that appropriate classification-based visualization schemes can be inserted into the encounter process. This article describes the visualization of a navigation classification scheme only. The navigation classification scheme uses the metaphor of a ship and ship's navigator traveling through charted (but unknown to the user) waters, guided by a series of lighthouses. The lighthouses contain mediation interfaces linking the user to the information store through agents created for each. The user's agent is the cognitive model the user has of the information space, which the system encourages to evolve via interaction with the system's agent. The system's agent is an evolving classification scheme created by professional indexers to represent the structure of the information store. We propose a more systematic, multidimensional approach to creating evolving classification/indexing schemes, based on where the user is and what she is trying to do at that moment during the search session.

Types

  • a 1706
  • m 215
  • el 111
  • s 80
  • b 27
  • x 20
  • i 10
  • r 5
  • p 4
  • n 3
