Search (280 results, page 1 of 14)

  • Filter: type_ss:"el"
  1. Dousa, T.: Everything Old is New Again : Perspectivism and Polyhierarchy in Julius O. Kaiser's Theory of Systematic Indexing (2007) 0.15
    0.15238476 = product of:
      0.22857714 = sum of:
        0.17932136 = weight(_text_:systematic in 4835) [ClassicSimilarity], result of:
          0.17932136 = score(doc=4835,freq=8.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.6314765 = fieldWeight in 4835, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4835)
        0.049255773 = product of:
          0.09851155 = sum of:
            0.09851155 = weight(_text_:indexing in 4835) [ClassicSimilarity], result of:
              0.09851155 = score(doc=4835,freq=12.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.51797354 = fieldWeight in 4835, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4835)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    In the early years of the 20th century, Julius Otto Kaiser (1868-1927), a special librarian and indexer of technical literature, developed a method of knowledge organization (KO) known as systematic indexing. Certain elements of the method (its stipulation that all indexing terms be divided into the fundamental categories "concretes", "countries", and "processes", which are then to be synthesized into indexing "statements" formulated according to strict rules of citation order) have long been recognized as precursors to key principles of the theory of faceted classification. However, other, less well-known elements of the method may prove no less interesting to practitioners of KO. In particular, two aspects of systematic indexing seem to prefigure current trends in KO: (1) a perspectivist outlook that rejects universal classifications in favor of information organization systems customized to reflect local needs and (2) the incorporation of index terms extracted from source documents into a polyhierarchical taxonomical structure. Kaiser's perspectivism anticipates postmodern theories of KO, while his principled use of polyhierarchy to organize terms derived from the language of source documents provides a potentially fruitful model that can inform current discussions about harvesting natural-language terms, such as tags, and incorporating them into a flexibly structured controlled vocabulary.
    Object
    Kaiser systematic indexing
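The explanation trees in these results follow Lucene's ClassicSimilarity (classic TF-IDF): tf(freq) = sqrt(freq), fieldWeight = tf * idf * fieldNorm, and each matching clause contributes queryWeight * fieldWeight, scaled by a coord factor for the fraction of query clauses matched. A minimal sketch reproducing the first result's 0.15238476 from the constants shown in the tree above (variable names are illustrative):

```python
import math

# Reproducing the first result's score from the constants in the explanation
# tree above (Lucene ClassicSimilarity, i.e. classic TF-IDF). All numeric
# constants are copied from the tree; variable names are illustrative.
query_norm = 0.049684696
idf = 5.715473           # idf(docFreq=395, maxDocs=44218) for "systematic"
freq = 8.0               # termFreq of "systematic" in doc 4835
field_norm = 0.0390625   # length normalization stored for the field

tf = math.sqrt(freq)                  # tf(freq) = sqrt(freq) = 2.828427
query_weight = idf * query_norm       # 0.28397155
field_weight = tf * idf * field_norm  # 0.6314765
weight = query_weight * field_weight  # 0.17932136 (weight of _text_:systematic)

# The displayed 0.15238476 adds the "indexing" clause weight (0.049255773)
# and applies coord(2/3), since 2 of the 3 query clauses matched:
total = (weight + 0.049255773) * (2.0 / 3.0)
```

Every other score tree on this page decomposes the same way; only the frequencies, idf values and field norms differ per document.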
  2. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.11
    0.11449196 = product of:
      0.17173794 = sum of:
        0.13152078 = product of:
          0.39456233 = sum of:
            0.39456233 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.39456233 = score(doc=1826,freq=2.0), product of:
                0.4212274 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.049684696 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.33333334 = coord(1/3)
        0.04021717 = product of:
          0.08043434 = sum of:
            0.08043434 = weight(_text_:indexing in 1826) [ClassicSimilarity], result of:
              0.08043434 = score(doc=1826,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.42292362 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Source
    http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
  3. Suominen, O.; Koskenniemi, I.: Annif Analyzer Shootout : comparing text lemmatization methods for automated subject indexing (2022) 0.08
    0.08299318 = product of:
      0.12448977 = sum of:
        0.08966068 = weight(_text_:systematic in 658) [ClassicSimilarity], result of:
          0.08966068 = score(doc=658,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.31573826 = fieldWeight in 658, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=658)
        0.03482909 = product of:
          0.06965818 = sum of:
            0.06965818 = weight(_text_:indexing in 658) [ClassicSimilarity], result of:
              0.06965818 = score(doc=658,freq=6.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.3662626 = fieldWeight in 658, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=658)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Automated text classification is an important function for many AI systems relevant to libraries, including automated subject indexing and classification. When implemented using the traditional natural language processing (NLP) paradigm, one key part of the process is the normalization of words using stemming or lemmatization, which reduces the amount of linguistic variation and often improves the quality of classification. In this paper, we compare the output of seven different text lemmatization algorithms as well as two baseline methods. We measure how the choice of method affects the quality of text classification using example corpora in three languages. The experiments were performed using the open source Annif toolkit for automated subject indexing and classification, but should also generalize to other NLP toolkits and similar text classification tasks. The results show that lemmatization methods in most cases outperform baseline methods in text classification, particularly for Finnish and Swedish text, but not for English, where baseline methods are most effective. The differences between lemmatization methods are quite small. The systematic comparison will help optimize text classification pipelines and inform the further development of the Annif toolkit to incorporate a wider choice of normalization methods.
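The normalization step this abstract benchmarks can be illustrated with a toy suffix-stripping stemmer. This sketch is not one of the seven algorithms the paper compares (it is purely illustrative); it only shows how normalization collapses inflected variants into one form before classification:

```python
# A toy suffix-stripping stemmer illustrating the normalization step the
# abstract benchmarks. NOT one of the seven algorithms the paper compares;
# it only shows how normalization collapses inflected variants before
# classification. Longer suffixes are tried first.
SUFFIXES = ("ization", "ations", "ation", "ings", "ing",
            "ies", "ers", "es", "ed", "er", "s")

def naive_stem(token: str) -> str:
    """Lowercase a token and strip the first matching suffix."""
    token = token.lower()
    for suffix in SUFFIXES:
        if token.endswith(suffix) and len(token) - len(suffix) >= 3:
            return token[: -len(suffix)]
    return token

# "Indexing", "indexed" and "indexes" all normalize to the same stem,
# reducing the linguistic variation the classifier has to cope with.
docs = ["Indexing libraries", "indexed library texts"]
normalized = [[naive_stem(tok) for tok in doc.split()] for doc in docs]
```

Real pipelines use language-aware stemmers or lemmatizers instead; the paper's point is that for morphologically rich languages such as Finnish this step matters far more than for English.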
  4. Seeliger, F.: ¬A tool for systematic visualization of controlled descriptors and their relation to others as a rich context for a discovery system (2015) 0.08
    0.0783509 = product of:
      0.11752635 = sum of:
        0.10143948 = weight(_text_:systematic in 2547) [ClassicSimilarity], result of:
          0.10143948 = score(doc=2547,freq=4.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.35721707 = fieldWeight in 2547, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.03125 = fieldNorm(doc=2547)
        0.016086869 = product of:
          0.032173738 = sum of:
            0.032173738 = weight(_text_:indexing in 2547) [ClassicSimilarity], result of:
              0.032173738 = score(doc=2547,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.16916946 = fieldWeight in 2547, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2547)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The discovery service (a search engine and service called WILBERT) used at our library at the Technical University of Applied Sciences Wildau (TUAS Wildau) comprises more than 8 million items. If we were to record all licensed publications in this tool down to the level of articles, including their bibliographic records and full texts, we would have a holding estimated at a hundred million documents. Many features, such as ranking, autocompletion, multi-faceted classification, and refinement options, reduce the number of hits. However, this is not enough to give intuitive support for a systematic overview of topics related to documents in the library. John Naisbitt once said: "We are drowning in information, but starving for knowledge." This quote is still very true today. Two years ago, we started to develop micro thesauri for MINT (the German equivalent of STEM) topics in order to enable advanced indexing of the library stock. We use iQvoc as a vocabulary management system to create the thesaurus. It provides an easy-to-use browser interface that builds a SKOS thesaurus in the background. The purpose of this is to integrate the thesauri into WILBERT in order to offer a better subject-related search. This approach especially supports first-year students by giving them the possibility to browse through a hierarchical alignment of a subject, for instance logistics or computer science, and thereby discover how the terms are related. It also gives students insight into established abbreviations and alternative labels. Students at the TUAS Wildau were involved in the development process of the software regarding the interface and functionality of iQvoc. The first steps have been taken and involve the inclusion of 3000 terms in our discovery tool WILBERT.
  5. Koch, T.; Ardö, A.; Brümmer, A.: ¬The building and maintenance of robot based internet search services : A review of current indexing and data collection methods. Prepared to meet the requirements of Work Package 3 of EU Telematics for Research, project DESIRE. Version D3.11v0.3 (Draft version 3) (1996) 0.07
    0.07179992 = product of:
      0.10769987 = sum of:
        0.07172854 = weight(_text_:systematic in 1669) [ClassicSimilarity], result of:
          0.07172854 = score(doc=1669,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.2525906 = fieldWeight in 1669, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.03125 = fieldNorm(doc=1669)
        0.035971332 = product of:
          0.071942665 = sum of:
            0.071942665 = weight(_text_:indexing in 1669) [ClassicSimilarity], result of:
              0.071942665 = score(doc=1669,freq=10.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.3782744 = fieldWeight in 1669, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1669)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    After a short outline of the problems, possibilities and difficulties of systematic information retrieval on the Internet, and a description of development efforts in this area, a specification of the terminology for this report is required. Although the process of retrieval is generally seen as an iterative process of browsing and information retrieval, and several important services on the net have taken this fact into consideration, the emphasis of this report lies on the general retrieval tools for the whole of the Internet. In order to evaluate the differences, possibilities and restrictions of the different services, it is necessary to begin by organizing the existing varieties in a typological/taxonomical survey. The possibilities and weaknesses of the most important services are briefly compared and described in the categories robot-based WWW catalogues of different types, list- or form-based catalogues, and simultaneous or collected search services respectively. For various reasons, however, it will not be possible to rank them in order of "best" services. Still more important are the weaknesses and problems common to all attempts at indexing the Internet. The problems of the quality of the input, the technical performance and the general problem of indexing virtual hypertext are shown to be at least as difficult as the different aspects of harvesting, indexing and information retrieval. Some of the attempts made in the area of further development of retrieval services are mentioned in relation to descriptions of document contents and standardization efforts. Internet harvesting and indexing technology and retrieval software are thoroughly reviewed. Details about all services and software are listed in analytical forms in Annexes 1-3.
  6. Koch, T.: Searching the Web : systematic overview over indexes (1995) 0.07
    0.07172854 = product of:
      0.21518563 = sum of:
        0.21518563 = weight(_text_:systematic in 3169) [ClassicSimilarity], result of:
          0.21518563 = score(doc=3169,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.7577718 = fieldWeight in 3169, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.09375 = fieldNorm(doc=3169)
      0.33333334 = coord(1/3)
    
  7. Van der Veer Martens, B.: Do citation systems represent theories of truth? (2001) 0.06
    0.058544468 = product of:
      0.1756334 = sum of:
        0.1756334 = sum of:
          0.08043434 = weight(_text_:indexing in 3925) [ClassicSimilarity], result of:
            0.08043434 = score(doc=3925,freq=2.0), product of:
              0.19018644 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.049684696 = queryNorm
              0.42292362 = fieldWeight in 3925, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.078125 = fieldNorm(doc=3925)
          0.09519906 = weight(_text_:22 in 3925) [ClassicSimilarity], result of:
            0.09519906 = score(doc=3925,freq=4.0), product of:
              0.17398734 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049684696 = queryNorm
              0.54716086 = fieldWeight in 3925, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=3925)
      0.33333334 = coord(1/3)
    
    Date
    22. 7.2006 15:22:28
    Theme
    Citation indexing
  8. Bensman, S.J.: Eugene Garfield, Francis Narin, and PageRank : the theoretical bases of the Google search engine (2013) 0.06
    0.055101942 = product of:
      0.16530582 = sum of:
        0.16530582 = sum of:
          0.11145309 = weight(_text_:indexing in 1149) [ClassicSimilarity], result of:
            0.11145309 = score(doc=1149,freq=6.0), product of:
              0.19018644 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.049684696 = queryNorm
              0.5860202 = fieldWeight in 1149, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.0625 = fieldNorm(doc=1149)
          0.053852726 = weight(_text_:22 in 1149) [ClassicSimilarity], result of:
            0.053852726 = score(doc=1149,freq=2.0), product of:
              0.17398734 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049684696 = queryNorm
              0.30952093 = fieldWeight in 1149, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=1149)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper presents a test of the validity of using Google Scholar to evaluate the publications of researchers, by comparing the premises on which its search engine's ranking algorithm, PageRank, is based to those of Garfield's theory of citation indexing. It finds that the premises are identical and that PageRank and Garfield's theory of citation indexing validate each other.
    Date
    17.12.2013 11:02:22
    Theme
    Citation indexing
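The shared premise this abstract identifies in PageRank and Garfield's citation indexing (a work's importance derives recursively from the importance of the works citing it) can be sketched as power iteration on a toy citation graph. The graph and damping factor below are illustrative, not taken from the paper:

```python
# A toy power-iteration PageRank over a 4-paper citation graph, sketching
# the recursive "importance flows from important citers" premise. The graph
# and the damping factor d are illustrative, not taken from the paper.
links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}  # paper -> papers it cites
n, d = 4, 0.85
rank = [1.0 / n] * n                  # start from a uniform distribution

for _ in range(50):                   # power iteration until (near) convergence
    new = [(1.0 - d) / n] * n         # teleportation share for every paper
    for src, cited in links.items():
        share = d * rank[src] / len(cited)
        for dst in cited:
            new[dst] += share         # citing paper passes on its importance
    rank = new

# Paper 2, cited by three others, ends up with the highest rank; paper 3,
# cited by none, keeps only the teleportation minimum.
```

Replacing "cites" with "links to" gives the web-graph formulation that Google applies, which is the structural parallel the paper draws.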
  9. Guidi, F.; Sacerdoti Coen, C.: ¬A survey on retrieval of mathematical knowledge (2015) 0.05
    0.04925008 = product of:
      0.14775024 = sum of:
        0.14775024 = sum of:
          0.08043434 = weight(_text_:indexing in 5865) [ClassicSimilarity], result of:
            0.08043434 = score(doc=5865,freq=2.0), product of:
              0.19018644 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.049684696 = queryNorm
              0.42292362 = fieldWeight in 5865, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.078125 = fieldNorm(doc=5865)
          0.06731591 = weight(_text_:22 in 5865) [ClassicSimilarity], result of:
            0.06731591 = score(doc=5865,freq=2.0), product of:
              0.17398734 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049684696 = queryNorm
              0.38690117 = fieldWeight in 5865, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=5865)
      0.33333334 = coord(1/3)
    
    Abstract
    We present a short survey of the literature on indexing and retrieval of mathematical knowledge, with pointers to 72 papers and tentative taxonomies of both retrieval problems and recurring techniques.
    Date
    22. 2.2017 12:51:57
  10. Wyllie, J.; Eaton, S.: Faceted classification as an intelligence analysis tool (2007) 0.04
    0.041841652 = product of:
      0.12552495 = sum of:
        0.12552495 = weight(_text_:systematic in 716) [ClassicSimilarity], result of:
          0.12552495 = score(doc=716,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.44203353 = fieldWeight in 716, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0546875 = fieldNorm(doc=716)
      0.33333334 = coord(1/3)
    
    Abstract
    Jan and Simon are collaborating on the development of a web-based resource to be called The Energy Centre (TEC). TEC will allow the collaborative collection of clips relating to all aspects of the energy sector. The clips will be stored and organized in such a way that they are not only easily searchable, but can also serve as the basis for content analysis, defined as 'a technique for systematic inference from communications'. Jan began by explaining that it was while working as an intelligence analyst at the Canadian Trend Report in Montreal that he learned about content analysis, a classic taxonomy-based intelligence research methodology.
  11. Harzing, A.-W.: Comparing the Google Scholar h-index with the ISI Journal Impact Factor (2008) 0.04
    0.041841652 = product of:
      0.12552495 = sum of:
        0.12552495 = weight(_text_:systematic in 855) [ClassicSimilarity], result of:
          0.12552495 = score(doc=855,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.44203353 = fieldWeight in 855, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0546875 = fieldNorm(doc=855)
      0.33333334 = coord(1/3)
    
    Abstract
    Publication in academic journals is a key criterion for appointment, tenure and promotion in universities. Many universities weigh publications according to the quality or impact of the journal. Traditionally, journal quality has been assessed through the ISI Journal Impact Factor (JIF). This paper proposes an alternative metric (Hirsch's h-index) and an alternative data source (Google Scholar) to assess journal impact. Using a systematic comparison between the Google Scholar h-index and the ISI JIF for a sample of 838 journals in Economics & Business, we argue that the former provides a more accurate and comprehensive measure of journal impact.
  12. Tramullas, J.: Temas y métodos de investigación en Ciencia de la Información, 2000-2019 : Revisión bibliográfica (2020) 0.04
    0.041841652 = product of:
      0.12552495 = sum of:
        0.12552495 = weight(_text_:systematic in 5929) [ClassicSimilarity], result of:
          0.12552495 = score(doc=5929,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.44203353 = fieldWeight in 5929, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5929)
      0.33333334 = coord(1/3)
    
    Abstract
    A systematic literature review is carried out, detailing the research topics and the methods and techniques used in information science in studies published between 2000 and 2019. The results obtained allow us to affirm that there is no consensus on the core topics of information science, as these evolve and change dynamically in relation to other disciplines and to the dominant social and cultural contexts. With regard to research methods and techniques, it can be stated that they have mostly been adopted from the social sciences, with the addition of numerical methods, especially in the fields of bibliometric and scientometric research.
  13. Electronic Dewey (1993) 0.04
    0.039400067 = product of:
      0.1182002 = sum of:
        0.1182002 = sum of:
          0.064347476 = weight(_text_:indexing in 1088) [ClassicSimilarity], result of:
            0.064347476 = score(doc=1088,freq=2.0), product of:
              0.19018644 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.049684696 = queryNorm
              0.3383389 = fieldWeight in 1088, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.0625 = fieldNorm(doc=1088)
          0.053852726 = weight(_text_:22 in 1088) [ClassicSimilarity], result of:
            0.053852726 = score(doc=1088,freq=2.0), product of:
              0.17398734 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049684696 = queryNorm
              0.30952093 = fieldWeight in 1088, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=1088)
      0.33333334 = coord(1/3)
    
    Abstract
    The CD-ROM version of the 20th DDC ed., featuring advanced online search and windowing techniques, full-text indexing, personal notepad, LC subject headings linked to DDC numbers and a database of all DDC changes
    Footnote
    Review in: Cataloging and classification quarterly 19(1994) no.1, p.134-137 (M. Carpenter). - A Windows version now also exists: 'Electronic Dewey for Windows'; cf. Knowledge organization 22(1995) no.1, p.17
  14. Networked knowledge organization systems (2001) 0.04
    0.03586427 = product of:
      0.10759281 = sum of:
        0.10759281 = weight(_text_:systematic in 6473) [ClassicSimilarity], result of:
          0.10759281 = score(doc=6473,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.3788859 = fieldWeight in 6473, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.046875 = fieldNorm(doc=6473)
      0.33333334 = coord(1/3)
    
    Abstract
    Knowledge Organization Systems can comprise thesauri and other controlled lists of keywords, ontologies, classification systems, clustering approaches, taxonomies, gazetteers, dictionaries, lexical databases, concept maps/spaces, semantic road maps, etc. These schemas enable knowledge structuring and management, knowledge-based data processing, and systematic access to knowledge structures in individual collections and digital libraries. Used as interactive information services on the Internet, they have increased potential to support the description, discovery and retrieval of heterogeneous information resources and to contribute to an overall resource discovery infrastructure.
  15. Tramullas, J.; Garrido-Picazo, P.; Sánchez-Casabón, A.I.: Use of Wikipedia categories on information retrieval research : a brief review (2020) 0.04
    0.03586427 = product of:
      0.10759281 = sum of:
        0.10759281 = weight(_text_:systematic in 5365) [ClassicSimilarity], result of:
          0.10759281 = score(doc=5365,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.3788859 = fieldWeight in 5365, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.046875 = fieldNorm(doc=5365)
      0.33333334 = coord(1/3)
    
    Abstract
    Wikipedia categories, a classification scheme built for organizing and describing Wikipedia articles, are being applied in computer science research. This paper adopts a systematic literature review approach in order to identify different approaches to and uses of Wikipedia categories in information retrieval research. Several types of work are identified, depending on whether they study the category structure itself or use it as a tool for processing and analyzing documentary corpora other than Wikipedia. Information retrieval is identified as one of the major areas of use, in particular the refinement and improvement of search expressions and the construction of textual corpora. However, the set of available works shows that in many cases the research approaches applied and the results obtained can be integrated into a comprehensive and inclusive concept of information retrieval.
  16. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.04
    0.035072207 = product of:
      0.10521662 = sum of:
        0.10521662 = product of:
          0.31564987 = sum of:
            0.31564987 = weight(_text_:3a in 230) [ClassicSimilarity], result of:
              0.31564987 = score(doc=230,freq=2.0), product of:
                0.4212274 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.049684696 = queryNorm
                0.7493574 = fieldWeight in 230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0625 = fieldNorm(doc=230)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Source
    https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
  17. Tay, A.: ¬The next generation discovery citation indexes : a review of the landscape in 2020 (2020) 0.03
    0.03447506 = product of:
      0.103425175 = sum of:
        0.103425175 = sum of:
          0.05630404 = weight(_text_:indexing in 40) [ClassicSimilarity], result of:
            0.05630404 = score(doc=40,freq=2.0), product of:
              0.19018644 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.049684696 = queryNorm
              0.29604656 = fieldWeight in 40, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.0546875 = fieldNorm(doc=40)
          0.047121134 = weight(_text_:22 in 40) [ClassicSimilarity], result of:
            0.047121134 = score(doc=40,freq=2.0), product of:
              0.17398734 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049684696 = queryNorm
              0.2708308 = fieldWeight in 40, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=40)
      0.33333334 = coord(1/3)
    
    Date
    17.11.2020 12:22:59
    Theme
    Citation indexing
  18. Xiaoyue M.; Cahier, J.-P.: Iconic categorization with knowledge-based "icon systems" can improve collaborative KM (2011) 0.03
    
    Abstract
    Icon systems could offer an efficient solution for the collective iconic categorization of knowledge by providing a graphical interpretation. Their pictorial character helps visualize the structure of a text, making it understandable across vocabulary barriers. In this paper we propose a Knowledge Engineering (KE) based iconic representation approach. We assume that such systematic icons improve collective knowledge management. Meanwhile, text (structured under our knowledge management model, Hypertopic) helps reduce the diversity of graphical interpretations among different users. This position paper also prepares to test our hypothesis through an "iconic social tagging" experiment to be carried out in 2011 with UTT students. We describe the "socio-semantic web" information portal involved in this project, as well as some of the icons already designed for this experiment in the field of sustainability. We have reviewed existing theoretical work on icons from various origins, which can be used to lay the foundation of robust "icon systems".
  19. Mixter, J.; Childress, E.R.: FAST (Faceted Application of Subject Terminology) users : summary and case studies (2013) 0.03
    
    Abstract
    Over the past ten years, various organizations, both public and private, have expressed interest in implementing FAST in their cataloging workflows. As interest in FAST has grown, so too has interest in knowing how FAST is being used and by whom. Since 2002 eighteen institutions (see table 1) in six countries have expressed interest in learning more about FAST and how it could be implemented in cataloging workflows. Currently OCLC is aware of nine agencies that have actually adopted or support FAST for resource description. This study, the first systematic census of FAST users undertaken by OCLC, was conducted, in part, to address these inquiries. Its purpose was to examine: how FAST is being utilized; why FAST was chosen as the cataloging vocabulary; what benefits FAST provides; and what can be done to enhance the value of FAST. Interview requests were sent to all parties that had previously contacted OCLC about FAST. Of the eighteen organizations contacted, sixteen agreed to provide information about their decision whether to use FAST (nine adopters, seven non-adopters).
  20. Cecchini, C.; Zanchetta, C.; Borin, P.; Xausa, G.: Computational design e sistemi di classificazione per la verifica predittiva delle prestazioni di sistema degli organismi edilizi : Computational design and classification systems to support predictive checking of performance of building systems (2017) 0.03
    
    Abstract
    The aim of controlling the economic, social and environmental aspects connected to the construction of a building demands a systematic approach, for which it is necessary to build test models aimed at a coordinated analysis of different, independent performance issues. BIM technology, based on interoperable information models, offers a significant operative basis for meeting this need. In most cases, however, information models concentrate on a collection of product-based digital models placed in a virtual space rather than on the simulation of their relational behaviour. That relation is, instead, the most important aspect of modelling, because it marks and characterizes the interactions that define the building as a system. This study presents the use of standard classification systems as tools for both the activation and validation of an integrated performance-based building process. By mapping the categories and types of the information model to the codes of a technological and performance-based classification system, it is possible to link and coordinate functional units and their elements with the indications required by the AEC standards. In this way, progressing with an incremental logic, it is possible to manage the requirements of the whole building and to monitor the fulfilment of design objectives and specific normative guidelines.

Languages

  • e 179
  • d 90
  • a 3
  • el 2
  • i 1
  • nl 1
  • sp 1

Types

  • a 128
  • i 10
  • m 6
  • p 4
  • r 4
  • s 4
  • x 3
  • b 2
  • n 1