Search (332 results, page 1 of 17)

  • Filter: type_ss:"el"
  1. Dousa, T.: Everything Old is New Again : Perspectivism and Polyhierarchy in Julius O. Kaiser's Theory of Systematic Indexing (2007) 0.18
    0.17810974 = product of:
      0.2671646 = sum of:
        0.20959367 = weight(_text_:systematic in 4835) [ClassicSimilarity], result of:
          0.20959367 = score(doc=4835,freq=8.0), product of:
            0.33191046 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.05807226 = queryNorm
            0.6314765 = fieldWeight in 4835, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4835)
        0.057570927 = product of:
          0.11514185 = sum of:
            0.11514185 = weight(_text_:indexing in 4835) [ClassicSimilarity], result of:
              0.11514185 = score(doc=4835,freq=12.0), product of:
                0.22229293 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.05807226 = queryNorm
                0.51797354 = fieldWeight in 4835, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4835)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
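    The indented breakdown above is Lucene's "explain" output for the classic TF-IDF similarity: each term weight is queryWeight (idf x queryNorm) times fieldWeight (sqrt(termFreq) x idf x fieldNorm), and clause sums are scaled by the coord() factors. The following minimal sketch reproduces the arithmetic for this first hit; the helper function and variable names are ours, not part of the search service.

```python
# Minimal sketch (not part of the search service) reproducing the TF-IDF
# arithmetic of the explain tree for hit no. 1. In ClassicSimilarity a term's
# weight is queryWeight * fieldWeight; clause sums are scaled by coord().

def term_weight(freq, idf, query_norm, field_norm):
    query_weight = idf * query_norm                   # e.g. 5.715473 * 0.05807226 = 0.33191046
    field_weight = (freq ** 0.5) * idf * field_norm   # tf(freq) = sqrt(freq)
    return query_weight * field_weight

query_norm, field_norm = 0.05807226, 0.0390625

w_systematic = term_weight(8.0, 5.715473, query_norm, field_norm)         # ~0.20959367
w_indexing = term_weight(12.0, 3.8278677, query_norm, field_norm) * 0.5   # * coord(1/2)

score = (w_systematic + w_indexing) * (2.0 / 3.0)                         # * coord(2/3)
print(round(score, 8))                                                    # ~0.17810974
```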
    
    Abstract
    In the early years of the 20th century, Julius Otto Kaiser (1868-1927), a special librarian and indexer of technical literature, developed a method of knowledge organization (KO) known as systematic indexing. Certain elements of the method (its stipulation that all indexing terms be divided into fundamental categories "concretes", "countries", and "processes", which are then to be synthesized into indexing "statements" formulated according to strict rules of citation order) have long been recognized as precursors to key principles of the theory of faceted classification. However, other, less well-known elements of the method may prove no less interesting to practitioners of KO. In particular, two aspects of systematic indexing seem to prefigure current trends in KO: (1) a perspectivist outlook that rejects universal classifications in favor of information organization systems customized to reflect local needs and (2) the incorporation of index terms extracted from source documents into a polyhierarchical taxonomical structure. Kaiser's perspectivism anticipates postmodern theories of KO, while his principled use of polyhierarchy to organize terms derived from the language of source documents provides a potentially fruitful model that can inform current discussions about harvesting natural-language terms, such as tags, and incorporating them into a flexibly structured controlled vocabulary.
    Object
    Kaiser systematic indexing
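    As a rough, hypothetical illustration of the statement synthesis described in the abstract above: terms assigned to Kaiser's fundamental categories are combined into indexing statements in a fixed citation order. The sample terms and the concrete-country-process ordering below are assumptions for the example, not taken from Kaiser's own worked examples.

```python
# Hypothetical illustration of Kaiser-style "statements": terms from the
# fundamental categories named in the abstract, cited in a fixed order.
# The sample terms and the concrete-country-process order are assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Statement:
    concrete: str                  # the thing or material the document is about
    process: str                   # the action or condition affecting it
    country: Optional[str] = None  # optional locality term

    def citation_order(self) -> str:
        parts = [self.concrete]
        if self.country:
            parts.append(self.country)
        parts.append(self.process)
        return " - ".join(parts)

print(Statement("Wool", "Export", country="Australia").citation_order())
# Wool - Australia - Export
```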
  2. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.13
    0.13382004 = product of:
      0.20073006 = sum of:
        0.15372358 = product of:
          0.46117073 = sum of:
            0.46117073 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.46117073 = score(doc=1826,freq=2.0), product of:
                0.4923373 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05807226 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.33333334 = coord(1/3)
        0.047006465 = product of:
          0.09401293 = sum of:
            0.09401293 = weight(_text_:indexing in 1826) [ClassicSimilarity], result of:
              0.09401293 = score(doc=1826,freq=2.0), product of:
                0.22229293 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.05807226 = queryNorm
                0.42292362 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Source
    http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
  3. Suominen, O.; Koskenniemi, I.: Annif Analyzer Shootout : comparing text lemmatization methods for automated subject indexing (2022) 0.10
    0.09700376 = product of:
      0.14550564 = sum of:
        0.104796834 = weight(_text_:systematic in 658) [ClassicSimilarity], result of:
          0.104796834 = score(doc=658,freq=2.0), product of:
            0.33191046 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.05807226 = queryNorm
            0.31573826 = fieldWeight in 658, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=658)
        0.040708795 = product of:
          0.08141759 = sum of:
            0.08141759 = weight(_text_:indexing in 658) [ClassicSimilarity], result of:
              0.08141759 = score(doc=658,freq=6.0), product of:
                0.22229293 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.05807226 = queryNorm
                0.3662626 = fieldWeight in 658, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=658)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Automated text classification is an important function for many AI systems relevant to libraries, including automated subject indexing and classification. When implemented using the traditional natural language processing (NLP) paradigm, one key part of the process is the normalization of words using stemming or lemmatization, which reduces the amount of linguistic variation and often improves the quality of classification. In this paper, we compare the output of seven different text lemmatization algorithms as well as two baseline methods. We measure how the choice of method affects the quality of text classification using example corpora in three languages. The experiments have been performed using the open source Annif toolkit for automated subject indexing and classification, but should generalize also to other NLP toolkits and similar text classification tasks. The results show that lemmatization methods in most cases outperform baseline methods in text classification particularly for Finnish and Swedish text, but not English, where baseline methods are most effective. The differences between lemmatization methods are quite small. The systematic comparison will help optimize text classification pipelines and inform the further development of the Annif toolkit to incorporate a wider choice of normalization methods.
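    As context for the comparison described above, here is a generic sketch of the normalization step in such a pipeline, contrasting stemming with lemmatization. It uses NLTK as a stand-in and covers English only; it is not Annif's implementation (Annif's analyzers and the Finnish and Swedish corpora are handled inside the toolkit).

```python
# Generic sketch of the normalization step, using NLTK as a stand-in; this is
# not Annif's implementation and covers English only (the paper also compares
# Finnish and Swedish corpora).
import nltk
from nltk.stem import PorterStemmer, WordNetLemmatizer

nltk.download("wordnet", quiet=True)   # data needed by the lemmatizer

stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()

for word in ["libraries", "indexing", "classification", "studies"]:
    print(f"{word:15s} stem={stemmer.stem(word):12s} lemma={lemmatizer.lemmatize(word)}")
```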
  4. Seeliger, F.: A tool for systematic visualization of controlled descriptors and their relation to others as a rich context for a discovery system (2015) 0.09
    0.09157778 = product of:
      0.13736667 = sum of:
        0.118564084 = weight(_text_:systematic in 2547) [ClassicSimilarity], result of:
          0.118564084 = score(doc=2547,freq=4.0), product of:
            0.33191046 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.05807226 = queryNorm
            0.35721707 = fieldWeight in 2547, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.03125 = fieldNorm(doc=2547)
        0.018802587 = product of:
          0.037605174 = sum of:
            0.037605174 = weight(_text_:indexing in 2547) [ClassicSimilarity], result of:
              0.037605174 = score(doc=2547,freq=2.0), product of:
                0.22229293 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.05807226 = queryNorm
                0.16916946 = fieldWeight in 2547, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2547)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The discovery service (a search engine and service called WILBERT) used at our library at the Technical University of Applied Sciences Wildau (TUAS Wildau) comprises more than 8 million items. If we were to record all licensed publications in this tool down to the article level, including their bibliographic records and full texts, we would have a holding estimated at a hundred million documents. Features such as ranking, autocompletion, multi-faceted classification, and refinement options reduce the number of hits. However, this is not enough to give intuitive support for a systematic overview of topics related to documents in the library. John Naisbitt once said: "We are drowning in information, but starving for knowledge." This quote is still very true today. Two years ago, we started to develop micro thesauri for MINT (STEM) topics in order to develop advanced indexing of the library stock. We use iQvoc as a vocabulary management system to create the thesaurus. It provides an easy-to-use browser interface that builds a SKOS thesaurus in the background. The purpose of this is to integrate the thesauri into WILBERT in order to offer a better subject-related search. This approach especially supports first-year students by giving them the possibility to browse through a hierarchical alignment of a subject, for instance logistics or computer science, and thereby discover how the terms are related. It also gives the students an insight into established abbreviations and alternative labels. Students at TUAS Wildau were involved in the development process of the software regarding the interface and functionality of iQvoc. The first steps have been taken and involve the inclusion of 3000 terms in our discovery tool WILBERT.
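    A minimal sketch of what one concept in such a SKOS micro-thesaurus could look like, built with rdflib; the namespace, labels and hierarchy below are invented for illustration and are not taken from the TUAS Wildau vocabularies or from iQvoc.

```python
# Minimal sketch of one concept in such a SKOS micro-thesaurus, built with
# rdflib; the namespace, labels and hierarchy are invented for illustration.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

EX = Namespace("http://example.org/thesaurus/")
g = Graph()
g.bind("skos", SKOS)

logistics = EX["logistics"]
scm = EX["supply-chain-management"]

g.add((logistics, RDF.type, SKOS.Concept))
g.add((logistics, SKOS.prefLabel, Literal("Logistics", lang="en")))
g.add((logistics, SKOS.altLabel, Literal("Logistik", lang="de")))

g.add((scm, RDF.type, SKOS.Concept))
g.add((scm, SKOS.prefLabel, Literal("Supply chain management", lang="en")))
g.add((scm, SKOS.altLabel, Literal("SCM", lang="en")))   # established abbreviation
g.add((scm, SKOS.broader, logistics))
g.add((logistics, SKOS.narrower, scm))

print(g.serialize(format="turtle"))
```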
  5. Koch, T.; Ardö, A.; Brümmer, A.: The building and maintenance of robot based internet search services : A review of current indexing and data collection methods. Prepared to meet the requirements of Work Package 3 of EU Telematics for Research, project DESIRE. Version D3.11v0.3 (Draft version 3) (1996) 0.08
    0.08392089 = product of:
      0.12588133 = sum of:
        0.083837464 = weight(_text_:systematic in 1669) [ClassicSimilarity], result of:
          0.083837464 = score(doc=1669,freq=2.0), product of:
            0.33191046 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.05807226 = queryNorm
            0.2525906 = fieldWeight in 1669, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.03125 = fieldNorm(doc=1669)
        0.042043865 = product of:
          0.08408773 = sum of:
            0.08408773 = weight(_text_:indexing in 1669) [ClassicSimilarity], result of:
              0.08408773 = score(doc=1669,freq=10.0), product of:
                0.22229293 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.05807226 = queryNorm
                0.3782744 = fieldWeight in 1669, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1669)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    After a short outline of the problems, possibilities and difficulties of systematic information retrieval on the Internet and a description of development efforts in this area, a specification of the terminology for this report is required. Although retrieval is generally seen as an iterative process of browsing and information retrieval, and several important services on the net have taken this fact into consideration, the emphasis of this report lies on the general retrieval tools for the whole of the Internet. In order to be able to evaluate the differences, possibilities and restrictions of the different services, it is necessary to begin by organizing the existing varieties in a typological/taxonomical survey. The possibilities and weaknesses will be briefly compared and described for the most important services in the categories of robot-based WWW catalogues of different types, list- or form-based catalogues, and simultaneous or collected search services. However, for various reasons it will not be possible to rank them in order of "best" services. Still more important are the weaknesses and problems common to all attempts at indexing the Internet. The problems of input quality, technical performance and the general problem of indexing virtual hypertext are shown to be at least as difficult as the different aspects of harvesting, indexing and information retrieval. Some of the attempts made towards the further development of retrieval services will be mentioned in relation to descriptions of document contents and standardization efforts. Internet harvesting and indexing technology and retrieval software are thoroughly reviewed. Details about all services and software are listed in analytical forms in Annexes 1-3.
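    A toy sketch of the harvest-then-index cycle that the report reviews: fetch documents from seed URLs, tokenize them, and build a naive inverted index. This is illustrative only; production robots of the kind surveyed add robots.txt handling, rate limiting, HTML parsing and deduplication, and the seed URL below is a placeholder.

```python
# Toy sketch of the harvest-then-index cycle: fetch pages, tokenize, and build
# a naive inverted index. Real robots add politeness (robots.txt, rate limits),
# HTML parsing and deduplication. The seed URL is a placeholder.
import re
from collections import defaultdict
import requests

def harvest(urls):
    for url in urls:
        response = requests.get(url, timeout=10)
        if response.ok:
            yield url, response.text

def build_index(pages):
    inverted = defaultdict(set)                    # term -> set of URLs
    for url, text in pages:
        for term in set(re.findall(r"[a-z]{3,}", text.lower())):
            inverted[term].add(url)
    return inverted

index = build_index(harvest(["https://example.org/"]))
print(sorted(index.get("example", set())))         # URLs containing "example"
```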
  6. Koch, T.: Searching the Web : systematic overview over indexes (1995) 0.08
    0.083837464 = product of:
      0.25151238 = sum of:
        0.25151238 = weight(_text_:systematic in 3169) [ClassicSimilarity], result of:
          0.25151238 = score(doc=3169,freq=2.0), product of:
            0.33191046 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.05807226 = queryNorm
            0.7577718 = fieldWeight in 3169, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.09375 = fieldNorm(doc=3169)
      0.33333334 = coord(1/3)
    
  7. Van der Veer Martens, B.: Do citation systems represent theories of truth? (2001) 0.07
    0.068427704 = product of:
      0.2052831 = sum of:
        0.2052831 = sum of:
          0.09401293 = weight(_text_:indexing in 3925) [ClassicSimilarity], result of:
            0.09401293 = score(doc=3925,freq=2.0), product of:
              0.22229293 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.05807226 = queryNorm
              0.42292362 = fieldWeight in 3925, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.078125 = fieldNorm(doc=3925)
          0.11127018 = weight(_text_:22 in 3925) [ClassicSimilarity], result of:
            0.11127018 = score(doc=3925,freq=4.0), product of:
              0.20335917 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05807226 = queryNorm
              0.54716086 = fieldWeight in 3925, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=3925)
      0.33333334 = coord(1/3)
    
    Date
    22. 7.2006 15:22:28
    Theme
    Citation indexing
  8. Bensman, S.J.: Eugene Garfield, Francis Narin, and PageRank : the theoretical bases of the Google search engine (2013) 0.06
    0.064404026 = product of:
      0.19321206 = sum of:
        0.19321206 = sum of:
          0.13026814 = weight(_text_:indexing in 1149) [ClassicSimilarity], result of:
            0.13026814 = score(doc=1149,freq=6.0), product of:
              0.22229293 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.05807226 = queryNorm
              0.5860202 = fieldWeight in 1149, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.0625 = fieldNorm(doc=1149)
          0.06294392 = weight(_text_:22 in 1149) [ClassicSimilarity], result of:
            0.06294392 = score(doc=1149,freq=2.0), product of:
              0.20335917 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05807226 = queryNorm
              0.30952093 = fieldWeight in 1149, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=1149)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper presents a test of the validity of using Google Scholar to evaluate the publications of researchers by comparing the premises on which its search engine, PageRank, is based, to those of Garfield's theory of citation indexing. It finds that the premises are identical and that PageRank and Garfield's theory of citation indexing validate each other.
    Date
    17.12.2013 11:02:22
    Theme
    Citation indexing
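    The abstract notes that PageRank and Garfield's citation indexing rest on the same premise: citations (or links) act as recursive votes of importance. A minimal power-iteration sketch of PageRank on a toy citation graph makes that premise concrete; the graph and the damping factor d = 0.85 are illustrative choices, not taken from the paper.

```python
# Minimal power-iteration sketch of PageRank on a toy citation graph; the
# graph and damping factor d = 0.85 are illustrative, not from the paper.
def pagerank(links, d=0.85, iterations=50):
    nodes = list(links)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new = {n: (1.0 - d) / len(nodes) for n in nodes}
        for n, outgoing in links.items():
            targets = outgoing or nodes            # dangling nodes spread evenly
            for m in targets:
                new[m] += d * rank[n] / len(targets)
        rank = new
    return rank

citations = {"A": ["B"], "B": ["C"], "C": ["A", "B"], "D": ["C"]}
for paper, score in sorted(pagerank(citations).items(), key=lambda kv: -kv[1]):
    print(paper, round(score, 3))
```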
  9. Thaller, M.: From the digitized to the digital library (2001) 0.06
    0.059007697 = product of:
      0.08851154 = sum of:
        0.025633443 = product of:
          0.076900326 = sum of:
            0.076900326 = weight(_text_:objects in 1159) [ClassicSimilarity], result of:
              0.076900326 = score(doc=1159,freq=4.0), product of:
                0.3086582 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.05807226 = queryNorm
                0.24914396 = fieldWeight in 1159, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1159)
          0.33333334 = coord(1/3)
        0.062878095 = weight(_text_:systematic in 1159) [ClassicSimilarity], result of:
          0.062878095 = score(doc=1159,freq=2.0), product of:
            0.33191046 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.05807226 = queryNorm
            0.18944295 = fieldWeight in 1159, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1159)
      0.6666667 = coord(2/3)
    
    Abstract
    The author holds a chair in Humanities Computer Science at the University of Cologne. For a number of years, he has been responsible for digitization projects, either as project director or as the person responsible for the technology being employed on the projects. The "Duderstadt project" (http://www.archive.geschichte.mpg.de/duderstadt/dud-e.htm) is one such project. It is one of the early large-scale manuscript servers, finished at the end of 1998, with approximately 80,000 high resolution documents representing the holdings of a city archive before the year 1600. The digital library of the Max-Planck-Institut für Europäische Rechtsgeschichte in Frankfurt (http://www.mpier.uni-frankfurt.de/dlib) is another project on which the author has worked, with currently approximately 900,000 pages. The author is currently project director of the project "Codices Electronici Ecclesiae Colonensis" (CEEC), which has just started and will ultimately consist of approximately 130,000 very high resolution color pages representing the complete holdings of the manuscript library of a medieval cathedral. It is being designed in close cooperation with the user community of such material. The project site (http://www.ceec.uni-koeln.de), while not yet officially opened, currently holds about 5,000 pages and is growing by 100 - 150 pages per day. Parallel to the CEEC model project, a conceptual project, the "Codex Electronicus Colonensis" (CEC), is at work on the definition of an abstract model for the representation of medieval codices in digital form. The following paper has grown out of the design considerations for the mentioned CEC project. The paper reflects a growing concern of the author's that some of the recent advances in digital (research) libraries are being diluted because it is not clear whether the advances really reach the audience for whom the projects would be most useful. Many, if not most, digitization projects have aimed at existing collections as individual servers. A digital library, however, should be more than a digitized one. It should be built according to principles that are not necessarily the same as those employed for paper collections, and it should be evaluated according to different measures which are not yet totally clear. The paper takes the form of six theses on various aspects of the ongoing transition to digital libraries. These theses have been presented at a forum on the German "retrodigitization" program. The program aims at the systematic conversion of library resources into digital form, concentrates for a number of reasons on material primarily of interest to the Humanities, and is funded by the German research council. As such this program is directly aimed at improving the overall infrastructure of academic research; other users of libraries are of interest, but are not central to the program.
    Content
    Theses:
    1. Who should be addressed by digital libraries? How shall we measure whether we have reached the desired audience? Thesis: The primary audience for a digital library is neither the leading specialist in the respective field, nor the freshman, but the advanced student or young researcher and the "almost specialist". The primary topic of digitization projects should not be the absolute top range of the "treasures" of a collection, but those materials that we would always have wanted to promote had they been just marginally more important. Whether we effectively serve them to the appropriate community of serious users can only be measured according to criteria that have yet to be developed.
    2. The appropriate size of digital libraries and their access tools. Thesis: Digital collections need a critical, minimal size to make their access worthwhile. In the end, users want to access information, not metadata or gimmicks.
    3. The quality of digital objects. Thesis: If digital library resources are to be integrated into the daily work of the research community, they must appear on the screen of the researcher in a quality that is useful in actual work.
    4. The granularity / modularity of digital repositories. Thesis: While digital libraries are self-contained bodies of information, they are not the basic unit that most users want to access. Users are, as a rule, more interested in the individual objects in the library and need a straightforward way to access them.
    5. Digital collections as integrated reference systems. Thesis: Traditional libraries support their collections with reference material. Digital collections need to find appropriate models to replicate this functionality.
    6. Library and teaching. Thesis: The use of multimedia in teaching is as much of a current buzzword as the creation of digital collections. It is obvious that they should be connected. A clear-cut separation of the two approaches is nevertheless necessary.
  10. Guidi, F.; Sacerdoti Coen, C.: A survey on retrieval of mathematical knowledge (2015) 0.06
    0.05756428 = product of:
      0.17269284 = sum of:
        0.17269284 = sum of:
          0.09401293 = weight(_text_:indexing in 5865) [ClassicSimilarity], result of:
            0.09401293 = score(doc=5865,freq=2.0), product of:
              0.22229293 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.05807226 = queryNorm
              0.42292362 = fieldWeight in 5865, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.078125 = fieldNorm(doc=5865)
          0.078679904 = weight(_text_:22 in 5865) [ClassicSimilarity], result of:
            0.078679904 = score(doc=5865,freq=2.0), product of:
              0.20335917 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05807226 = queryNorm
              0.38690117 = fieldWeight in 5865, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=5865)
      0.33333334 = coord(1/3)
    
    Abstract
    We present a short survey of the literature on indexing and retrieval of mathematical knowledge, with pointers to 72 papers and tentative taxonomies of both retrieval problems and recurring techniques.
    Date
    22. 2.2017 12:51:57
  11. Understanding metadata (2004) 0.05
    0.053204563 = product of:
      0.07980684 = sum of:
        0.04833488 = product of:
          0.14500464 = sum of:
            0.14500464 = weight(_text_:objects in 2686) [ClassicSimilarity], result of:
              0.14500464 = score(doc=2686,freq=2.0), product of:
                0.3086582 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.05807226 = queryNorm
                0.46979034 = fieldWeight in 2686, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2686)
          0.33333334 = coord(1/3)
        0.03147196 = product of:
          0.06294392 = sum of:
            0.06294392 = weight(_text_:22 in 2686) [ClassicSimilarity], result of:
              0.06294392 = score(doc=2686,freq=2.0), product of:
                0.20335917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05807226 = queryNorm
                0.30952093 = fieldWeight in 2686, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2686)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Metadata (structured information about an object or collection of objects) is increasingly important to libraries, archives, and museums. Although librarians are familiar with a number of issues that apply to creating and using metadata (e.g., authority control, controlled vocabularies), the world of metadata is nonetheless different from library cataloging, with its own set of challenges. Therefore, whether you are new to these concepts or quite experienced with classic cataloging, this short (20-page) introductory paper on metadata can be helpful.
    Date
    10. 9.2004 10:22:40
  12. Bastos Vieira, S.; DeBrito, M.; Mustafa El Hadi, W.; Zumer, M.: Developing imaged KOS with the FRSAD Model : a conceptual methodology (2016) 0.05
    0.049276274 = product of:
      0.07391441 = sum of:
        0.02416744 = product of:
          0.07250232 = sum of:
            0.07250232 = weight(_text_:objects in 3109) [ClassicSimilarity], result of:
              0.07250232 = score(doc=3109,freq=2.0), product of:
                0.3086582 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.05807226 = queryNorm
                0.23489517 = fieldWeight in 3109, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3109)
          0.33333334 = coord(1/3)
        0.049746968 = product of:
          0.099493936 = sum of:
            0.099493936 = weight(_text_:indexing in 3109) [ClassicSimilarity], result of:
              0.099493936 = score(doc=3109,freq=14.0), product of:
                0.22229293 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.05807226 = queryNorm
                0.4475803 = fieldWeight in 3109, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3109)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This proposal presents the methodology of indexing with images suggested by De Brito and Caribé (2015). The imagetic model is used as a mechanism compatible with FRSAD for the global sharing and use of subject data, both within the library sector and beyond. The conceptual model of imagetic indexing shows how images are related to topics, with 'key-images' interpreted as nomens to implement the FRSAD model. Indexing with images consists of using images instead of keywords or descriptors to represent and organize information. Implementing imaged navigation in OPACs offers multiple advantages that derive from rethinking the OPAC anew, since the aim is to share concepts within the subject authority data. Images, as carriers of linguistic objects, permeate social and cultural concepts. In practice this includes translated metadata, symmetrical multilingual thesauri, or any traditional indexing tools. iOPAC embodies efforts focused on the conceptual levels expected from librarians. Imaged interfaces are more intuitive, since users do not need specific training for information retrieval; they offer easier comprehension of indexing codes, greater conceptual portability of descriptors (as images), and better interoperability between discourse codes and indexing competences, which positively affects social and cultural interoperability. The imagetic methodology opens R&D directions for more suitable interfaces that take into consideration users with specific needs such as deafness and illiteracy. It raises questions about the paradigm of the primacy of orality in information systems and paves the way to legitimizing multiple perspectives in document indexing by suggesting a more universal communication system based on images. Interdisciplinary competencies in neuroscience, linguistics and information science would be desirable for further investigations into the nature of cognitive processes in information organization and classification, and for developing assistive KOS for individuals with communication problems such as autism and deafness.
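    A rough sketch of the thema/nomen pairing that the imagetic approach builds on: in FRSAD a thema (subject) may be known by several nomens, and here a key-image is admitted as a nomen alongside textual labels. All identifiers, labels and URIs below are invented for illustration.

```python
# Rough sketch of the thema/nomen pairing: a thema (subject) may be known by
# several nomens, and a key-image is admitted as a nomen alongside labels.
# All identifiers, labels and URIs are invented for illustration.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Nomen:
    value: str            # a label string or an image URI
    kind: str             # "label" or "key-image"
    language: str = "und"

@dataclass
class Thema:
    thema_id: str
    nomens: List[Nomen] = field(default_factory=list)

dog = Thema("thema:0001", nomens=[
    Nomen("Dog", "label", "en"),
    Nomen("Cachorro", "label", "pt"),
    Nomen("https://example.org/images/dog.svg", "key-image"),
])

# An imaged OPAC shows the key-image; retrieval resolves it to the same thema
# as any of the textual nomens.
print(dog.thema_id, [n.value for n in dog.nomens])
```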
  13. Wyllie, J.; Eaton, S.: Faceted classification as an intelligence analysis tool (2007) 0.05
    0.048905186 = product of:
      0.14671555 = sum of:
        0.14671555 = weight(_text_:systematic in 716) [ClassicSimilarity], result of:
          0.14671555 = score(doc=716,freq=2.0), product of:
            0.33191046 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.05807226 = queryNorm
            0.44203353 = fieldWeight in 716, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0546875 = fieldNorm(doc=716)
      0.33333334 = coord(1/3)
    
    Abstract
    Jan and Simon are collaborating on the development of a collaborative web-based resource to be called The Energy Centre (TEC). TEC will allow the collaborative collection of clips relating to all aspects of the energy sector. The clips will be stored and organized in such a way that they are not only easily searchable, but can also serve as the basis for content analysis, defined as 'a technique for systematic inference from communications'. Jan began by explaining that it was while working as an intelligence analyst at the Canadian Trend Report in Montreal that he learned about content analysis, a classic taxonomy-based intelligence research methodology.
  14. Harzing, A.-W.: Comparing the Google Scholar h-index with the ISI Journal Impact Factor (2008) 0.05
    0.048905186 = product of:
      0.14671555 = sum of:
        0.14671555 = weight(_text_:systematic in 855) [ClassicSimilarity], result of:
          0.14671555 = score(doc=855,freq=2.0), product of:
            0.33191046 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.05807226 = queryNorm
            0.44203353 = fieldWeight in 855, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0546875 = fieldNorm(doc=855)
      0.33333334 = coord(1/3)
    
    Abstract
    Publication in academic journals is a key criterion for appointment, tenure and promotion in universities. Many universities weigh publications according to the quality or impact of the journal. Traditionally, journal quality has been assessed through the ISI Journal Impact Factor (JIF). This paper proposes an alternative metric - Hirsch's h-index - and data source - Google Scholar - to assess journal impact. Using a systematic comparison between the Google Scholar h-index and the ISI JIF for a sample of 838 journals in Economics & Business, we argue that the former provides a more accurate and comprehensive measure of journal impact.
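    For reference, Hirsch's h-index used in this comparison is the largest h such that h of a journal's articles have received at least h citations each. A minimal sketch follows; the citation counts are made up for illustration.

```python
# Minimal sketch of Hirsch's h-index: the largest h such that h articles have
# at least h citations each. The citation counts are made up for illustration.
def h_index(citations):
    h = 0
    for rank, cited in enumerate(sorted(citations, reverse=True), start=1):
        if cited >= rank:
            h = rank
        else:
            break
    return h

print(h_index([42, 17, 9, 5, 3, 1]))   # 4: four articles with >= 4 citations each
```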
  15. Tramullas, J.: Temas y métodos de investigación en Ciencia de la Información, 2000-2019 : Revisión bibliográfica (2020) 0.05
    0.048905186 = product of:
      0.14671555 = sum of:
        0.14671555 = weight(_text_:systematic in 5929) [ClassicSimilarity], result of:
          0.14671555 = score(doc=5929,freq=2.0), product of:
            0.33191046 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.05807226 = queryNorm
            0.44203353 = fieldWeight in 5929, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5929)
      0.33333334 = coord(1/3)
    
    Abstract
    A systematic literature review is carried out, detailing the research topics and the methods and techniques used in information science in studies published between 2000 and 2019. The results obtained allow us to affirm that there is no consensus on the core topics of information science, as these evolve and change dynamically in relation to other disciplines, and with the dominant social and cultural contexts. With regard to the research methods and techniques, it can be stated that they have mostly been adopted from social sciences, with the addition of numerical methods, especially in the fields of bibliometric and scientometric research.
  16. Priss, U.: Faceted knowledge representation (1999) 0.05
    0.04655399 = product of:
      0.069830984 = sum of:
        0.042293023 = product of:
          0.12687907 = sum of:
            0.12687907 = weight(_text_:objects in 2654) [ClassicSimilarity], result of:
              0.12687907 = score(doc=2654,freq=2.0), product of:
                0.3086582 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.05807226 = queryNorm
                0.41106653 = fieldWeight in 2654, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2654)
          0.33333334 = coord(1/3)
        0.027537964 = product of:
          0.05507593 = sum of:
            0.05507593 = weight(_text_:22 in 2654) [ClassicSimilarity], result of:
              0.05507593 = score(doc=2654,freq=2.0), product of:
                0.20335917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05807226 = queryNorm
                0.2708308 = fieldWeight in 2654, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2654)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Faceted Knowledge Representation provides a formalism for implementing knowledge systems. The basic notions of faceted knowledge representation are "unit", "relation", "facet" and "interpretation". Units are atomic elements and can be abstract elements or refer to external objects in an application. Relations are sequences or matrices of 0s and 1s (binary matrices). Facets are relational structures that combine units and relations. Each facet represents an aspect or viewpoint of a knowledge system. Interpretations are mappings that can be used to translate between different representations. This paper introduces the basic notions of faceted knowledge representation. The formalism is applied here to an abstract modeling of a faceted thesaurus as used in information retrieval.
    Date
    22. 1.2016 17:30:31
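    A small sketch that encodes the abstract's basic notions directly: units as atomic elements, a relation as a binary matrix over the units, and a facet as the combination of units with one relation, i.e. one viewpoint. This is a loose reading of the formalism for illustration, not code from the paper.

```python
# Loose reading of the abstract's notions, for illustration only (not code
# from the paper): units are atomic elements, a relation is a binary matrix
# over the units, and a facet bundles the units with one relation (viewpoint).
units = ["thesaurus", "descriptor", "document"]

# contains[i][j] = 1 means units[i] contains units[j]
contains = [
    [0, 1, 0],   # a thesaurus contains descriptors
    [0, 0, 0],
    [0, 0, 0],
]
# indexes[i][j] = 1 means units[i] is assigned to units[j] as an index term
indexes = [
    [0, 0, 0],
    [0, 0, 1],   # a descriptor indexes a document
    [0, 0, 0],
]

structural_facet = {"units": units, "relation": contains}
indexing_facet = {"units": units, "relation": indexes}

def related(facet, source):
    """Units reachable from `source` under this facet's relation."""
    i = facet["units"].index(source)
    return [u for u, flag in zip(facet["units"], facet["relation"][i]) if flag]

print(related(structural_facet, "thesaurus"))   # ['descriptor']
print(related(indexing_facet, "descriptor"))    # ['document']
```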
  17. Electronic Dewey (1993) 0.05
    0.046051424 = product of:
      0.13815427 = sum of:
        0.13815427 = sum of:
          0.07521035 = weight(_text_:indexing in 1088) [ClassicSimilarity], result of:
            0.07521035 = score(doc=1088,freq=2.0), product of:
              0.22229293 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.05807226 = queryNorm
              0.3383389 = fieldWeight in 1088, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.0625 = fieldNorm(doc=1088)
          0.06294392 = weight(_text_:22 in 1088) [ClassicSimilarity], result of:
            0.06294392 = score(doc=1088,freq=2.0), product of:
              0.20335917 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05807226 = queryNorm
              0.30952093 = fieldWeight in 1088, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=1088)
      0.33333334 = coord(1/3)
    
    Abstract
    The CD-ROM version of the 20th DDC ed., featuring advanced online search and windowing techniques, full-text indexing, personal notepad, LC subject headings linked to DDC numbers and a database of all DDC changes
    Footnote
    Review in: Cataloging and classification quarterly 19(1994) no.1, p.134-137 (M. Carpenter). - A Windows version, 'Electronic Dewey for Windows', has since become available; cf. Knowledge organization 22(1995) no.1, p.17
  18. Networked knowledge organization systems (2001) 0.04
    0.041918732 = product of:
      0.12575619 = sum of:
        0.12575619 = weight(_text_:systematic in 6473) [ClassicSimilarity], result of:
          0.12575619 = score(doc=6473,freq=2.0), product of:
            0.33191046 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.05807226 = queryNorm
            0.3788859 = fieldWeight in 6473, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.046875 = fieldNorm(doc=6473)
      0.33333334 = coord(1/3)
    
    Abstract
    Knowledge Organization Systems can comprise thesauri and other controlled lists of keywords, ontologies, classification systems, clustering approaches, taxonomies, gazetteers, dictionaries, lexical databases, concept maps/spaces, semantic road maps, etc. These schemas enable knowledge structuring and management, knowledge-based data processing and systematic access to knowledge structures in individual collections and digital libraries. Used as interactive information services on the Internet they have an increased potential to support the description, discovery and retrieval of heterogeneous information resources and to contribute to an overall resource discovery infrastructure
  19. Tramullas, J.; Garrido-Picazo, P.; Sánchez-Casabón, A.I.: Use of Wikipedia categories on information retrieval research : a brief review (2020) 0.04
    0.041918732 = product of:
      0.12575619 = sum of:
        0.12575619 = weight(_text_:systematic in 5365) [ClassicSimilarity], result of:
          0.12575619 = score(doc=5365,freq=2.0), product of:
            0.33191046 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.05807226 = queryNorm
            0.3788859 = fieldWeight in 5365, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.046875 = fieldNorm(doc=5365)
      0.33333334 = coord(1/3)
    
    Abstract
    Wikipedia categories, a classification scheme built for organizing and describing Wikipedia articles, are being applied in computer science research. This paper adopts a systematic literature review approach in order to identify the different approaches to and uses of Wikipedia categories in information retrieval research. Several types of work are identified, depending on whether they study the category structure itself or use it as a tool for processing and analyzing documentary corpora other than Wikipedia. Information retrieval is identified as one of the major areas of use, in particular the refinement and improvement of search expressions and the construction of textual corpora. However, the set of available works shows that in many cases the research approaches applied and the results obtained can be integrated into a comprehensive and inclusive concept of information retrieval.
  20. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.04
    0.040992957 = product of:
      0.122978866 = sum of:
        0.122978866 = product of:
          0.3689366 = sum of:
            0.3689366 = weight(_text_:3a in 230) [ClassicSimilarity], result of:
              0.3689366 = score(doc=230,freq=2.0), product of:
                0.4923373 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05807226 = queryNorm
                0.7493574 = fieldWeight in 230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0625 = fieldNorm(doc=230)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Source
    https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf

Languages

  • e 230
  • d 90
  • a 3
  • el 2
  • f 1
  • i 1
  • nl 1
  • sp 1

Types

  • a 164
  • i 10
  • s 7
  • m 6
  • r 5
  • p 4
  • x 3
  • b 2
  • n 1