Search (7253 results, page 1 of 363)

  • Filter: language_ss:"e"
  1. Li, L.; Shang, Y.; Zhang, W.: Improvement of HITS-based algorithms on Web documents 0.18
    0.18344909 = product of:
      0.36689818 = sum of:
        0.062004488 = product of:
          0.18601346 = sum of:
            0.18601346 = weight(_text_:3a in 2514) [ClassicSimilarity], result of:
              0.18601346 = score(doc=2514,freq=2.0), product of:
                0.3309742 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03903913 = queryNorm
                0.56201804 = fieldWeight in 2514, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2514)
          0.33333334 = coord(1/3)
        0.26306275 = weight(_text_:2f in 2514) [ClassicSimilarity], result of:
          0.26306275 = score(doc=2514,freq=4.0), product of:
            0.3309742 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03903913 = queryNorm
            0.7948135 = fieldWeight in 2514, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=2514)
        0.041830957 = product of:
          0.083661914 = sum of:
            0.083661914 = weight(_text_:methods in 2514) [ClassicSimilarity], result of:
              0.083661914 = score(doc=2514,freq=8.0), product of:
                0.15695344 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.03903913 = queryNorm
                0.53303653 = fieldWeight in 2514, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2514)
          0.5 = coord(1/2)
      0.5 = coord(3/6)
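The explain tree above can be reproduced arithmetically: in Lucene's ClassicSimilarity, each term score is queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm. A minimal sketch (not the actual Lucene source; the function names here are illustrative) using the values from the "3a" term in doc 2514:

```python
import math

def idf(doc_freq, max_docs):
    # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    tf = math.sqrt(freq)                      # tf(freq) = sqrt(freq)
    idf_val = idf(doc_freq, max_docs)         # idf(docFreq, maxDocs)
    query_weight = idf_val * query_norm       # queryWeight = idf * queryNorm
    field_weight = tf * idf_val * field_norm  # fieldWeight = tf * idf * fieldNorm
    return query_weight * field_weight        # score = queryWeight * fieldWeight

# Values taken from the first explain tree (term "3a" in doc 2514):
score = term_score(freq=2.0, doc_freq=24, max_docs=44218,
                   query_norm=0.03903913, field_norm=0.046875)
print(score)  # close to the 0.18601346 reported above
```

The outer `coord(…)` factors then scale the sum by the fraction of query clauses that matched.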
    
    Abstract
    In this paper, we present two ways to improve the precision of HITS-based algorithms on Web documents. First, by analyzing the limitations of current HITS-based algorithms, we propose a new weighted HITS-based method that assigns appropriate weights to in-links of root documents. Then, we combine content analysis with HITS-based algorithms and study the effects of four representative relevance scoring methods, VSM, Okapi, TLS, and CDR, using a set of broad topic queries. Our experimental results show that our weighted HITS-based method performs significantly better than Bharat's improved HITS algorithm. When we combine our weighted HITS-based method or Bharat's HITS algorithm with any of the four relevance scoring methods, the combined methods are only marginally better than our weighted HITS-based method. Among the four relevance scoring methods, there is no significant quality difference when they are combined with a HITS-based algorithm.
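The weighted-HITS idea the abstract describes can be sketched as below. This is a hedged sketch, not the authors' implementation: the graph representation, the `weights` mapping, and the default weight of 1.0 are illustrative assumptions; the paper's actual in-link weighting scheme is not reproduced here.

```python
# Sketch of a weighted-HITS iteration: authority/hub updates over an
# in-link graph whose edges may carry weights (plain HITS when all
# weights are 1.0). Illustrative only.

def weighted_hits(in_links, weights, iterations=50):
    """in_links: node -> list of nodes linking to it.
    weights: (src, dst) -> weight of that link (default 1.0)."""
    nodes = set(in_links)
    for preds in in_links.values():
        nodes.update(preds)
    auth = {n: 1.0 for n in nodes}
    hub = {n: 1.0 for n in nodes}
    for _ in range(iterations):
        # authority(p) = sum of w(q, p) * hub(q) over in-links q -> p
        auth = {p: sum(weights.get((q, p), 1.0) * hub[q]
                       for q in in_links.get(p, [])) for p in nodes}
        # hub(q) = sum of w(q, p) * authority(p) over out-links q -> p
        hub = {q: sum(weights.get((q, p), 1.0) * auth[p]
                      for p in nodes if q in in_links.get(p, [])) for q in nodes}
        # L2-normalize both vectors to keep scores bounded
        for vec in (auth, hub):
            norm = sum(v * v for v in vec.values()) ** 0.5 or 1.0
            for k in vec:
                vec[k] /= norm
    return auth, hub
```

On a toy graph where a links to b and c, and b links to c, the iteration ranks c as the strongest authority and a as the strongest hub.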
    Content
    Cf.: http://delab.csd.auth.gr/~dimitris/courses/ir_spring06/page_rank_computing/p527-li.pdf. See also: http://www2002.org/CDROM/refereed/643/.
  2. Mas, S.; Marleau, Y.: Proposition of a faceted classification model to support corporate information organization and digital records management (2009) 0.16
    0.16239655 = product of:
      0.3247931 = sum of:
        0.062004488 = product of:
          0.18601346 = sum of:
            0.18601346 = weight(_text_:3a in 2918) [ClassicSimilarity], result of:
              0.18601346 = score(doc=2918,freq=2.0), product of:
                0.3309742 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03903913 = queryNorm
                0.56201804 = fieldWeight in 2918, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2918)
          0.33333334 = coord(1/3)
        0.07677513 = sum of:
          0.044751476 = weight(_text_:theory in 2918) [ClassicSimilarity], result of:
            0.044751476 = score(doc=2918,freq=2.0), product of:
              0.16234003 = queryWeight, product of:
                4.1583924 = idf(docFreq=1878, maxDocs=44218)
                0.03903913 = queryNorm
              0.27566507 = fieldWeight in 2918, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.1583924 = idf(docFreq=1878, maxDocs=44218)
                0.046875 = fieldNorm(doc=2918)
          0.032023653 = weight(_text_:29 in 2918) [ClassicSimilarity], result of:
            0.032023653 = score(doc=2918,freq=2.0), product of:
              0.13732746 = queryWeight, product of:
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.03903913 = queryNorm
              0.23319192 = fieldWeight in 2918, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.046875 = fieldNorm(doc=2918)
        0.18601346 = weight(_text_:2f in 2918) [ClassicSimilarity], result of:
          0.18601346 = score(doc=2918,freq=2.0), product of:
            0.3309742 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03903913 = queryNorm
            0.56201804 = fieldWeight in 2918, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=2918)
      0.5 = coord(3/6)
    
    Abstract
    The employees of an organization often use a personal hierarchical classification scheme to organize digital documents that are stored on their own workstations. As this may make it hard for other employees to retrieve these documents, there is a risk that the organization will lose track of needed documentation. Furthermore, the inherent boundaries of such a hierarchical structure require making arbitrary decisions about which specific criteria the classification will be based on (for instance, the administrative activity or the document type, although a document can have several attributes and require classification in several classes). A faceted classification model to support corporate information organization is proposed. Partially based on Ranganathan's facet theory, this model aims not only to standardize the organization of digital documents, but also to simplify the management of a document throughout its life cycle for both individuals and organizations, while ensuring compliance with regulatory and policy requirements.
    Date
    29. 8.2009 21:15:48
    Footnote
    Cf.: http://ieeexplore.ieee.org/Xplore/login.jsp?reload=true&url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel5%2F4755313%2F4755314%2F04755480.pdf%3Farnumber%3D4755480&authDecision=-203.
  3. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.14
    0.13879846 = product of:
      0.27759692 = sum of:
        0.062004488 = product of:
          0.18601346 = sum of:
            0.18601346 = weight(_text_:3a in 400) [ClassicSimilarity], result of:
              0.18601346 = score(doc=400,freq=2.0), product of:
                0.3309742 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03903913 = queryNorm
                0.56201804 = fieldWeight in 400, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=400)
          0.33333334 = coord(1/3)
        0.18601346 = weight(_text_:2f in 400) [ClassicSimilarity], result of:
          0.18601346 = score(doc=400,freq=2.0), product of:
            0.3309742 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03903913 = queryNorm
            0.56201804 = fieldWeight in 400, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=400)
        0.029578956 = product of:
          0.05915791 = sum of:
            0.05915791 = weight(_text_:methods in 400) [ClassicSimilarity], result of:
              0.05915791 = score(doc=400,freq=4.0), product of:
                0.15695344 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.03903913 = queryNorm
                0.37691376 = fieldWeight in 400, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.046875 = fieldNorm(doc=400)
          0.5 = coord(1/2)
      0.5 = coord(3/6)
    
    Abstract
    On a scientific concept hierarchy, a parent concept may have a few attributes, each of which has multiple values that form a group of child concepts. We call these attributes facets: classification, for example, has facets such as application (e.g., face recognition), model (e.g., svm, knn), and metric (e.g., precision). In this work, we aim at building faceted concept hierarchies from scientific literature. Hierarchy construction methods rely heavily on hypernym detection; however, faceted relations are parent-to-child links, whereas the hypernym relation is a multi-hop, i.e., ancestor-to-descendant, link with the specific facet "type-of". We use information extraction techniques to find synonyms, sibling concepts, and ancestor-descendant relations from a data science corpus, and we propose a hierarchy growth algorithm to infer the parent-child links from the three types of relationships. It resolves conflicts by maintaining the acyclic structure of the hierarchy.
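The conflict-resolution step the abstract mentions, keeping the hierarchy acyclic while parent-child links are added, can be sketched as follows. The function names, the adjacency-dict representation, and the plain DFS reachability check are assumptions for illustration, not the authors' algorithm.

```python
# Minimal sketch of acyclic hierarchy growth: a parent -> child link is
# accepted only if the child cannot already reach the parent, i.e. the
# new edge would not close a cycle.

def reachable(children, start, target):
    """DFS: is target reachable from start via parent->child edges?"""
    stack, seen = [start], set()
    while stack:
        node = stack.pop()
        if node == target:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(children.get(node, []))
    return False

def add_link(children, parent, child):
    """Add parent -> child unless child already reaches parent."""
    if reachable(children, child, parent):
        return False  # rejected: would create a cycle
    children.setdefault(parent, []).append(child)
    return True
```

For example, after accepting classification -> model and model -> svm, a proposed svm -> classification link is rejected because it would close a cycle.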
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf.
    Source
    Graph-Based Methods for Natural Language Processing - proceedings of the Thirteenth Workshop (TextGraphs-13): November 4, 2019, Hong Kong : EMNLP-IJCNLP 2019. Ed.: Dmitry Ustalov
  4. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.14
    0.13778776 = product of:
      0.41336328 = sum of:
        0.10334082 = product of:
          0.31002244 = sum of:
            0.31002244 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.31002244 = score(doc=1826,freq=2.0), product of:
                0.3309742 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03903913 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.33333334 = coord(1/3)
        0.31002244 = weight(_text_:2f in 1826) [ClassicSimilarity], result of:
          0.31002244 = score(doc=1826,freq=2.0), product of:
            0.3309742 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03903913 = queryNorm
            0.93669677 = fieldWeight in 1826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
      0.33333334 = coord(2/6)
    
    Source
    http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
  5. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.13
    0.13194287 = product of:
      0.26388574 = sum of:
        0.062004488 = product of:
          0.18601346 = sum of:
            0.18601346 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
              0.18601346 = score(doc=562,freq=2.0), product of:
                0.3309742 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03903913 = queryNorm
                0.56201804 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.33333334 = coord(1/3)
        0.18601346 = weight(_text_:2f in 562) [ClassicSimilarity], result of:
          0.18601346 = score(doc=562,freq=2.0), product of:
            0.3309742 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03903913 = queryNorm
            0.56201804 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.01586779 = product of:
          0.03173558 = sum of:
            0.03173558 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
              0.03173558 = score(doc=562,freq=2.0), product of:
                0.1367084 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03903913 = queryNorm
                0.23214069 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.5 = coord(1/2)
      0.5 = coord(3/6)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
  6. Malsburg, C. von der: ¬The correlation theory of brain function (1981) 0.12
    0.124188185 = product of:
      0.24837637 = sum of:
        0.05167041 = product of:
          0.15501122 = sum of:
            0.15501122 = weight(_text_:3a in 76) [ClassicSimilarity], result of:
              0.15501122 = score(doc=76,freq=2.0), product of:
                0.3309742 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03903913 = queryNorm
                0.46834838 = fieldWeight in 76, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=76)
          0.33333334 = coord(1/3)
        0.04169473 = product of:
          0.08338946 = sum of:
            0.08338946 = weight(_text_:theory in 76) [ClassicSimilarity], result of:
              0.08338946 = score(doc=76,freq=10.0), product of:
                0.16234003 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.03903913 = queryNorm
                0.5136716 = fieldWeight in 76, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=76)
          0.5 = coord(1/2)
        0.15501122 = weight(_text_:2f in 76) [ClassicSimilarity], result of:
          0.15501122 = score(doc=76,freq=2.0), product of:
            0.3309742 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03903913 = queryNorm
            0.46834838 = fieldWeight in 76, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=76)
      0.5 = coord(3/6)
    
    Abstract
    A summary of brain theory is given so far as it is contained within the framework of Localization Theory. Difficulties of this "conventional theory" are traced back to a specific deficiency: there is no way to express relations between active cells (as for instance their representing parts of the same object). A new theory is proposed to cure this deficiency. It introduces a new kind of dynamical control, termed synaptic modulation, according to which synapses switch between a conducting and a non-conducting state. The dynamics of this variable is controlled on a fast time scale by correlations in the temporal fine structure of cellular signals. Furthermore, conventional synaptic plasticity is replaced by a refined version. Synaptic modulation and plasticity form the basis for short-term and long-term memory, respectively. Signal correlations, shaped by the variable network, express structure and relationships within objects. In particular, the figure-ground problem may be solved in this way. Synaptic modulation introduces flexibility into cerebral networks, which is necessary to solve the invariance problem. Since momentarily useless connections are deactivated, interference between different memory traces can be reduced, and memory capacity increased, in comparison with conventional associative memory.
    Source
    http://cogprints.org/1380/1/vdM_correlation.pdf
  7. Tufte, E.R.: ¬The visual display of quantitative information (1983) 0.12
    0.12020608 = product of:
      0.24041216 = sum of:
        0.018680464 = product of:
          0.03736093 = sum of:
            0.03736093 = weight(_text_:29 in 3734) [ClassicSimilarity], result of:
              0.03736093 = score(doc=3734,freq=2.0), product of:
                0.13732746 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03903913 = queryNorm
                0.27205724 = fieldWeight in 3734, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3734)
          0.5 = coord(1/2)
        0.18722291 = weight(_text_:graphic in 3734) [ClassicSimilarity], result of:
          0.18722291 = score(doc=3734,freq=4.0), product of:
            0.25850594 = queryWeight, product of:
              6.6217136 = idf(docFreq=159, maxDocs=44218)
              0.03903913 = queryNorm
            0.72424996 = fieldWeight in 3734, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.6217136 = idf(docFreq=159, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3734)
        0.034508783 = product of:
          0.06901757 = sum of:
            0.06901757 = weight(_text_:methods in 3734) [ClassicSimilarity], result of:
              0.06901757 = score(doc=3734,freq=4.0), product of:
                0.15695344 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.03903913 = queryNorm
                0.43973273 = fieldWeight in 3734, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3734)
          0.5 = coord(1/2)
      0.5 = coord(3/6)
    
    Date
    19. 7.2017 13:13:29
    LCSH
    Statistics / Graphic methods
    Subject
    Statistics / Graphic methods
  8. Börner, K.: Atlas of knowledge : anyone can map (2015) 0.12
    0.11849709 = product of:
      0.35549125 = sum of:
        0.22694844 = weight(_text_:graphic in 3355) [ClassicSimilarity], result of:
          0.22694844 = score(doc=3355,freq=8.0), product of:
            0.25850594 = queryWeight, product of:
              6.6217136 = idf(docFreq=159, maxDocs=44218)
              0.03903913 = queryNorm
            0.8779235 = fieldWeight in 3355, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              6.6217136 = idf(docFreq=159, maxDocs=44218)
              0.046875 = fieldNorm(doc=3355)
        0.12854281 = sum of:
          0.083661914 = weight(_text_:methods in 3355) [ClassicSimilarity], result of:
            0.083661914 = score(doc=3355,freq=8.0), product of:
              0.15695344 = queryWeight, product of:
                4.0204134 = idf(docFreq=2156, maxDocs=44218)
                0.03903913 = queryNorm
              0.53303653 = fieldWeight in 3355, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                4.0204134 = idf(docFreq=2156, maxDocs=44218)
                0.046875 = fieldNorm(doc=3355)
          0.044880893 = weight(_text_:22 in 3355) [ClassicSimilarity], result of:
            0.044880893 = score(doc=3355,freq=4.0), product of:
              0.1367084 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03903913 = queryNorm
              0.32829654 = fieldWeight in 3355, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=3355)
      0.33333334 = coord(2/6)
    
    Date
    22. 1.2017 16:54:03
    22. 1.2017 17:10:56
    LCSH
    Statistics / Graphic methods
    Science / Study and teaching / Graphic methods
    Subject
    Statistics / Graphic methods
    Science / Study and teaching / Graphic methods
  9. Popper, K.R.: Three worlds : the Tanner lecture on human values. Deliverd at the University of Michigan, April 7, 1978 (1978) 0.11
    0.11023021 = product of:
      0.33069062 = sum of:
        0.082672656 = product of:
          0.24801797 = sum of:
            0.24801797 = weight(_text_:3a in 230) [ClassicSimilarity], result of:
              0.24801797 = score(doc=230,freq=2.0), product of:
                0.3309742 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03903913 = queryNorm
                0.7493574 = fieldWeight in 230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0625 = fieldNorm(doc=230)
          0.33333334 = coord(1/3)
        0.24801797 = weight(_text_:2f in 230) [ClassicSimilarity], result of:
          0.24801797 = score(doc=230,freq=2.0), product of:
            0.3309742 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03903913 = queryNorm
            0.7493574 = fieldWeight in 230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=230)
      0.33333334 = coord(2/6)
    
    Source
    https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
  10. Beebe, C.; Jacob, E.K.: Graphic language documents : structures and functions (1998) 0.10
    0.09729727 = product of:
      0.2918918 = sum of:
        0.029834319 = product of:
          0.059668638 = sum of:
            0.059668638 = weight(_text_:theory in 43) [ClassicSimilarity], result of:
              0.059668638 = score(doc=43,freq=2.0), product of:
                0.16234003 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.03903913 = queryNorm
                0.36755344 = fieldWeight in 43, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.0625 = fieldNorm(doc=43)
          0.5 = coord(1/2)
        0.26205748 = weight(_text_:graphic in 43) [ClassicSimilarity], result of:
          0.26205748 = score(doc=43,freq=6.0), product of:
            0.25850594 = queryWeight, product of:
              6.6217136 = idf(docFreq=159, maxDocs=44218)
              0.03903913 = queryNorm
            1.0137388 = fieldWeight in 43, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              6.6217136 = idf(docFreq=159, maxDocs=44218)
              0.0625 = fieldNorm(doc=43)
      0.33333334 = coord(2/6)
    
    Abstract
    This paper proposes to explore the nature of graphic language documents from the contrasting perspectives of structure and function -- from the perspectives of the document's structure as a spatially-oriented object. Using design principles derived from Gestalt theory and the Bauhaus concept that form (or structure) follows function, the paper addresses the relationship that exists between structure and function in the broad domain of graphic language documents
  11. Vetere, G.; Lenzerini, M.: Models for semantic interoperability in service-oriented architectures (2005) 0.10
    0.09645145 = product of:
      0.28935432 = sum of:
        0.07233858 = product of:
          0.21701573 = sum of:
            0.21701573 = weight(_text_:3a in 306) [ClassicSimilarity], result of:
              0.21701573 = score(doc=306,freq=2.0), product of:
                0.3309742 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03903913 = queryNorm
                0.65568775 = fieldWeight in 306, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=306)
          0.33333334 = coord(1/3)
        0.21701573 = weight(_text_:2f in 306) [ClassicSimilarity], result of:
          0.21701573 = score(doc=306,freq=2.0), product of:
            0.3309742 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03903913 = queryNorm
            0.65568775 = fieldWeight in 306, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0546875 = fieldNorm(doc=306)
      0.33333334 = coord(2/6)
    
    Content
    Cf.: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5386707&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D5386707.
  12. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.09
    0.09253231 = product of:
      0.18506461 = sum of:
        0.041336328 = product of:
          0.12400898 = sum of:
            0.12400898 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
              0.12400898 = score(doc=701,freq=2.0), product of:
                0.3309742 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03903913 = queryNorm
                0.3746787 = fieldWeight in 701, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.33333334 = coord(1/3)
        0.12400898 = weight(_text_:2f in 701) [ClassicSimilarity], result of:
          0.12400898 = score(doc=701,freq=2.0), product of:
            0.3309742 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03903913 = queryNorm
            0.3746787 = fieldWeight in 701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
        0.019719303 = product of:
          0.039438605 = sum of:
            0.039438605 = weight(_text_:methods in 701) [ClassicSimilarity], result of:
              0.039438605 = score(doc=701,freq=4.0), product of:
                0.15695344 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.03903913 = queryNorm
                0.25127584 = fieldWeight in 701, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.5 = coord(1/2)
      0.5 = coord(3/6)
    
    Abstract
    With the explosion of possibilities for ubiquitous content production, the information overload problem has reached a level of complexity that can no longer be managed by traditional modelling approaches. Due to their purely syntactical nature, traditional information retrieval approaches did not succeed in treating content itself (i.e. its meaning, and not its representation). This leads to very low usefulness of the results of a retrieval process for a user's task at hand. In the last ten years, ontologies have emerged from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, such that the retrieval process can be driven by the meaning of the content. However, the very ambiguous nature of the retrieval process, in which a user, due to unfamiliarity with the underlying repository and/or query syntax, only approximates his information need in a query, implies a necessity to include the user in the retrieval process more actively, in order to close the gap between the meaning of the content and the meaning of a user's query (i.e. his information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with a user in order to conceptually interpret the meaning of his query, while the underlying domain ontology drives the conceptualisation process. In that way the retrieval process evolves from a query evaluation process into a highly interactive cooperation between a user and the retrieval system, in which the system tries to anticipate the user's information need and to deliver the relevant content proactively.
    Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure that is strongly influenced by the user's preferences. This cooperation process is realized as the so-called Librarian Agent Query Refinement Process. In order to clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics automatically emerges from the results. Our evaluation studies have shown that the possibilities to conceptualize a user's information need in the right manner and to interpret the retrieval results accordingly are key issues for realizing much more meaningful information retrieval systems.
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  13. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.09
    
    Abstract
    In this thesis we propose three new word association measures for multi-word term extraction. We combine these association measures with LocalMaxs algorithm in our extraction model and compare the results of different multi-word term extraction methods. Our approach is language and domain independent and requires no training data. It can be applied to such tasks as text summarization, information retrieval, and document classification. We further explore the potential of using multi-word terms as an effective representation for general web-page summarization. We extract multi-word terms from human written summaries in a large collection of web-pages, and generate the summaries by aligning document words with these multi-word terms. Our system applies machine translation technology to learn the aligning process from a training set and focuses on selecting high quality multi-word terms from human written summaries to generate suitable results for web-page summarization.
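    The abstract names its association measures only in general terms. As a hedged illustration of the kind of "glue" measure commonly combined with LocalMaxs (the specific measure and corpus below are assumptions, not taken from the thesis), Symmetric Conditional Probability scores a bigram by p(x,y)^2 / (p(x)·p(y)):

    ```python
    from collections import Counter

    def scp(bigram_counts, unigram_counts, x, y):
        """Symmetric Conditional Probability, a standard glue measure
        for multi-word term extraction: p(x,y)^2 / (p(x) * p(y))."""
        n_uni = sum(unigram_counts.values())
        n_bi = sum(bigram_counts.values())
        p_xy = bigram_counts[(x, y)] / n_bi
        p_x = unigram_counts[x] / n_uni
        p_y = unigram_counts[y] / n_uni
        return p_xy ** 2 / (p_x * p_y)

    # Toy corpus: the recurring bigram should score higher than a chance pairing.
    tokens = "multi word term extraction of multi word terms".split()
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    print(scp(bigrams, unigrams, "multi", "word") > scp(bigrams, unigrams, "of", "multi"))  # → True
    ```

    LocalMaxs would then keep an n-gram whose glue is a local maximum relative to the n-grams it contains and the ones that contain it.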
    Content
    A thesis presented to the University of Guelph in partial fulfilment of requirements for the degree of Master of Science in Computer Science. Cf.: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10. 1.2013 19:22:47
  14. Morse, P.M.: Browsing and search theory (1973) 0.09
    
    Date
    22. 5.2005 19:52:29
    Source
    Toward a theory of librarianship. Papers in honor of J.H. Shera. Ed. by H. Rawski
  15. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.08
    
    Source
    https://arxiv.org/abs/2212.06721
  16. Chomsky, N.: Aspects of the theory of syntax (1965) 0.08
    
    Date
    6. 1.1999 10:29:22
  17. Trentin, G.: Graphic tools for knowledge representation and informal problem-based learning in professional online communities (2007) 0.08
    
    Abstract
    The use of graphical representations is very common in information technology and engineering. Although the same tools could be applied effectively in other areas, they are not used there because they are hardly known, or completely unheard of. This article discusses the results of experimentation with graphical approaches to knowledge representation during research, analysis and problem-solving in the health care sector. The experimentation concerned concept mapping and Petri Nets, developed collaboratively online with the aid of the CMapTool and WoPeD graphic applications. Two distinct professional communities were involved in the research, both pertaining to the Local Health Units in Tuscany. One community is made up of head physicians and health care managers, while the other is formed by technical staff from the Department of Nutrition and Food Hygiene. It emerged from the experimentation that concept maps are considered more effective for analyzing the knowledge domain related to the problem to be faced (a description of what it is), whereas Petri Nets are more effective for studying and formalizing its possible solutions (a description of what to do). For the same reason, those involved in the experimentation have proposed the complementary, rather than alternative, use of the two knowledge representation methods as a support for professional problem-solving.
    Date
    28. 2.2008 14:16:29
  18. Hughes, A.V.; Rafferty, P.: Inter-indexer consistency in graphic materials indexing at the National Library of Wales (2011) 0.08
    
    Abstract
    Purpose - This paper seeks to report a project to investigate the degree of inter-indexer consistency in the assignment of controlled vocabulary topical subject index terms to identical graphical images by different indexers at the National Library of Wales (NLW). Design/methodology/approach - An experimental quantitative methodology was devised to investigate inter-indexer consistency. Additionally, the project investigated the relationship, if any, between indexing exhaustivity and consistency, and the relationship, if any, between indexing consistency/exhaustivity and broad category of graphic format. Findings - Inter-indexer consistency in the assignment of topical subject index terms to graphic materials at the NLW was found to be generally low and highly variable. Inter-indexer consistency fell within the range 10.8 per cent to 48.0 per cent. Indexing exhaustivity varied substantially from indexer to indexer, with a mean assignment of 3.8 terms by each indexer to each image, falling within the range 2.5 to 4.7 terms. The broad category of graphic format, whether photographic or non-photographic, was found to have little influence on either inter-indexer consistency or indexing exhaustivity. Indexing exhaustivity and inter-indexer consistency exhibited a tendency toward a direct, positive relationship. The findings are necessarily limited as this is a small-scale study within a single institution. Originality/value - Previous consistency studies have almost exclusively investigated the indexing of print materials, with very little research published for non-print media. With the literature also rich in discussion of the added complexities of subjectively representing the intellectual content of visual media, this study attempts to enrich existing knowledge on indexing consistency for graphic materials and to address a noticeable gap in information theory.
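    The abstract reports consistency percentages without stating the formula used. One common pairwise measure (a hedged assumption here, often attributed to Hooper) divides the terms two indexers share by all distinct terms either of them assigned:

    ```python
    def indexing_consistency(terms_a, terms_b):
        """Hooper-style pairwise consistency: shared terms over all
        distinct terms assigned by either indexer (a Jaccard overlap)."""
        a, b = set(terms_a), set(terms_b)
        if not (a or b):
            return 1.0  # two empty indexings agree trivially
        return len(a & b) / len(a | b)

    # Hypothetical indexings of the same image by two indexers:
    indexer_1 = {"castles", "landscapes", "wales"}
    indexer_2 = {"castles", "rivers", "wales", "bridges"}
    print(indexing_consistency(indexer_1, indexer_2))  # → 0.4
    ```

    On this measure, two shared terms out of five distinct terms give 40 per cent consistency, squarely within the 10.8-48.0 per cent range the study reports.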
  19. Buckland, M.: Document theory (2018) 0.07
    
    Abstract
    Document theory examines the concept of a document and how it can serve with other concepts to understand communication, documentation, information, and knowledge. Knowledge organization itself is in practice based on the arrangement of documents representing concepts and knowledge. The word "document" commonly refers to a text or graphic record, but, in a semiotic perspective, non-graphic objects can also be regarded as signifying and, therefore, as documents. The steady increase in the variety and number of documents since prehistoric times enables the development of communities, the division of labor, and reduction of the constraints of space and time. Documents are related to data, facts, texts, works, information, knowledge, signs, and other documents. Documents have physical (material), cognitive, and social aspects.
  20. Greenberg, J.: Intellectual control of visual archives : a comparison between the Art and Architecture Thesaurus and the Library of Congress Thesaurus for Graphic Materials (1993) 0.07
    
    Abstract
    The following investigation is a comparison between the Art and Architecture Thesaurus (AAT) and the LC Thesaurus for Graphic Materials (LCTGM), two popular sources for providing subject access to visual archives. The analysis begins with a discussion of the nature of visual archives and the application of archival control theory to graphic materials. The major difference observed is that the AAT is a faceted structure geared towards a specialized audience of art and architecture researchers, while LCTGM is similar to LCSH in structure and aims to serve the widespread archival community. The conclusion recognizes the need to understand the differences between subject thesauri and subject heading lists, and the pressing need to investigate and understand intellectual control of visual archives in today's automated environment.
