Search (12 results, page 1 of 1)

  • language_ss:"e"
  • type_ss:"x"
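Result pages like this one, with active facet filters and per-document score explanations, are what a Solr-style search backend produces when debug output is requested. A minimal sketch of such a request follows; the host, collection name, and exact query terms are assumptions reconstructed from the filters above and the explain trees below, not taken from this page:

```python
from urllib.parse import urlencode

# Hypothetical endpoint; host and collection name are assumptions.
base = "http://localhost:8983/solr/documents/select"

params = {
    # Query terms reconstructed from the explain output (weight(_text_:2f ...)).
    "q": "_text_:2f _text_:3a _text_:22",
    # The two active facet filters shown at the top of the page.
    "fq": ['language_ss:"e"', 'type_ss:"x"'],
    # debugQuery=on makes Solr attach the per-document score explanations.
    "debugQuery": "on",
    "rows": 12,
}

url = base + "?" + urlencode(params, doseq=True)
print(url)
```

Each `fq` entry narrows the result set without affecting ranking; the explain trees in the results are what `debugQuery` adds.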
  1. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.43
    0.4329203 = product of:
      0.7936872 = sum of:
        0.15607467 = weight(_text_:2f in 563) [ClassicSimilarity], result of:
          0.15607467 = score(doc=563,freq=2.0), product of:
            0.27770403 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0327558 = queryNorm
            0.56201804 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.0133138755 = product of:
          0.026627751 = sum of:
            0.026627751 = weight(_text_:22 in 563) [ClassicSimilarity], result of:
              0.026627751 = score(doc=563,freq=2.0), product of:
                0.11470523 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0327558 = queryNorm
                0.23214069 = fieldWeight in 563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=563)
          0.5 = coord(1/2)
      0.54545456 = coord(6/11)
    
    Content
     A thesis presented to the University of Guelph in partial fulfilment of the requirements for the degree of Master of Science in Computer Science. Cf.: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10. 1.2013 19:22:47
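The explain tree above can be checked by hand: ClassicSimilarity scores a clause as queryWeight × fieldWeight, with tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)). A sketch reproducing the numbers for doc 563; the constant 0.026627751 is the "22" clause's score taken straight from the tree:

```python
import math

# Inputs copied from the explain tree for doc 563.
freq, doc_freq, max_docs = 2.0, 24, 44218
query_norm, field_norm = 0.0327558, 0.046875

tf = math.sqrt(freq)                               # 1.4142135
idf = 1.0 + math.log(max_docs / (doc_freq + 1))    # 8.478011
query_weight = idf * query_norm                    # 0.27770403
field_weight = tf * idf * field_norm               # 0.56201804
clause = query_weight * field_weight               # 0.15607467

# Whole-document score: five identical "2f" clauses, plus the "22"
# clause damped by its inner coord(1/2), all scaled by coord(6/11).
total = (5 * clause + 0.026627751 * 0.5) * (6 / 11)  # 0.4329203
```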
  2. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.42
    0.42023253 = product of:
      0.7704263 = sum of:
        0.034683265 = product of:
          0.10404979 = sum of:
            0.10404979 = weight(_text_:3a in 5820) [ClassicSimilarity], result of:
              0.10404979 = score(doc=5820,freq=2.0), product of:
                0.27770403 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0327558 = queryNorm
                0.3746787 = fieldWeight in 5820, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5820)
          0.33333334 = coord(1/3)
        0.14714861 = weight(_text_:2f in 5820) [ClassicSimilarity], result of:
          0.14714861 = score(doc=5820,freq=4.0), product of:
            0.27770403 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0327558 = queryNorm
            0.5298757 = fieldWeight in 5820, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=5820)
      0.54545456 = coord(6/11)
    
    Content
     Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
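A note on the odd-looking matched terms "2f" and "3a": the links in these records were indexed in percent-encoded form (%3A for ":", %2F for "/"), and a tokenizer that splits on punctuation turns such a link into tokens like "3a" and "2f", which is what the explain trees show the query matching. A small illustration with Python's urllib; the regex split below is a crude stand-in for the actual analyzer, not a claim about its configuration:

```python
import re
from urllib.parse import unquote

encoded = ("https%3A%2F%2Fwww.cs.cmu.edu%2F~cx%2Fpapers"
           "%2Fknowledge_based_text_representation.pdf")

# Decoding restores the readable link.
decoded = unquote(encoded)

# A naive punctuation split of the *encoded* form yields the
# "2f"/"3a" tokens seen in the explain trees above.
tokens = [t.lower() for t in re.split(r"[^0-9A-Za-z]+", encoded) if t]
```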
  3. Farazi, M.: Faceted lightweight ontologies : a formalization and some experiments (2010) 0.38
    0.37836286 = product of:
      0.6936652 = sum of:
        0.04335408 = product of:
          0.13006224 = sum of:
            0.13006224 = weight(_text_:3a in 4997) [ClassicSimilarity], result of:
              0.13006224 = score(doc=4997,freq=2.0), product of:
                0.27770403 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0327558 = queryNorm
                0.46834838 = fieldWeight in 4997, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4997)
          0.33333334 = coord(1/3)
        0.13006224 = weight(_text_:2f in 4997) [ClassicSimilarity], result of:
          0.13006224 = score(doc=4997,freq=2.0), product of:
            0.27770403 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0327558 = queryNorm
            0.46834838 = fieldWeight in 4997, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4997)
      0.54545456 = coord(6/11)
    
    Content
     PhD dissertation at the International Doctorate School in Information and Communication Technology. Cf.: https://core.ac.uk/download/pdf/150083013.pdf.
  4. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.30
    0.30269033 = product of:
      0.55493224 = sum of:
        0.034683265 = product of:
          0.10404979 = sum of:
            0.10404979 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
              0.10404979 = score(doc=701,freq=2.0), product of:
                0.27770403 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0327558 = queryNorm
                0.3746787 = fieldWeight in 701, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.33333334 = coord(1/3)
        0.10404979 = weight(_text_:2f in 701) [ClassicSimilarity], result of:
          0.10404979 = score(doc=701,freq=2.0), product of:
            0.27770403 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0327558 = queryNorm
            0.3746787 = fieldWeight in 701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
      0.54545456 = coord(6/11)
    
    Content
     Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  5. Gordon, T.J.; Helmer-Hirschberg, O.: Report on a long-range forecasting study (1964) 0.00
    0.0022822623 = product of:
      0.025104886 = sum of:
        0.025104886 = product of:
          0.05020977 = sum of:
            0.05020977 = weight(_text_:22 in 4204) [ClassicSimilarity], result of:
              0.05020977 = score(doc=4204,freq=4.0), product of:
                0.11470523 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0327558 = queryNorm
                0.4377287 = fieldWeight in 4204, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4204)
          0.5 = coord(1/2)
      0.09090909 = coord(1/11)
    
    Date
    22. 6.2018 13:24:08
    22. 6.2018 13:54:52
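The large gap between the top results (about 0.43) and this one (about 0.0023) is mostly the coord factor: ClassicSimilarity multiplies the clause sum by matched-clauses/total-clauses, so a document matching only 1 of 11 top-level clauses keeps just 1/11 of its raw score. A check against the numbers above:

```python
# Values copied from the explain tree for doc 4204.
clause = 0.05020977     # weight(_text_:22), freq = 4.0
inner_coord = 1 / 2     # 1 of 2 clauses matched in the nested boolean
outer_coord = 1 / 11    # 1 of 11 top-level clauses matched

total = clause * inner_coord * outer_coord   # 0.0022822623
```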
  6. Slavic-Overfield, A.: Classification management and use in a networked environment : the case of the Universal Decimal Classification (2005) 0.00
    0.001986664 = product of:
      0.021853304 = sum of:
        0.021853304 = weight(_text_:internet in 2191) [ClassicSimilarity], result of:
          0.021853304 = score(doc=2191,freq=6.0), product of:
            0.09670297 = queryWeight, product of:
              2.9522398 = idf(docFreq=6276, maxDocs=44218)
              0.0327558 = queryNorm
            0.22598378 = fieldWeight in 2191, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.9522398 = idf(docFreq=6276, maxDocs=44218)
              0.03125 = fieldNorm(doc=2191)
      0.09090909 = coord(1/11)
    
    Abstract
     In the Internet information space, advanced information retrieval (IR) methods and automatic text processing are used in conjunction with traditional knowledge organization systems (KOS). New information technology provides a platform for better KOS publishing, exploitation and sharing both for human and machine use. Networked KOS services are now being planned and developed as powerful tools for resource discovery. They will enable automatic contextualisation, interpretation and query matching to different indexing languages. The Semantic Web promises to be an environment in which the quality of semantic relationships in bibliographic classification systems can be fully exploited. Their use in the networked environment is, however, limited by the fact that they are not prepared or made available for advanced machine processing. The UDC was chosen for this research because of its widespread use and its long-term presence in online information retrieval systems. It was also the first system to be used for the automatic classification of Internet resources, and the first to be made available as a classification tool on the Web. The objective of this research is to establish the advantages of using UDC for information retrieval in a networked environment, to highlight the problems of automation and classification exchange, and to offer possible solutions. The first research question was: is there enough evidence of the use of classification on the Internet to justify further development with this particular environment in mind? The second question is: what are the automation requirements for the full exploitation of UDC and its exchange? The third question is: which areas are in need of improvement and what specific recommendations can be made for implementing the UDC in a networked environment? A summary of changes required in the management and development of the UDC to facilitate its full adaptation for future use is drawn from this analysis.
  7. Oberhauser, O.: Card-Image Public Access Catalogues (CIPACs) : a critical consideration of a cost-effective alternative to full retrospective catalogue conversion (2002) 0.00
    0.0019409178 = product of:
      0.021350095 = sum of:
        0.021350095 = weight(_text_:bibliothek in 1703) [ClassicSimilarity], result of:
          0.021350095 = score(doc=1703,freq=2.0), product of:
            0.13447993 = queryWeight, product of:
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.0327558 = queryNorm
            0.15876046 = fieldWeight in 1703, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1703)
      0.09090909 = coord(1/11)
    
    Footnote
     Review in: ABI-Technik 21(2002) H.3, S.292 (E. Pietzsch): "With his thesis, Otto C. Oberhauser has presented an impressive analysis of digitized card catalogues (CIPACs). The work offers a wealth of data and statistics that have not been available before. Librarians who are considering the digitization of catalogues will find in it a unique template for decision-making. After an introductory chapter, Oberhauser first gives an overview of a selection of CIPACs available worldwide and their indexing methods (binary search, partial indexing, search in OCR data), and makes comparative observations on geographic distribution, size, software, navigation, and other properties. He then describes and analyses implementation issues, beginning with the reasons that can lead to digitization: costs, implementation time, improved access, and shelf-space savings. He continues with technical aspects such as scanning and quality control, image standards, OCR, manual post-processing, and server technology. He also addresses the rather obstructive properties of older catalogues, as well as presentation on the Web and integration with existing OPACs. To one important aspect, namely the assessment by the most important target group, the library users, Oberhauser has devoted a field study of his own, whose results he analyses in depth in the final chapter. Appendices on the method of data collection and individual descriptions of many catalogues round off the work. All in all, I can only call this work the most impressive collection of data, statistics, and analyses on the subject of CIPACs that I have encountered so far.
One nicely worked-out aspect deserves particular attention, namely the extensive fragmentation of the software systems in use: at present we can roughly distinguish between turnkey solutions (a contracted firm acts as general contractor and carries out all tasks, from digitization to delivery of the finished application) and split solutions (digitization is commissioned separately from indexing and software development, or these are done in-house). The latter require in-house project management. In-house software development in particular, however, can lead to solutions that are in no way inferior to commercial offerings. It is only a pity that the many individual developments have not yet led to initiatives that, much like public-domain software, aim at an 'optimal', inexpensive, and widely accepted software solution. A few critical remarks should nevertheless not go unmentioned. For example, there is no differentiation between 'guide card' systems, i.e. those that index only every 20th or 50th card, and systems with complete indexing of all card headings, even though this far-reaching design decision leads to considerable cost shifts between catalogue creation and later use. In the statistical evaluation of the field study I would also have liked a finer differentiation by type of CIPAC or by library. For example, more than half of the users surveyed stated that operating the CIPAC was initially hard to understand or that using it was time-consuming. It remains open, however, whether there are differences between the various implementation types."
  8. Munzner, T.: Interactive visualization of large graphs and networks (2000) 0.00
    0.001147001 = product of:
      0.012617011 = sum of:
        0.012617011 = weight(_text_:internet in 4746) [ClassicSimilarity], result of:
          0.012617011 = score(doc=4746,freq=2.0), product of:
            0.09670297 = queryWeight, product of:
              2.9522398 = idf(docFreq=6276, maxDocs=44218)
              0.0327558 = queryNorm
            0.1304718 = fieldWeight in 4746, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.9522398 = idf(docFreq=6276, maxDocs=44218)
              0.03125 = fieldNorm(doc=4746)
      0.09090909 = coord(1/11)
    
    Abstract
    Many real-world domains can be represented as large node-link graphs: backbone Internet routers connect with 70,000 other hosts, mid-sized Web servers handle between 20,000 and 200,000 hyperlinked documents, and dictionaries contain millions of words defined in terms of each other. Computational manipulation of such large graphs is common, but previous tools for graph visualization have been limited to datasets of a few thousand nodes. Visual depictions of graphs and networks are external representations that exploit human visual processing to reduce the cognitive load of many tasks that require understanding of global or local structure. We assert that the two key advantages of computer-based systems for information visualization over traditional paper-based visual exposition are interactivity and scalability. We also argue that designing visualization software by taking the characteristics of a target user's task domain into account leads to systems that are more effective and scale to larger datasets than previous work. This thesis contains a detailed analysis of three specialized systems for the interactive exploration of large graphs, relating the intended tasks to the spatial layout and visual encoding choices. We present two novel algorithms for specialized layout and drawing that use quite different visual metaphors. The H3 system for visualizing the hyperlink structures of web sites scales to datasets of over 100,000 nodes by using a carefully chosen spanning tree as the layout backbone, 3D hyperbolic geometry for a Focus+Context view, and provides a fluid interactive experience through guaranteed frame rate drawing. The Constellation system features a highly specialized 2D layout intended to spatially encode domain-specific information for computational linguists checking the plausibility of a large semantic network created from dictionaries. 
The Planet Multicast system for displaying the tunnel topology of the Internet's multicast backbone provides a literal 3D geographic layout of arcs on a globe to help MBone maintainers find misconfigured long-distance tunnels. Each of these three systems provides a very different view of the graph structure, and we evaluate their efficacy for the intended task. We generalize these findings in our analysis of the importance of interactivity and specialization for graph visualization systems that are effective and scalable.
  9. Ziemba, L.: Information retrieval with concept discovery in digital collections for agriculture and natural resources (2011) 0.00
    0.001147001 = product of:
      0.012617011 = sum of:
        0.012617011 = weight(_text_:internet in 4728) [ClassicSimilarity], result of:
          0.012617011 = score(doc=4728,freq=2.0), product of:
            0.09670297 = queryWeight, product of:
              2.9522398 = idf(docFreq=6276, maxDocs=44218)
              0.0327558 = queryNorm
            0.1304718 = fieldWeight in 4728, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.9522398 = idf(docFreq=6276, maxDocs=44218)
              0.03125 = fieldNorm(doc=4728)
      0.09090909 = coord(1/11)
    
    Abstract
    The amount and complexity of information available in a digital form is already huge and new information is being produced every day. Retrieving information relevant to address a particular need becomes a significant issue. This work utilizes knowledge organization systems (KOS), such as thesauri and ontologies and applies information extraction (IE) and computational linguistics (CL) techniques to organize, manage and retrieve information stored in digital collections in the agricultural domain. Two real world applications of the approach have been developed and are available and actively used by the public. An ontology is used to manage the Water Conservation Digital Library holding a dynamic collection of various types of digital resources in the domain of urban water conservation in Florida, USA. The ontology based back-end powers a fully operational web interface, available at http://library.conservefloridawater.org. The system has demonstrated numerous benefits of the ontology application, including accurate retrieval of resources, information sharing and reuse, and has proved to effectively facilitate information management. The major difficulty encountered with the approach is that large and dynamic number of concepts makes it difficult to keep the ontology consistent and to accurately catalog resources manually. To address the aforementioned issues, a combination of IE and CL techniques, such as Vector Space Model and probabilistic parsing, with the use of Agricultural Thesaurus were adapted to automatically extract concepts important for each of the texts in the Best Management Practices (BMP) Publication Library--a collection of documents in the domain of agricultural BMPs in Florida available at http://lyra.ifas.ufl.edu/LIB. A new approach of domain-specific concept discovery with the use of Internet search engine was developed. Initial evaluation of the results indicates significant improvement in precision of information extraction. 
The approach presented in this work focuses on problems unique to agriculture and natural resources domain, such as domain specific concepts and vocabularies, but should be applicable to any collection of texts in digital format. It may be of potential interest for anyone who needs to effectively manage a collection of digital resources.
  10. Geisriegler, E.: Enriching electronic texts with semantic metadata : a use case for the historical Newspaper Collection ANNO (Austrian Newspapers Online) of the Austrian National Library (2012) 0.00
    0.001008627 = product of:
      0.011094897 = sum of:
        0.011094897 = product of:
          0.022189794 = sum of:
            0.022189794 = weight(_text_:22 in 595) [ClassicSimilarity], result of:
              0.022189794 = score(doc=595,freq=2.0), product of:
                0.11470523 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0327558 = queryNorm
                0.19345059 = fieldWeight in 595, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=595)
          0.5 = coord(1/2)
      0.09090909 = coord(1/11)
    
    Date
    3. 2.2013 18:00:22
  11. Makewita, S.M.: Investigating the generic information-seeking function of organisational decision-makers : perspectives on improving organisational information systems (2002) 0.00
    0.001008627 = product of:
      0.011094897 = sum of:
        0.011094897 = product of:
          0.022189794 = sum of:
            0.022189794 = weight(_text_:22 in 642) [ClassicSimilarity], result of:
              0.022189794 = score(doc=642,freq=2.0), product of:
                0.11470523 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0327558 = queryNorm
                0.19345059 = fieldWeight in 642, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=642)
          0.5 = coord(1/2)
      0.09090909 = coord(1/11)
    
    Date
    22. 7.2022 12:16:58
  12. Kiren, T.: A clustering based indexing technique of modularized ontologies for information retrieval (2017) 0.00
    8.0690155E-4 = product of:
      0.008875917 = sum of:
        0.008875917 = product of:
          0.017751833 = sum of:
            0.017751833 = weight(_text_:22 in 4399) [ClassicSimilarity], result of:
              0.017751833 = score(doc=4399,freq=2.0), product of:
                0.11470523 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0327558 = queryNorm
                0.15476047 = fieldWeight in 4399, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4399)
          0.5 = coord(1/2)
      0.09090909 = coord(1/11)
    
    Date
    20. 1.2015 18:30:22