Search (639 results, page 32 of 32)

  • type_ss:"x"
  1. Krämer, T.: Interoperabilität von Metadatenstandards und Dokumentretrieval (2004) 0.00
    0.0012317881 = product of:
      0.01847682 = sum of:
        0.01847682 = weight(_text_:und in 4096) [ClassicSimilarity], result of:
          0.01847682 = score(doc=4096,freq=2.0), product of:
            0.06287808 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.028369885 = queryNorm
            0.29385152 = fieldWeight in 4096, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.09375 = fieldNorm(doc=4096)
      0.06666667 = coord(1/15)
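    The tree above is Lucene's "explain" output for ClassicSimilarity (classic TF-IDF) scoring. As a rough sketch of how the displayed value comes about, assuming the standard ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), coord = matching clauses / total clauses), the score of hit 1 can be reproduced from the numbers shown:

      import math

      # Components taken from the explain output above (hit 1, term "und" in doc 4096).
      freq = 2.0                # termFreq of "und" in the matched field
      doc_freq = 13101          # documents containing "und"
      max_docs = 44218          # documents in the index
      query_norm = 0.028369885  # query normalisation constant reported by Lucene
      field_norm = 0.09375      # encoded length norm for doc 4096
      coord = 1 / 15            # 1 of 15 query clauses matched

      tf = math.sqrt(freq)                           # 1.4142135
      idf = 1 + math.log(max_docs / (doc_freq + 1))  # ~2.216367
      query_weight = idf * query_norm                # ~0.06287808
      field_weight = tf * idf * field_norm           # ~0.29385152
      score = coord * query_weight * field_weight
      print(score)  # ~0.0012317881, the value displayed for hit 1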
    
  2. Nicoletti, M.: Automatische Indexierung (2001) 0.00
    0.0012317881 = product of:
      0.01847682 = sum of:
        0.01847682 = weight(_text_:und in 4326) [ClassicSimilarity], result of:
          0.01847682 = score(doc=4326,freq=2.0), product of:
            0.06287808 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.028369885 = queryNorm
            0.29385152 = fieldWeight in 4326, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.09375 = fieldNorm(doc=4326)
      0.06666667 = coord(1/15)
    
    Content
    Contents: 1. Task - 2. Identification of multi-word groups - 2.1 Definition - 3. Marking of multi-word groups - 4. Base forms - 5. Term and document frequency / term weighting - 6. Threshold value as control instrument - 7. Inverted index. See: http://www.grin.com/de/e-book/104966/automatische-indexierung.
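    The outline above sketches a classic automatic-indexing pipeline: derive terms, weight them by term and document frequency, keep only those above a threshold, and record the survivors in an inverted index. A minimal illustrative sketch of that idea (not Nicoletti's code; the weighting scheme and the threshold value are assumptions):

      import math
      from collections import Counter, defaultdict

      def build_index(docs, threshold=0.1):
          # Toy indexer: term weighting plus a threshold feeding an inverted index.
          tokenised = {doc_id: text.lower().split() for doc_id, text in docs.items()}
          n_docs = len(docs)
          df = Counter(t for terms in tokenised.values() for t in set(terms))  # document frequency
          inverted = defaultdict(set)
          for doc_id, terms in tokenised.items():
              for term, freq in Counter(terms).items():
                  weight = (freq / len(terms)) * math.log(n_docs / df[term])  # simple tf-idf weight
                  if weight >= threshold:  # threshold as the control instrument
                      inverted[term].add(doc_id)
          return dict(inverted)

      index = build_index({
          "d1": "automatische Indexierung von Titeldaten",
          "d2": "Indexierung und Retrieval von Dokumenten",
      })
      print(index.get("automatische"))  # {'d1'}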
  3. Dietiker, S.: Cognitive Map einer Bibliothek : eine Überprüfung der Methodentauglichkeit im Bereich Bibliothekswissenschaft - am Beispiel der Kantonsbibliothek Graubünden (2016) 0.00
    0.0011476508 = product of:
      0.01721476 = sum of:
        0.01721476 = weight(_text_:und in 4570) [ClassicSimilarity], result of:
          0.01721476 = score(doc=4570,freq=10.0), product of:
            0.06287808 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.028369885 = queryNorm
            0.27378 = fieldWeight in 4570, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4570)
      0.06666667 = coord(1/15)
    
    Abstract
    Cognitive maps are mental maps that every person possesses; they are a reflection of one's environment. Cognitive mapping is a method that makes this mental map visible. The resulting visualisation yields insights into what people do and perceive at a place or in a room. The method comprises various application techniques, which fall into six categories: solving tasks, locating elements, drawing a sketch map, marking zones and areas, describing routes and places, and cognitive interviewing. These can be combined freely in a study. Applying cognitive mapping, together with a simple survey, in the Kantonsbibliothek Graubünden showed that the method is useful for libraries. For future applications, however, the overall effort, the study design, the composition of participants and the evaluation effort should be adjusted.
    Content
    "Das Thema 'Cognitive Map einer Bibliothek' hat mich von Beginn an interessiert. Methoden anwenden, um den Bedürfnissen der Nutzer zu entsprechen, ist für Bibliotheken eine Möglichkeit sich auch in Zukunft als Wissensplatz zu positionieren. Das Spannende an dieser Arbeit war, sich zunächst in den vielen Anwendungsmöglichkeiten der Methode zurechtzufinden, einige davon auszuprobieren und schlussendlich herauszufinden, ob die Methode als sinnvoll für Bibliotheken bezeichnet werden kann."
  4. Lengelsen, H.: Informationsvermittlung in interdisziplinären Fächern : Fachinformation für Materialwissenschaftler in den Naturwissenschaften am Beispiel ausgewählter Internetquellen (2002) 0.00
    0.0010927627 = product of:
      0.01639144 = sum of:
        0.01639144 = product of:
          0.03278288 = sum of:
            0.03278288 = weight(_text_:internet in 858) [ClassicSimilarity], result of:
              0.03278288 = score(doc=858,freq=2.0), product of:
                0.0837547 = queryWeight, product of:
                  2.9522398 = idf(docFreq=6276, maxDocs=44218)
                  0.028369885 = queryNorm
                0.3914154 = fieldWeight in 858, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.9522398 = idf(docFreq=6276, maxDocs=44218)
                  0.09375 = fieldNorm(doc=858)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Theme
    Internet
  5. Griesbaum, J.: Evaluierung hybrider Suchsysteme im WWW (2000) 0.00
    0.0010667598 = product of:
      0.016001396 = sum of:
        0.016001396 = weight(_text_:und in 2482) [ClassicSimilarity], result of:
          0.016001396 = score(doc=2482,freq=6.0), product of:
            0.06287808 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.028369885 = queryNorm
            0.2544829 = fieldWeight in 2482, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=2482)
      0.06666667 = coord(1/15)
    
    Abstract
    The starting point of this work is the problem of searching the World Wide Web. Search engines are indispensable for successful information retrieval, yet they are accused of mediocre performance. The topic of this work is an investigation of the retrieval effectiveness of German-language search engines: it aims to establish what retrieval effectiveness users can currently expect. One approach to increasing the retrieval effectiveness of search engines is to blend editorially compiled and automatically generated results in a single hit list. The goal of this work is to evaluate the retrieval effectiveness of such hybrid systems in comparison with purely robot-based search engines. To this end, the fundamental problem areas in the evaluation of retrieval systems are first analysed. Following the methodology proposed by Tague-Sutcliffe, and taking Web-specific peculiarities into account, a possible procedure is derived. Building on this, the concrete setting for the evaluation is developed and a retrieval effectiveness test is carried out on the search engines Lycos.de, AltaVista.de and QualiGo.
  6. Lepsky, K.: Maschinelle Indexierung von Titelaufnahmen zur Verbesserung der sachlichen Erschließung in Online-Publikumskatalogen (1994) 0.00
    0.0010264901 = product of:
      0.01539735 = sum of:
        0.01539735 = weight(_text_:und in 7064) [ClassicSimilarity], result of:
          0.01539735 = score(doc=7064,freq=2.0), product of:
            0.06287808 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.028369885 = queryNorm
            0.24487628 = fieldWeight in 7064, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.078125 = fieldNorm(doc=7064)
      0.06666667 = coord(1/15)
    
    Series
    Kölner Arbeiten zum Bibliotheks- und Dokumentationswesen; H.18
  7. John, M.: Die Sacherschließung auf der Grundlage der Regensburger Aufstellungssystematiken und der RSWK in einer wissenschaftlichen Spezialbibliothek (1993) 0.00
    0.0010264901 = product of:
      0.01539735 = sum of:
        0.01539735 = weight(_text_:und in 5914) [ClassicSimilarity], result of:
          0.01539735 = score(doc=5914,freq=2.0), product of:
            0.06287808 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.028369885 = queryNorm
            0.24487628 = fieldWeight in 5914, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.078125 = fieldNorm(doc=5914)
      0.06666667 = coord(1/15)
    
  8. Müller, G.: Die Sacherschließung auf der Grundlage der Regensburger Aufstellungssystematiken : dargestellt am Beispiel der Zweigbibliothek der Philosophie, Ästhetik und Kulturwissenschaft der Universitätsbibliothek der Humboldt Universität zu Berlin (1993) 0.00
    0.0010264901 = product of:
      0.01539735 = sum of:
        0.01539735 = weight(_text_:und in 5917) [ClassicSimilarity], result of:
          0.01539735 = score(doc=5917,freq=2.0), product of:
            0.06287808 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.028369885 = queryNorm
            0.24487628 = fieldWeight in 5917, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.078125 = fieldNorm(doc=5917)
      0.06666667 = coord(1/15)
    
  9. Faust, L.: Variationen von Sprache : ihre Bedeutung für unser Ohr und für die Sprachtechnologie (1997) 0.00
    0.0010264901 = product of:
      0.01539735 = sum of:
        0.01539735 = weight(_text_:und in 3452) [ClassicSimilarity], result of:
          0.01539735 = score(doc=3452,freq=2.0), product of:
            0.06287808 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.028369885 = queryNorm
            0.24487628 = fieldWeight in 3452, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.078125 = fieldNorm(doc=3452)
      0.06666667 = coord(1/15)
    
  10. Schröder, T.A.: Parlament und Information : Die Geschichte der Parlamentsdokumentation in Deutschland (1998) 0.00
    0.0010264901 = product of:
      0.01539735 = sum of:
        0.01539735 = weight(_text_:und in 4018) [ClassicSimilarity], result of:
          0.01539735 = score(doc=4018,freq=2.0), product of:
            0.06287808 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.028369885 = queryNorm
            0.24487628 = fieldWeight in 4018, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.078125 = fieldNorm(doc=4018)
      0.06666667 = coord(1/15)
    
  11. Hinz, O.: Begriffsorientierte Bauteilverwaltung : Beispielhafte Umsetzung eines betrieblichen Teilbestandes in das Prototypverwaltungssystem IMS (Item Management System) (1997) 0.00
    8.2119205E-4 = product of:
      0.01231788 = sum of:
        0.01231788 = weight(_text_:und in 1484) [ClassicSimilarity], result of:
          0.01231788 = score(doc=1484,freq=2.0), product of:
            0.06287808 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.028369885 = queryNorm
            0.19590102 = fieldWeight in 1484, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=1484)
      0.06666667 = coord(1/15)
    
    Abstract
    For industrial companies, reusing parts already known within the company is an important way to avoid costs, so a well-functioning parts management system is key to achieving this goal. The prototype 'Item Management System' represents a new, language-based approach to parts management, which is easier to maintain with a terminologically controlled vocabulary than with complicated and unwieldy numbering systems. The options for evaluating the software ergonomics of this database are demonstrated by example.
  12. Jezior, T.: Adaption und Integration von Suchmaschinentechnologie in mor(!)dernen OPACs (2013) 0.00
    8.2119205E-4 = product of:
      0.01231788 = sum of:
        0.01231788 = weight(_text_:und in 2222) [ClassicSimilarity], result of:
          0.01231788 = score(doc=2222,freq=2.0), product of:
            0.06287808 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.028369885 = queryNorm
            0.19590102 = fieldWeight in 2222, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=2222)
      0.06666667 = coord(1/15)
    
  13. Bickmann, H.-J.: Synonymie und Sprachverwendung : Verfahren zur Ermittlung von Synonymenklassen als kontextbeschränkten Äquivalenzklassen (1978) 0.00
    8.2119205E-4 = product of:
      0.01231788 = sum of:
        0.01231788 = weight(_text_:und in 5890) [ClassicSimilarity], result of:
          0.01231788 = score(doc=5890,freq=2.0), product of:
            0.06287808 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.028369885 = queryNorm
            0.19590102 = fieldWeight in 5890, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=5890)
      0.06666667 = coord(1/15)
    
  14. Karlova-Bourbonus, N.: Automatic detection of contradictions in texts (2018) 0.00
    6.799078E-4 = product of:
      0.010198616 = sum of:
        0.010198616 = weight(_text_:des in 5976) [ClassicSimilarity], result of:
          0.010198616 = score(doc=5976,freq=4.0), product of:
            0.07856494 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.028369885 = queryNorm
            0.12981129 = fieldWeight in 5976, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.0234375 = fieldNorm(doc=5976)
      0.06666667 = coord(1/15)
    
    Content
    Inaugural dissertation for the degree of Doctor of Philosophy, Faculty 05 - Language, Literature, Culture, Justus-Liebig-Universität Gießen. See: https://core.ac.uk/download/pdf/196294796.pdf.
  15. Makewita, S.M.: Investigating the generic information-seeking function of organisational decision-makers : perspectives on improving organisational information systems (2002) 0.00
    6.406213E-4 = product of:
      0.009609319 = sum of:
        0.009609319 = product of:
          0.019218639 = sum of:
            0.019218639 = weight(_text_:22 in 642) [ClassicSimilarity], result of:
              0.019218639 = score(doc=642,freq=2.0), product of:
                0.0993465 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028369885 = queryNorm
                0.19345059 = fieldWeight in 642, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=642)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Date
    22. 7.2022 12:16:58
  16. Slavic-Overfield, A.: Classification management and use in a networked environment : the case of the Universal Decimal Classification (2005) 0.00
    6.309069E-4 = product of:
      0.009463603 = sum of:
        0.009463603 = product of:
          0.018927205 = sum of:
            0.018927205 = weight(_text_:internet in 2191) [ClassicSimilarity], result of:
              0.018927205 = score(doc=2191,freq=6.0), product of:
                0.0837547 = queryWeight, product of:
                  2.9522398 = idf(docFreq=6276, maxDocs=44218)
                  0.028369885 = queryNorm
                0.22598378 = fieldWeight in 2191, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  2.9522398 = idf(docFreq=6276, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2191)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
    In the Internet information space, advanced information retrieval (IR) methods and automatic text processing are used in conjunction with traditional knowledge organization systems (KOS). New information technology provides a platform for better KOS publishing, exploitation and sharing, both for human and machine use. Networked KOS services are now being planned and developed as powerful tools for resource discovery. They will enable automatic contextualisation, interpretation and query matching to different indexing languages. The Semantic Web promises to be an environment in which the quality of semantic relationships in bibliographic classification systems can be fully exploited. Their use in the networked environment is, however, limited by the fact that they are not prepared or made available for advanced machine processing. The UDC was chosen for this research because of its widespread use and its long-term presence in online information retrieval systems. It was also the first system to be used for the automatic classification of Internet resources, and the first to be made available as a classification tool on the Web. The objective of this research is to establish the advantages of using UDC for information retrieval in a networked environment, to highlight the problems of automation and classification exchange, and to offer possible solutions. The first research question is: is there enough evidence of the use of classification on the Internet to justify further development with this particular environment in mind? The second: what are the automation requirements for the full exploitation of UDC and its exchange? The third: which areas are in need of improvement, and what specific recommendations can be made for implementing the UDC in a networked environment? A summary of the changes required in the management and development of the UDC to facilitate its full adaptation for future use is drawn from this analysis.
  17. Kiren, T.: A clustering based indexing technique of modularized ontologies for information retrieval (2017) 0.00
    5.1249703E-4 = product of:
      0.0076874555 = sum of:
        0.0076874555 = product of:
          0.015374911 = sum of:
            0.015374911 = weight(_text_:22 in 4399) [ClassicSimilarity], result of:
              0.015374911 = score(doc=4399,freq=2.0), product of:
                0.0993465 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028369885 = queryNorm
                0.15476047 = fieldWeight in 4399, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4399)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Date
    20. 1.2015 18:30:22
  18. Munzner, T.: Interactive visualization of large graphs and networks (2000) 0.00
    3.6425426E-4 = product of:
      0.0054638134 = sum of:
        0.0054638134 = product of:
          0.010927627 = sum of:
            0.010927627 = weight(_text_:internet in 4746) [ClassicSimilarity], result of:
              0.010927627 = score(doc=4746,freq=2.0), product of:
                0.0837547 = queryWeight, product of:
                  2.9522398 = idf(docFreq=6276, maxDocs=44218)
                  0.028369885 = queryNorm
                0.1304718 = fieldWeight in 4746, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.9522398 = idf(docFreq=6276, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4746)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
    Many real-world domains can be represented as large node-link graphs: backbone Internet routers connect with 70,000 other hosts, mid-sized Web servers handle between 20,000 and 200,000 hyperlinked documents, and dictionaries contain millions of words defined in terms of each other. Computational manipulation of such large graphs is common, but previous tools for graph visualization have been limited to datasets of a few thousand nodes. Visual depictions of graphs and networks are external representations that exploit human visual processing to reduce the cognitive load of many tasks that require understanding of global or local structure. We assert that the two key advantages of computer-based systems for information visualization over traditional paper-based visual exposition are interactivity and scalability. We also argue that designing visualization software by taking the characteristics of a target user's task domain into account leads to systems that are more effective and scale to larger datasets than previous work. This thesis contains a detailed analysis of three specialized systems for the interactive exploration of large graphs, relating the intended tasks to the spatial layout and visual encoding choices. We present two novel algorithms for specialized layout and drawing that use quite different visual metaphors. The H3 system for visualizing the hyperlink structures of web sites scales to datasets of over 100,000 nodes by using a carefully chosen spanning tree as the layout backbone, 3D hyperbolic geometry for a Focus+Context view, and provides a fluid interactive experience through guaranteed frame rate drawing. The Constellation system features a highly specialized 2D layout intended to spatially encode domain-specific information for computational linguists checking the plausibility of a large semantic network created from dictionaries. The Planet Multicast system for displaying the tunnel topology of the Internet's multicast backbone provides a literal 3D geographic layout of arcs on a globe to help MBone maintainers find misconfigured long-distance tunnels. Each of these three systems provides a very different view of the graph structure, and we evaluate their efficacy for the intended task. We generalize these findings in our analysis of the importance of interactivity and specialization for graph visualization systems that are effective and scalable.
  19. Ziemba, L.: Information retrieval with concept discovery in digital collections for agriculture and natural resources (2011) 0.00
    3.6425426E-4 = product of:
      0.0054638134 = sum of:
        0.0054638134 = product of:
          0.010927627 = sum of:
            0.010927627 = weight(_text_:internet in 4728) [ClassicSimilarity], result of:
              0.010927627 = score(doc=4728,freq=2.0), product of:
                0.0837547 = queryWeight, product of:
                  2.9522398 = idf(docFreq=6276, maxDocs=44218)
                  0.028369885 = queryNorm
                0.1304718 = fieldWeight in 4728, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.9522398 = idf(docFreq=6276, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4728)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
    The amount and complexity of information available in digital form are already huge, and new information is being produced every day. Retrieving information relevant to a particular need therefore becomes a significant issue. This work utilizes knowledge organization systems (KOS), such as thesauri and ontologies, and applies information extraction (IE) and computational linguistics (CL) techniques to organize, manage and retrieve information stored in digital collections in the agricultural domain. Two real-world applications of the approach have been developed and are available and actively used by the public. An ontology is used to manage the Water Conservation Digital Library, holding a dynamic collection of various types of digital resources in the domain of urban water conservation in Florida, USA. The ontology-based back-end powers a fully operational web interface, available at http://library.conservefloridawater.org. The system has demonstrated numerous benefits of the ontology application, including accurate retrieval of resources, information sharing and reuse, and has proved to effectively facilitate information management. The major difficulty encountered with the approach is that the large and dynamic number of concepts makes it difficult to keep the ontology consistent and to accurately catalog resources manually. To address these issues, a combination of IE and CL techniques, such as the Vector Space Model and probabilistic parsing, together with the Agricultural Thesaurus, was adapted to automatically extract concepts important for each of the texts in the Best Management Practices (BMP) Publication Library--a collection of documents in the domain of agricultural BMPs in Florida available at http://lyra.ifas.ufl.edu/LIB. A new approach to domain-specific concept discovery with the use of an Internet search engine was developed. Initial evaluation of the results indicates significant improvement in the precision of information extraction. The approach presented in this work focuses on problems unique to the agriculture and natural resources domain, such as domain-specific concepts and vocabularies, but should be applicable to any collection of texts in digital format. It may be of potential interest to anyone who needs to effectively manage a collection of digital resources.
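    The Vector Space Model mentioned above can be illustrated briefly: documents and candidate concepts are represented as term-frequency vectors and compared by cosine similarity. A minimal sketch (illustrative only, not the system described in the thesis; the example terms and concept labels are invented):

      import math
      from collections import Counter

      def cosine(a, b):
          # Cosine similarity between two sparse term-frequency vectors.
          dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
          norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
          return dot / norm if norm else 0.0

      doc = Counter("best management practices reduce nutrient runoff from irrigation".split())
      concepts = {
          "water conservation": Counter("water conservation irrigation runoff".split()),
          "soil management": Counter("soil management tillage erosion".split()),
      }
      # Rank candidate concepts for the document by similarity.
      for label in sorted(concepts, key=lambda c: cosine(doc, concepts[c]), reverse=True):
          print(label, round(cosine(doc, concepts[label]), 3))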

Languages

  • d 610
  • e 21
  • f 3
  • a 1
  • hu 1