Search (118 results, page 1 of 6)

  • Filter: type_ss:"x"
  1. Praetsch, I.: Die Bereitstellung von digitalen Lehrmaterialien im Content Management System des Fachbereiches Informationswissenschaften an der Fachhochschule Potsdam exemplarisch an der Lehrveranstaltung 'Internet- und Webtechnologie' (2004) 0.07
    0.07409476 = product of:
      0.11114213 = sum of:
        0.077366516 = weight(_text_:management in 4634) [ClassicSimilarity], result of:
          0.077366516 = score(doc=4634,freq=2.0), product of:
            0.17312427 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.051362853 = queryNorm
            0.44688427 = fieldWeight in 4634, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.09375 = fieldNorm(doc=4634)
        0.03377561 = product of:
          0.06755122 = sum of:
            0.06755122 = weight(_text_:system in 4634) [ClassicSimilarity], result of:
              0.06755122 = score(doc=4634,freq=2.0), product of:
                0.16177002 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.051362853 = queryNorm
                0.41757566 = fieldWeight in 4634, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4634)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
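The explain trees on this page follow Lucene's ClassicSimilarity (TF-IDF) scoring: tf = sqrt(termFreq), idf = 1 + ln(maxDocs / (docFreq + 1)), fieldWeight = tf · idf · fieldNorm, and each query clause contributes queryWeight · fieldWeight, with coord factors scaling for the fraction of clauses matched. A minimal sketch that reproduces the numbers for result 1 (the queryNorm and the coord factors 2/3 and 1/2 are read off the explain output above):

```python
import math

QUERY_NORM = 0.051362853  # queryNorm, taken from the explain output above
MAX_DOCS = 44218

def idf(doc_freq: int) -> float:
    # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(MAX_DOCS / (doc_freq + 1))

def term_score(freq: float, doc_freq: int, field_norm: float) -> float:
    query_weight = idf(doc_freq) * QUERY_NORM
    field_weight = math.sqrt(freq) * idf(doc_freq) * field_norm  # tf * idf * fieldNorm
    return query_weight * field_weight

# Result 1: "management" (freq=2, docFreq=4130) and "system" (freq=2, docFreq=5152),
# both with fieldNorm=0.09375; "system" sits in a sub-query scaled by coord(1/2).
management = term_score(2.0, 4130, 0.09375)    # ~0.077366516
system = term_score(2.0, 5152, 0.09375) * 0.5  # ~0.03377561
score = (management + system) * (2.0 / 3.0)    # coord(2/3) -> ~0.07409476
```

The same arithmetic explains every entry below: for instance, the rare indexed token "3a" (docFreq=24) gets idf = 1 + ln(44218/25) ≈ 8.478, which is why results 4, 6 and 7 rank highly despite matching only a single clause.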
  2. Hinz, O.: Begriffsorientierte Bauteilverwaltung : Beispielhafte Umsetzung eines betrieblichen Teilbestandes in das Prototypverwaltungssystem IMS (Item Management System) (1997) 0.07
    0.0698572 = product of:
      0.1047858 = sum of:
        0.07294185 = weight(_text_:management in 1484) [ClassicSimilarity], result of:
          0.07294185 = score(doc=1484,freq=4.0), product of:
            0.17312427 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.051362853 = queryNorm
            0.42132655 = fieldWeight in 1484, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0625 = fieldNorm(doc=1484)
        0.03184395 = product of:
          0.0636879 = sum of:
            0.0636879 = weight(_text_:system in 1484) [ClassicSimilarity], result of:
              0.0636879 = score(doc=1484,freq=4.0), product of:
                0.16177002 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.051362853 = queryNorm
                0.3936941 = fieldWeight in 1484, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1484)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    Abstract
     For industrial companies, reusing parts already known within the company is an important way to avoid costs, so a well-functioning parts management system is a key to achieving this goal. The prototype 'Item Management System' represents a new, language-based approach to parts management, which is easier to maintain with a terminologically controlled vocabulary than with complicated and unwieldy numbering systems. The options for evaluating the software ergonomics of this database are demonstrated by example.
  3. El Jerroudi, F.: Inhaltliche Erschließung in Dokumenten-Management-Systemen, dargestellt am Beispiel der KRAFTWERKSSCHULE e.V (2007) 0.06
    0.05592612 = product of:
      0.08388918 = sum of:
        0.06700137 = weight(_text_:management in 527) [ClassicSimilarity], result of:
          0.06700137 = score(doc=527,freq=6.0), product of:
            0.17312427 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.051362853 = queryNorm
            0.38701317 = fieldWeight in 527, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.046875 = fieldNorm(doc=527)
        0.016887804 = product of:
          0.03377561 = sum of:
            0.03377561 = weight(_text_:system in 527) [ClassicSimilarity], result of:
              0.03377561 = score(doc=527,freq=2.0), product of:
                0.16177002 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.051362853 = queryNorm
                0.20878783 = fieldWeight in 527, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.046875 = fieldNorm(doc=527)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    Abstract
     The options for subject indexing of the training documents of KRAFTWERKSSCHULE e.V. are analysed. The indexing is to be implemented in the document management system ELOprofessional. The thesis discusses the options for verbal and classificatory indexing, the advantages and disadvantages of their use, and their suitability for indexing the KWS training materials. The aim of the thesis is to develop ideas for improving retrieval of the training documents by subject; image documents are of particular importance here. The thesis begins with a chapter on document management systems, with a focus on their retrieval capabilities. An as-is analysis was carried out to examine the current filing and retrieval situation at Kraftwerksschule e.V. The thesis then discusses the use of thesauri and classifications for indexing the training documents, and presents proposals and solution approaches for the verbal and classificatory indexing of the teaching materials.
  4. Verwer, K.: Freiheit und Verantwortung bei Hans Jonas (2011) 0.05
    0.054385222 = product of:
      0.16315566 = sum of:
        0.16315566 = product of:
          0.48946697 = sum of:
            0.48946697 = weight(_text_:3a in 973) [ClassicSimilarity], result of:
              0.48946697 = score(doc=973,freq=2.0), product of:
                0.43545485 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.051362853 = queryNorm
                1.1240361 = fieldWeight in 973, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.09375 = fieldNorm(doc=973)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    Content
     Cf.: http://creativechoice.org/doc/HansJonas.pdf.
  5. Frank, C.: Entwicklung und Umsetzung eines Archivkonzeptes für ein integriertes Akten- und Medienarchiv für den Landesverband Hamburg e.V. des Deutschen Roten Kreuzes (2004) 0.05
    0.0493965 = product of:
      0.07409475 = sum of:
        0.051577676 = weight(_text_:management in 2999) [ClassicSimilarity], result of:
          0.051577676 = score(doc=2999,freq=2.0), product of:
            0.17312427 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.051362853 = queryNorm
            0.29792285 = fieldWeight in 2999, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0625 = fieldNorm(doc=2999)
        0.022517072 = product of:
          0.045034144 = sum of:
            0.045034144 = weight(_text_:system in 2999) [ClassicSimilarity], result of:
              0.045034144 = score(doc=2999,freq=2.0), product of:
                0.16177002 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.051362853 = queryNorm
                0.27838376 = fieldWeight in 2999, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2999)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    Abstract
     A concept is developed for the computer-supported recording of different media types using a database management system (DBMS): after the motivation, objectives and approach, the current state of the archive and the target concept are described, followed by the selection and setup of the DBMS, evidence that the requirements are met, and measures for organising the archive.
  6. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.05
    0.049257055 = product of:
      0.07388558 = sum of:
        0.054385222 = product of:
          0.16315566 = sum of:
            0.16315566 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
              0.16315566 = score(doc=701,freq=2.0), product of:
                0.43545485 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.051362853 = queryNorm
                0.3746787 = fieldWeight in 701, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.33333334 = coord(1/3)
        0.019500358 = product of:
          0.039000716 = sum of:
            0.039000716 = weight(_text_:system in 701) [ClassicSimilarity], result of:
              0.039000716 = score(doc=701,freq=6.0), product of:
                0.16177002 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.051362853 = queryNorm
                0.24108742 = fieldWeight in 701, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    Abstract
     With the explosion of possibilities for ubiquitous content production, the information overload problem has reached a level of complexity that can no longer be managed by traditional modelling approaches. Because of their purely syntactic nature, traditional information retrieval approaches have not succeeded in treating content itself (i.e. its meaning, not its representation), which makes the results of a retrieval process of very little use for the user's task at hand. In the last ten years, ontologies have evolved from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, so that the retrieval process can be driven by the meaning of the content. However, the ambiguous nature of the retrieval process, in which a user who is unfamiliar with the underlying repository and/or query syntax merely approximates his information need in a query, makes it necessary to involve the user more actively in the retrieval process in order to close the gap between the meaning of the content and the meaning of the user's query (i.e. his information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with the user in order to conceptually interpret the meaning of his query, while the underlying domain ontology drives the conceptualisation process. In this way the retrieval process evolves from a query evaluation process into a highly interactive cooperation between the user and the retrieval system, in which the system tries to anticipate the user's information need and to deliver the relevant content proactively.
 Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure strongly influenced by the user's preferences. This cooperation process is realized as the so-called Librarian Agent Query Refinement Process. To clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics emerges automatically from the results. Our evaluation studies have shown that the ability to conceptualize a user's information need in the right manner, and to interpret the retrieval results accordingly, is a key issue in realizing much more meaningful information retrieval systems.
    Content
     Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  7. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.04
    0.043762505 = product of:
      0.06564376 = sum of:
        0.054385222 = product of:
          0.16315566 = sum of:
            0.16315566 = weight(_text_:3a in 5820) [ClassicSimilarity], result of:
              0.16315566 = score(doc=5820,freq=2.0), product of:
                0.43545485 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.051362853 = queryNorm
                0.3746787 = fieldWeight in 5820, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5820)
          0.33333334 = coord(1/3)
        0.011258536 = product of:
          0.022517072 = sum of:
            0.022517072 = weight(_text_:system in 5820) [ClassicSimilarity], result of:
              0.022517072 = score(doc=5820,freq=2.0), product of:
                0.16177002 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.051362853 = queryNorm
                0.13919188 = fieldWeight in 5820, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5820)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    Abstract
     This proposal includes plans to improve the quality of relevant entities with a co-learning framework that learns from both entity labels and document labels. We also plan to develop a hybrid ranking system that combines word-based and entity-based representations together with their uncertainties. Finally, we plan to enrich the text representations with connections between entities: we propose several ways to infer entity graph representations for texts and to rank documents using these structure representations. This dissertation overcomes the limitations of word-based representations with external, carefully curated information from knowledge bases. We believe this thesis research is a solid start towards a new generation of intelligent, semantic, and structured information retrieval.
    Content
     Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  8. Scherer Auberson, K.: Counteracting concept drift in natural language classifiers : proposal for an automated method (2018) 0.04
    0.03704738 = product of:
      0.055571064 = sum of:
        0.038683258 = weight(_text_:management in 2849) [ClassicSimilarity], result of:
          0.038683258 = score(doc=2849,freq=2.0), product of:
            0.17312427 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.051362853 = queryNorm
            0.22344214 = fieldWeight in 2849, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.046875 = fieldNorm(doc=2849)
        0.016887804 = product of:
          0.03377561 = sum of:
            0.03377561 = weight(_text_:system in 2849) [ClassicSimilarity], result of:
              0.03377561 = score(doc=2849,freq=2.0), product of:
                0.16177002 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.051362853 = queryNorm
                0.20878783 = fieldWeight in 2849, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2849)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    Abstract
     Natural language classifiers increasingly help companies cope with the flood of text data. But once trained, these classifiers lose their usefulness over time: they remain static while the underlying domain of the text data changes, and their accuracy degrades due to a phenomenon known as concept drift. The question is whether concept drift can be reliably detected from a classifier's output and, if so, whether it can be counteracted by retraining the classifier. A proof-of-concept system implementation is presented in which the classifier's confidence measure is used to detect concept drift. The classifier is then retrained iteratively by selecting samples with low confidence, correcting them, and using them in the training set of the next iteration. The classifier's performance is measured over time, and the behaviour of the system is observed. Based on this, recommendations are given that may prove useful when implementing such systems.
    Content
     This publication originated as part of a thesis for the Master of Science FHO in Business Administration, Major Information and Data Management.
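The retraining loop described in the abstract of item 8 (confidence as a drift signal; low-confidence samples corrected and fed back into training) can be sketched as follows. This is an illustrative reconstruction, not the thesis implementation: `classify` and `correct` are hypothetical stand-ins for the actual classifier and the manual correction step, and the 0.6 confidence threshold is an assumed value.

```python
from typing import Callable, List, Tuple

Sample = Tuple[str, str]  # (text, label)

def retrain_iteration(
    training_set: List[Sample],
    unlabeled: List[str],
    classify: Callable[[str], Tuple[str, float]],  # returns (label, confidence)
    correct: Callable[[str], str],                 # manual correction (hypothetical)
    threshold: float = 0.6,                        # assumed confidence cut-off
) -> Tuple[List[Sample], bool]:
    """One iteration: flag drift from mean confidence, harvest low-confidence
    samples, correct their labels, and extend the training set for the next round."""
    confidences: List[float] = []
    corrected: List[Sample] = []
    for text in unlabeled:
        _, confidence = classify(text)
        confidences.append(confidence)
        if confidence < threshold:
            corrected.append((text, correct(text)))
    # Drift is suspected when mean confidence over the batch falls below the cut-off
    drift = bool(confidences) and sum(confidences) / len(confidences) < threshold
    if drift:
        training_set = training_set + corrected
    return training_set, drift
```

Over successive iterations one would also measure classifier accuracy on a held-out set, as the thesis does, to check whether the retraining actually counteracts the drift.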
  9. Jockel, S.: Entwicklung eines Kalkulationsmodells für Informationsdienstleistungen (1990) 0.03
    0.03438512 = product of:
      0.10315535 = sum of:
        0.10315535 = weight(_text_:management in 2751) [ClassicSimilarity], result of:
          0.10315535 = score(doc=2751,freq=2.0), product of:
            0.17312427 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.051362853 = queryNorm
            0.5958457 = fieldWeight in 2751, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.125 = fieldNorm(doc=2751)
      0.33333334 = coord(1/3)
    Theme
    Information Resources Management
  10. Ruhland, B.: Entwurf und Implementierung eines Relationalen Information-Retrieval-Systems über Datenbank-Management-Systeme (1991) 0.03
    0.03438512 = product of:
      0.10315535 = sum of:
        0.10315535 = weight(_text_:management in 2760) [ClassicSimilarity], result of:
          0.10315535 = score(doc=2760,freq=2.0), product of:
            0.17312427 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.051362853 = queryNorm
            0.5958457 = fieldWeight in 2760, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.125 = fieldNorm(doc=2760)
      0.33333334 = coord(1/3)
  11. Holschbach, K.: Vorgehensweise bei der Einführung eines Dokumenten-Management-Systems : eine projektbegleitende Untersuchung (1995) 0.03
    0.03438512 = product of:
      0.10315535 = sum of:
        0.10315535 = weight(_text_:management in 5954) [ClassicSimilarity], result of:
          0.10315535 = score(doc=5954,freq=2.0), product of:
            0.17312427 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.051362853 = queryNorm
            0.5958457 = fieldWeight in 5954, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.125 = fieldNorm(doc=5954)
      0.33333334 = coord(1/3)
  12. Wagner, M.: Vorgehensweise und Problematik der Informationsbeschaffung und -auswertung bei der Erstellung einer Konkurrenzanalyse (1995) 0.03
    0.03438512 = product of:
      0.10315535 = sum of:
        0.10315535 = weight(_text_:management in 5966) [ClassicSimilarity], result of:
          0.10315535 = score(doc=5966,freq=2.0), product of:
            0.17312427 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.051362853 = queryNorm
            0.5958457 = fieldWeight in 5966, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.125 = fieldNorm(doc=5966)
      0.33333334 = coord(1/3)
    Theme
    Information Resources Management
  13. Buß, M.: Unternehmenssprache in internationalen Unternehmen : Probleme des Informationstransfers in der internen Kommunikation (2005) 0.03
    0.033088963 = product of:
      0.049633443 = sum of:
        0.032236047 = weight(_text_:management in 1482) [ClassicSimilarity], result of:
          0.032236047 = score(doc=1482,freq=2.0), product of:
            0.17312427 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.051362853 = queryNorm
            0.18620178 = fieldWeight in 1482, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1482)
        0.017397394 = product of:
          0.03479479 = sum of:
            0.03479479 = weight(_text_:22 in 1482) [ClassicSimilarity], result of:
              0.03479479 = score(doc=1482,freq=2.0), product of:
                0.17986396 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051362853 = queryNorm
                0.19345059 = fieldWeight in 1482, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1482)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    Date
    22. 5.2005 18:25:26
    Theme
    Information Resources Management
  14. Ziemba, L.: Information retrieval with concept discovery in digital collections for agriculture and natural resources (2011) 0.03
    0.03181964 = product of:
      0.04772946 = sum of:
        0.036470924 = weight(_text_:management in 4728) [ClassicSimilarity], result of:
          0.036470924 = score(doc=4728,freq=4.0), product of:
            0.17312427 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.051362853 = queryNorm
            0.21066327 = fieldWeight in 4728, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.03125 = fieldNorm(doc=4728)
        0.011258536 = product of:
          0.022517072 = sum of:
            0.022517072 = weight(_text_:system in 4728) [ClassicSimilarity], result of:
              0.022517072 = score(doc=4728,freq=2.0), product of:
                0.16177002 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.051362853 = queryNorm
                0.13919188 = fieldWeight in 4728, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4728)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    Abstract
     The amount and complexity of information available in a digital form is already huge, and new information is being produced every day. Retrieving information relevant to a particular need becomes a significant issue. This work utilizes knowledge organization systems (KOS), such as thesauri and ontologies, and applies information extraction (IE) and computational linguistics (CL) techniques to organize, manage and retrieve information stored in digital collections in the agricultural domain. Two real-world applications of the approach have been developed and are available and actively used by the public. An ontology is used to manage the Water Conservation Digital Library, holding a dynamic collection of various types of digital resources in the domain of urban water conservation in Florida, USA. The ontology-based back-end powers a fully operational web interface, available at http://library.conservefloridawater.org. The system has demonstrated numerous benefits of the ontology application, including accurate retrieval of resources and information sharing and reuse, and has proved to effectively facilitate information management. The major difficulty encountered with the approach is that the large and dynamic number of concepts makes it difficult to keep the ontology consistent and to accurately catalog resources manually. To address these issues, a combination of IE and CL techniques, such as the Vector Space Model and probabilistic parsing, together with the Agricultural Thesaurus, was adapted to automatically extract the concepts important for each of the texts in the Best Management Practices (BMP) Publication Library--a collection of documents in the domain of agricultural BMPs in Florida available at http://lyra.ifas.ufl.edu/LIB. A new approach to domain-specific concept discovery using an Internet search engine was developed. Initial evaluation of the results indicates significant improvement in the precision of information extraction.
 The approach presented in this work focuses on problems unique to the agriculture and natural resources domain, such as domain-specific concepts and vocabularies, but should be applicable to any collection of texts in digital format. It may be of interest to anyone who needs to effectively manage a collection of digital resources.
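The concept-extraction step described in the abstract above (matching thesaurus concept labels against a text with a Vector Space Model) can be sketched as below. This is a bag-of-words illustration under assumed simplifications: plain term counts rather than the thesis's full pipeline, and no probabilistic parsing or search-engine-based concept discovery; all names are illustrative.

```python
import math
import re
from collections import Counter
from typing import List

def vectorize(text: str) -> Counter:
    # Bag-of-words term vector over lowercased alphabetic tokens
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def extract_concepts(document: str, thesaurus_labels: List[str], top_k: int = 3) -> List[str]:
    """Rank thesaurus concept labels by vector-space similarity to the document."""
    doc_vec = vectorize(document)
    scored = sorted(
        ((label, cosine(doc_vec, vectorize(label))) for label in thesaurus_labels),
        key=lambda pair: pair[1],
        reverse=True,
    )
    return [label for label, score in scored[:top_k] if score > 0]
```

In a real pipeline the labels would come from a controlled vocabulary such as the Agricultural Thesaurus mentioned above, and the extracted concepts would be used to catalog each resource automatically.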
  15. Slavic-Overfield, A.: Classification management and use in a networked environment : the case of the Universal Decimal Classification (2005) 0.03
    0.03181964 = product of:
      0.04772946 = sum of:
        0.036470924 = weight(_text_:management in 2191) [ClassicSimilarity], result of:
          0.036470924 = score(doc=2191,freq=4.0), product of:
            0.17312427 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.051362853 = queryNorm
            0.21066327 = fieldWeight in 2191, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.03125 = fieldNorm(doc=2191)
        0.011258536 = product of:
          0.022517072 = sum of:
            0.022517072 = weight(_text_:system in 2191) [ClassicSimilarity], result of:
              0.022517072 = score(doc=2191,freq=2.0), product of:
                0.16177002 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.051362853 = queryNorm
                0.13919188 = fieldWeight in 2191, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2191)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    Abstract
     In the Internet information space, advanced information retrieval (IR) methods and automatic text processing are used in conjunction with traditional knowledge organization systems (KOS). New information technology provides a platform for better KOS publishing, exploitation and sharing, both for human and machine use. Networked KOS services are now being planned and developed as powerful tools for resource discovery. They will enable automatic contextualisation, interpretation and query matching to different indexing languages. The Semantic Web promises to be an environment in which the quality of semantic relationships in bibliographic classification systems can be fully exploited. Their use in the networked environment is, however, limited by the fact that they are not prepared or made available for advanced machine processing. The UDC was chosen for this research because of its widespread use and its long-term presence in online information retrieval systems. It was also the first system to be used for the automatic classification of Internet resources, and the first to be made available as a classification tool on the Web. The objective of this research is to establish the advantages of using UDC for information retrieval in a networked environment, to highlight the problems of automation and classification exchange, and to offer possible solutions. The first research question was: is there enough evidence of the use of classification on the Internet to justify further development with this particular environment in mind? The second question is: what are the automation requirements for the full exploitation of UDC and its exchange? The third question is: which areas are in need of improvement, and what specific recommendations can be made for implementing the UDC in a networked environment? A summary of changes required in the management and development of the UDC to facilitate its full adaptation for future use is drawn from this analysis.
  16. Castellanos Ardila, J.P.: Investigation of an OSLC-domain targeting ISO 26262 : focus on the left side of the software V-model (2016) 0.03
    Abstract
    Industries have adopted standardized sets of practices for developing their products. In the automotive domain, the provision of safety-compliant systems is guided by ISO 26262, a standard that specifies a set of requirements and recommendations for developing automotive safety-critical systems. To comply with ISO 26262, the safety lifecycle proposed by the standard must be included in the development process of a vehicle. In addition, a safety case showing that the system is acceptably safe has to be provided. The provision of a safety case implies the execution of a precise documentation process, which ensures that the work products are available and traceable. Further, documentation management is defined in the standard as a mandatory activity, and guidelines for its elaboration are prescribed. A well-documented safety lifecycle thus provides the necessary inputs for generating an ISO 26262-compliant safety case. The OSLC (Open Services for Lifecycle Collaboration) standard and the maturing stack of Semantic Web technologies represent a promising integration platform for enabling semantic interoperability between the tools involved in the safety lifecycle. Tools for requirements, architecture and development management, among others, are expected to interact and share data with the help of domain specifications created in OSLC. This thesis proposes the creation of an OSLC tool-chain infrastructure for sharing safety-related information, in which fragments of safety information can be generated. The steps carried out during the elaboration of this master's thesis consist of the identification, representation, and shaping of the RDF resources needed for the creation of a safety case. The focus of the thesis is limited to a small portion of the left-hand side of the ISO 26262 V-model, specifically Part 6, Clause 8 of the standard: Software unit design and implementation. Although only a restricted portion of the standard is used in this thesis, the findings can be extended to other parts and the conclusions can be generalized. This master's thesis is considered one of the first steps towards an OSLC-based and ISO 26262-compliant methodological approach for representing and shaping the work products resulting from the execution of the safety lifecycle, i.e. the documentation required to assemble an ISO-compliant safety case.
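    The traceability that the documentation process demands, each work product linked to what it satisfies and what verifies it, can be sketched as plain RDF-style triples. A hedged illustration in Python; the vocabulary terms (ex:SoftwareUnitDesign, ex:satisfies, ex:verifiedBy) are invented here and are not taken from any published OSLC domain specification:

```python
# Shape one ISO 26262 Part 6, Clause 8 work product (a software-unit design)
# as subject/predicate/object triples linking it to the requirement it
# satisfies and the test case that verifies it. All URIs are hypothetical.
def work_product(unit_id, req_id, test_id):
    """Return the traceability links for one software-unit design."""
    subject = f"http://example.org/design/{unit_id}"
    return [
        (subject, "rdf:type", "ex:SoftwareUnitDesign"),
        (subject, "ex:satisfies", f"http://example.org/req/{req_id}"),
        (subject, "ex:verifiedBy", f"http://example.org/test/{test_id}"),
    ]

for s, p, o in work_product("unit42", "SR-101", "TC-7"):
    print(s, p, o)
```

    Once every tool in the chain emits resources in a shared shape like this, assembling a safety-case fragment becomes a query over the combined graph rather than a manual document hunt.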
  17. García Barrios, V.M.: Informationsaufbereitung und Wissensorganisation in transnationalen Konzernen : Konzeption eines Informationssystems für große und geographisch verteilte Unternehmen mit dem Hyperwave Information System (2002) 0.03
    Theme
    Information Resources Management
  18. Habermann, K.: Wissensrepräsentation im Rahmen von Wissensmanagement (1999) 0.03
    Theme
    Information Resources Management
  19. Baier Benninger, P.: Model requirements for the management of electronic records (MoReq2) : Anleitung zur Umsetzung (2011) 0.03
    Abstract
    Faced with a growing mountain of digital information, many organisations, administrations and even smaller companies are busy ordering and structuring their filing systems. Most organisations have a concept of document control in place. Records management goes further in two respects: it puts the context and provenance of documents at the centre, beyond their day-to-day business use, and it prescribes rules for dealing with unused or inactive documents. With the "Model Requirements for the Management of Electronic Records" (MoReq), the European Commission created a standard that covers all core areas of records management and thus the entire lifecycle of documents: creation, use, archiving and disposal. This "Anleitung zur Umsetzung" (implementation guide) summarises the extensive requirements list of MoReq2 (August 2008) and supplements it with explanatory sections, with the aim of serving as a practical instrument for introducing a records management system.
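    The two points the guide emphasises, context preserved beyond day-to-day business and an explicit rule for what happens to inactive documents, can be sketched as a toy record model. A deliberate simplification in Python, not the MoReq2 specification itself; all field names are invented:

```python
# A toy record that keeps its business context and carries an explicit
# retention/disposition rule, echoing (not implementing) MoReq2's approach.
from dataclasses import dataclass
from datetime import date

@dataclass
class Record:
    title: str
    business_context: str   # provenance kept beyond day-to-day use
    closed_on: date         # date the record left active use
    retention_years: int
    disposition: str        # e.g. "destroy" or "transfer to archive"

    def due_for_disposition(self, today: date) -> bool:
        """True once the retention period after closure has elapsed."""
        due = self.closed_on.replace(year=self.closed_on.year + self.retention_years)
        return today >= due

rec = Record("Invoice 2011/042", "procurement", date(2011, 3, 1), 10, "destroy")
```

    The point of the sketch is that disposition is a property of the record itself, declared at capture time, rather than an ad hoc decision made years later.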
  20. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.03
    Abstract
    In this thesis we propose three new word association measures for multi-word term extraction. We combine these association measures with the LocalMaxs algorithm in our extraction model and compare the results of different multi-word term extraction methods. Our approach is language- and domain-independent and requires no training data. It can be applied to tasks such as text summarization, information retrieval, and document classification. We further explore the potential of multi-word terms as an effective representation for general web-page summarization. We extract multi-word terms from human-written summaries in a large collection of web pages, and generate summaries by aligning document words with these multi-word terms. Our system applies machine translation technology to learn the alignment from a training set, and focuses on selecting high-quality multi-word terms from human-written summaries to generate suitable results for web-page summarization.
    Date
    10. 1.2013 19:22:47
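    One widely used "glue" measure from the LocalMaxs literature is Symmetric Conditional Probability (SCP); the three new measures proposed in the thesis itself are not reproduced here. A minimal sketch in Python for adjacent word pairs:

```python
# Score each adjacent word pair with SCP(x, y) = p(x, y)^2 / (p(x) * p(y)).
# Higher scores mean the two words co-occur more than their individual
# frequencies would predict, marking candidate multi-word terms.
from collections import Counter

def scp_scores(tokens):
    """Return an SCP score for every adjacent word pair in tokens."""
    n = len(tokens)
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    return {
        (x, y): (c / (n - 1)) ** 2 / ((unigrams[x] / n) * (unigrams[y] / n))
        for (x, y), c in bigrams.items()
    }

tokens = "information retrieval and information retrieval systems".split()
scores = scp_scores(tokens)
```

    LocalMaxs would then keep only those n-grams whose glue is a local maximum relative to their contained and containing n-grams; the measure is what varies between extraction models.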
