Search (60 results, page 1 of 3)

  • × year_i:[2000 TO 2010}
  • × theme_ss:"Dokumentenmanagement"
  1. Mas, S.; Marleau, Y.: Proposition of a faceted classification model to support corporate information organization and digital records management (2009) 0.06
    0.060603306 = sum of:
      0.054862697 = product of:
        0.21945079 = sum of:
          0.21945079 = weight(_text_:3a in 2918) [ClassicSimilarity], result of:
            0.21945079 = score(doc=2918,freq=2.0), product of:
              0.39046928 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046056706 = queryNorm
              0.56201804 = fieldWeight in 2918, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=2918)
        0.25 = coord(1/4)
      0.005740611 = product of:
        0.011481222 = sum of:
          0.011481222 = weight(_text_:a in 2918) [ClassicSimilarity], result of:
            0.011481222 = score(doc=2918,freq=16.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.2161963 = fieldWeight in 2918, product of:
                4.0 = tf(freq=16.0), with freq of:
                  16.0 = termFreq=16.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=2918)
        0.5 = coord(1/2)
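    For readers unfamiliar with the notation, the breakdown above is Lucene's "explain" output for its classic TF-IDF similarity: each clause multiplies a query weight (idf · queryNorm) by a field weight (tf · idf · fieldNorm), and the clause results are scaled by a coordination factor and summed. A minimal sketch, assuming the standard ClassicSimilarity formula and simply plugging in the values shown above, reproduces this record's score:

    ```python
    # Minimal sketch: recompute the ClassicSimilarity explain tree shown above.
    # Assumes Lucene's classic TF-IDF formula; all numbers are copied from the output.
    from math import sqrt

    def clause(freq, idf, query_norm, field_norm):
        query_weight = idf * query_norm                # idf * queryNorm
        field_weight = sqrt(freq) * idf * field_norm   # tf * idf * fieldNorm
        return query_weight * field_weight

    # weight(_text_:3a in 2918), scaled by coord(1/4)
    w_3a = clause(freq=2.0, idf=8.478011, query_norm=0.046056706, field_norm=0.046875) * 0.25
    # weight(_text_:a in 2918), scaled by coord(1/2)
    w_a = clause(freq=16.0, idf=1.153047, query_norm=0.046056706, field_norm=0.046875) * 0.5

    print(w_3a + w_a)  # ~0.060603306, the score shown for this record
    ```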
    
    Abstract
    The employees of an organization often use a personal hierarchical classification scheme to organize digital documents that are stored on their own workstations. As this may make it hard for other employees to retrieve these documents, there is a risk that the organization will lose track of needed documentation. Furthermore, the inherent boundaries of such a hierarchical structure require making arbitrary decisions about which specific criteria the classification will be based on (for instance, the administrative activity or the document type, although a document can have several attributes and require classification in several classes). A faceted classification model to support corporate information organization is proposed. Partially based on Ranganathan's facets theory, this model aims not only to standardize the organization of digital documents, but also to simplify the management of a document throughout its life cycle for both individuals and organizations, while ensuring compliance with regulatory and policy requirements.
    Footnote
    Cf.: http://ieeexplore.ieee.org/Xplore/login.jsp?reload=true&url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel5%2F4755313%2F4755314%2F04755480.pdf%3Farnumber%3D4755480&authDecision=-203.
    Type
    a
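    The contrast drawn in the abstract of this entry, between a single hierarchical filing path and a faceted description of the same document, can be illustrated with a small sketch. The facet names below are illustrative assumptions, not those proposed by Mas and Marleau:

    ```python
    # Illustrative sketch only: one record described by several independent facets
    # instead of a single folder path. The facet names are hypothetical examples.
    from dataclasses import dataclass, field

    @dataclass
    class Record:
        title: str
        facets: dict = field(default_factory=dict)

    # Hierarchical scheme: one arbitrary criterion is fixed at each level.
    hierarchical_path = "Administration/Contracts/2009/draft_v3.doc"

    # Faceted scheme: each attribute of the document is recorded independently.
    record = Record(
        title="draft_v3.doc",
        facets={
            "activity": "contract negotiation",
            "document_type": "draft contract",
            "year": 2009,
            "retention": "destroy after 10 years",
        },
    )

    # Retrieval can start from any facet, e.g. all draft contracts regardless of activity.
    print(record.facets["document_type"])
    ```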
  2. Hesselbarth, A.: What you see is all you get? : Konzept zur Optimierung des Bildmanagements am Beispiel der jump Fotoagentur (2008) 0.02
    0.017291464 = product of:
      0.034582928 = sum of:
        0.034582928 = sum of:
          0.0033826875 = weight(_text_:a in 1938) [ClassicSimilarity], result of:
            0.0033826875 = score(doc=1938,freq=2.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.06369744 = fieldWeight in 1938, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1938)
          0.03120024 = weight(_text_:22 in 1938) [ClassicSimilarity], result of:
            0.03120024 = score(doc=1938,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.19345059 = fieldWeight in 1938, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1938)
      0.5 = coord(1/2)
    
    Date
    22. 6.2008 17:34:12
  3. Toebak, P.: ¬Das Dossier nicht die Klassifikation als Herzstück des Records Management (2009) 0.02
    0.017291464 = product of:
      0.034582928 = sum of:
        0.034582928 = sum of:
          0.0033826875 = weight(_text_:a in 3220) [ClassicSimilarity], result of:
            0.0033826875 = score(doc=3220,freq=2.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.06369744 = fieldWeight in 3220, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3220)
          0.03120024 = weight(_text_:22 in 3220) [ClassicSimilarity], result of:
            0.03120024 = score(doc=3220,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.19345059 = fieldWeight in 3220, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3220)
      0.5 = coord(1/2)
    
    Date
    6.12.2009 17:22:17
    Type
    a
  4. Peters, G.; Gaese, V.: ¬Das DocCat-System in der Textdokumentation von G+J (2003) 0.01
    0.01482369 = product of:
      0.02964738 = sum of:
        0.02964738 = sum of:
          0.0046871896 = weight(_text_:a in 1507) [ClassicSimilarity], result of:
            0.0046871896 = score(doc=1507,freq=6.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.088261776 = fieldWeight in 1507, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.03125 = fieldNorm(doc=1507)
          0.02496019 = weight(_text_:22 in 1507) [ClassicSimilarity], result of:
            0.02496019 = score(doc=1507,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.15476047 = fieldWeight in 1507, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1507)
      0.5 = coord(1/2)
    
    Abstract
    We will first outline the basics of the text-mining system at IBM and then present the project itself in more breadth and detail, since that is the part we know from our own work. So there are two parts, one on Heidelberg and one on Hamburg. Once more on the technology: text mining is a technology developed by IBM that was configured and programmed for us in a special form. For a long time the project was called DocText Miner in-house; for some time now, at IBM's suggestion, it has been called DocCat, which is meant to be short for Document-Categoriser, a name that is both nice and descriptive. We begin with text mining as developed at IBM in Heidelberg. There, automatic indexing is understood as one instance, that is, one part, of text mining. The problems involved are pointed out; text mining is a method for structuring and searching large document collections, for extracting information and, this being the ambitious claim, implicit relationships. Whether the latter succeeds may be left open. IBM does this quantitatively, empirically, approximately and quickly; that really must be said. The goal, and this was very important for our project, is not to understand the text; rather, the result of these procedures is what is called, in fashionable English, a bundle of words or a bag of words: a set of meaning-bearing terms extracted from a text on the basis of algorithms, that is, essentially on the basis of computational operations. There is a fair amount of preliminary linguistic work, and a little linguistics is involved, but it is not the foundation of the whole approach. What they did for us was the annotation of press texts for our press database. For those who do not know it yet: Gruner + Jahr runs a text documentation department that has maintained a database since the beginning of the 1970s; it currently contains about 6.5 million documents, of which slightly more than 1 million are full texts from 1993 onwards. For a long time the principle was that we assigned subject headings to the documents stored in the database, and we continued this principle in a slimmed-down form when full text was introduced. These 6.5 million documents also include roughly 10 million facsimile pages, because we additionally archive the facsimiles as standard practice.
    Date
    22. 4.2003 11:45:36
    Type
    a
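    The "bag of words" mentioned in the abstract above can be made concrete with a toy sketch: meaning-bearing terms are extracted and counted purely by computation, with no attempt to understand the text. The tokeniser and stop-word list below are illustrative assumptions and are unrelated to IBM's actual DocCat implementation:

    ```python
    # Toy sketch of a bag-of-words extraction: counting meaning-bearing terms
    # by pure computation. The tokeniser and stop-word list are illustrative only.
    import re
    from collections import Counter

    STOP_WORDS = {"the", "a", "of", "and", "in", "for", "is", "to", "are", "with"}

    def bag_of_words(text: str) -> Counter:
        tokens = re.findall(r"[a-zäöüß]+", text.lower())
        return Counter(t for t in tokens if t not in STOP_WORDS and len(t) > 2)

    sample = "The press database stores press texts; the texts are annotated with terms."
    print(bag_of_words(sample).most_common(3))
    # e.g. [('press', 2), ('texts', 2), ('database', 1)]
    ```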
  5. Dahmen, E.: Klassifikation als Ordnungssystem im elektronischen Pressearchiv (2003) 0.01
    0.01482369 = product of:
      0.02964738 = sum of:
        0.02964738 = sum of:
          0.0046871896 = weight(_text_:a in 1513) [ClassicSimilarity], result of:
            0.0046871896 = score(doc=1513,freq=6.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.088261776 = fieldWeight in 1513, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.03125 = fieldNorm(doc=1513)
          0.02496019 = weight(_text_:22 in 1513) [ClassicSimilarity], result of:
            0.02496019 = score(doc=1513,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.15476047 = fieldWeight in 1513, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1513)
      0.5 = coord(1/2)
    
    Abstract
    The basic design of a classification for the electronic press database at WDR, which is still valid today, has been developed since 1989 by Klaus Leesch and his colleagues. Its content and structure were worked out by various staff members of the press archive. With the start of digitisation in 1993 the first classification ("PARIS-Klassifikation") came into use; in the following years it was revised several times by Dr. Bernhard Brandhofer and extended into a cross-archive classification ("D&A-Klassifikation"). Since August 1999 this classification has been the basis of subject indexing for the cooperating ARD press archives. The most recent revision took place in 2000/2001 in the indexing working group of PAN (Presse-Archiv-Netzwerk der ARD), in collaboration between staff of NDR, SWR and WDR, and has been in use since May 2001 (PAN-Klassifikation).
    Date
    28. 4.2003 13:35:22
    Object
    D&A-Klassifikation
    Type
    a
  6. Großmann, K.; Schaaf, T.: Datenbankbasiertes Dokumentenmanagementsystem im Versuchswesen (2001) 0.01
    0.014393632 = product of:
      0.028787265 = sum of:
        0.028787265 = sum of:
          0.003827074 = weight(_text_:a in 5868) [ClassicSimilarity], result of:
            0.003827074 = score(doc=5868,freq=4.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.072065435 = fieldWeight in 5868, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.03125 = fieldNorm(doc=5868)
          0.02496019 = weight(_text_:22 in 5868) [ClassicSimilarity], result of:
            0.02496019 = score(doc=5868,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.15476047 = fieldWeight in 5868, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=5868)
      0.5 = coord(1/2)
    
    Abstract
    Agricultural production is influenced in a highly complex way by constantly changing economic and ecological conditions. This process continually produces new requirements for agricultural research. In research and experimental work in plant and animal production, horticulture and forestry, the experiment occupies a central position. The current state of documenting and presenting the results of experimental work is characterised by: the existence of a multitude of decentralised pools of trial reports; high costs for publishing them, usually self-published; rising mailing costs; a relatively small circle of addressees; only isolated, static web-based presentations; no comprehensive exchange and hence no transparent presentation of trial results; and no structured (database-supported) search for particular reports, documents, institutions, trial categories, etc. The project "Versuchsberichte im Internet" (VIP) is intended to help advisory, research, teaching, practice and administrative institutions in the agricultural sector reduce these shortcomings and thus achieve a rationalisation effect. In detail, this goal is to be realised as follows: input of the trial reports, which currently exist as distributed information pools of the federal and state governments, into a central document database, with the participating institutions retaining unrestricted control and full copyright; provision of an online solution on the Internet; integration of a module for Internet-based online input of documents; assurance of data protection; and support of experimental work at federal and state level with the aim of achieving rationalisation effects, e.g. with regard to trial planning, documentation and communication costs, public relations, and e-commerce. Beyond these functions, further information pools such as addresses, bibliographic and scientific information, discussion lists, maps, etc. will be integrated into the project. In line with the federal structure of the Federal Republic, participation in the project is open to all interested institutions at federal and state level.
    Date
    16. 5.2001 12:22:23
    Type
    a
  7. Wandeler, J.: Comprenez-vous only Bahnhof? : Mehrsprachigkeit in der Mediendokumentation (2003) 0.01
    0.01383317 = product of:
      0.02766634 = sum of:
        0.02766634 = sum of:
          0.00270615 = weight(_text_:a in 1512) [ClassicSimilarity], result of:
            0.00270615 = score(doc=1512,freq=2.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.050957955 = fieldWeight in 1512, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.03125 = fieldNorm(doc=1512)
          0.02496019 = weight(_text_:22 in 1512) [ClassicSimilarity], result of:
            0.02496019 = score(doc=1512,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.15476047 = fieldWeight in 1512, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1512)
      0.5 = coord(1/2)
    
    Date
    22. 4.2003 12:09:10
    Type
    a
  8. Schlenkrich, C.: Aspekte neuer Regelwerksarbeit : Multimediales Datenmodell für ARD und ZDF (2003) 0.01
    0.01383317 = product of:
      0.02766634 = sum of:
        0.02766634 = sum of:
          0.00270615 = weight(_text_:a in 1515) [ClassicSimilarity], result of:
            0.00270615 = score(doc=1515,freq=2.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.050957955 = fieldWeight in 1515, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.03125 = fieldNorm(doc=1515)
          0.02496019 = weight(_text_:22 in 1515) [ClassicSimilarity], result of:
            0.02496019 = score(doc=1515,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.15476047 = fieldWeight in 1515, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1515)
      0.5 = coord(1/2)
    
    Date
    22. 4.2003 12:05:56
    Type
    a
  9. Medien-Informationsmanagement : Archivarische, dokumentarische, betriebswirtschaftliche, rechtliche und Berufsbild-Aspekte ; [Frühjahrstagung der Fachgruppe 7 im Jahr 2000 in Weimar und Folgetagung 2001 in Köln] (2003) 0.00
    0.0046800356 = product of:
      0.009360071 = sum of:
        0.009360071 = product of:
          0.018720143 = sum of:
            0.018720143 = weight(_text_:22 in 1833) [ClassicSimilarity], result of:
              0.018720143 = score(doc=1833,freq=2.0), product of:
                0.16128273 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046056706 = queryNorm
                0.116070345 = fieldWeight in 1833, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1833)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    11. 5.2008 19:49:22
  10. Bantin, P.: Electronic records management : a review of the work of a decade and a reflection on future directions (2002) 0.00
    0.0040592253 = product of:
      0.008118451 = sum of:
        0.008118451 = product of:
          0.016236901 = sum of:
            0.016236901 = weight(_text_:a in 4255) [ClassicSimilarity], result of:
              0.016236901 = score(doc=4255,freq=8.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.30574775 = fieldWeight in 4255, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4255)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  11. Frohmann, B.: Revisiting "what is a document?" (2009) 0.00
    0.0028047764 = product of:
      0.005609553 = sum of:
        0.005609553 = product of:
          0.011219106 = sum of:
            0.011219106 = weight(_text_:a in 2837) [ClassicSimilarity], result of:
              0.011219106 = score(doc=2837,freq=22.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.21126054 = fieldWeight in 2837, product of:
                  4.690416 = tf(freq=22.0), with freq of:
                    22.0 = termFreq=22.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2837)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Purpose - The purpose of this paper is to provide a reconsideration of Michael Buckland's important question, "What is a document?", analysing the point and purpose of definitions of "document" and "documentation". Design/methodology/approach - Two philosophical notions of the point of definitions are contrasted: John Stuart Mill's concept of a "real" definition, purporting to specify the nature of the definiendum; and a concept of definition based upon a foundationalist philosophy of language. Both conceptions assume that a general, philosophical justification for using words as we do is always in order. This assumption is criticized by deploying Hilary Putnam's arguments against the orthodox Wittgensteinian interpretation of criteria governing the use of language. The example of the cabinets of curiosities of the sixteenth-century English and European virtuosi is developed to show how one might productively think about what documents might be, but without a definition of a document. Findings - Other than for specific, instrumentalist purposes (often appropriate for specific case studies), there is no general philosophical reason for asking, what is a document? There are good reasons for pursuing studies of documentation without the impediments of definitions of "document" or "documentation". Originality/value - The paper makes an original contribution to the new interest in documentation studies by providing conceptual resources for multiplying, rather than restricting, the areas of application of the concepts of documents and documentation.
    Type
    a
  12. Mas, S.; Zaher, L'H.; Zacklad, M.: Design & evaluation of multi-viewed knowledge system for administrative electronic document organization (2008) 0.00
    0.00270615 = product of:
      0.0054123 = sum of:
        0.0054123 = product of:
          0.0108246 = sum of:
            0.0108246 = weight(_text_:a in 2480) [ClassicSimilarity], result of:
              0.0108246 = score(doc=2480,freq=8.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.20383182 = fieldWeight in 2480, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2480)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This communication describes part of a current research project carried out at the Université de Technologie de Troyes and funded by a postdoctoral grant from the Fonds québécois de la recherche sur la société et la culture. Under the title "Design and evaluation of a faceted classification for uniform and personal organization of administrative electronic documents", our research investigates the feasibility of creating a faceted and multi-points-of-view classification scheme for administrative document organization and retrieval in online environments.
  13. Vasudevan, M.C.; Mohan, M.; Kapoor, A.: Information system for knowledge management in the specialized division of a hospital (2006) 0.00
    0.0026473717 = product of:
      0.0052947435 = sum of:
        0.0052947435 = product of:
          0.010589487 = sum of:
            0.010589487 = weight(_text_:a in 1499) [ClassicSimilarity], result of:
              0.010589487 = score(doc=1499,freq=10.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.19940455 = fieldWeight in 1499, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1499)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Information systems are essential support for knowledge management in all types of enterprises. This paper describes the evolution and development of a specialized hospital information system. The system is designed to integrate access to and retrieval from databases of patients' case records and related images - CATSCAN, MRI, X-Ray - and to enable online access to the full text of relevant papers on the Internet/WWW. The generation of information products and services from the system is briefly described.
    Source
    Knowledge organization, information systems and other essays: Professor A. Neelameghan Festschrift. Ed. by K.S. Raghavan and K.N. Prasad
    Type
    a
  14. Trinkwalder, A.: Wortdetektive : Volltext-Suchmaschinen für Festplatte und Intranet (2000) 0.00
    0.0023919214 = product of:
      0.0047838427 = sum of:
        0.0047838427 = product of:
          0.009567685 = sum of:
            0.009567685 = weight(_text_:a in 5318) [ClassicSimilarity], result of:
              0.009567685 = score(doc=5318,freq=4.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.18016359 = fieldWeight in 5318, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5318)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  15. Mas, S.; Zaher, L'H.; Zacklad, M.: Design and evaluation of multi-viewed knowledge system for administrative electronic document organization (2008) 0.00
    0.0023678814 = product of:
      0.0047357627 = sum of:
        0.0047357627 = product of:
          0.009471525 = sum of:
            0.009471525 = weight(_text_:a in 2256) [ClassicSimilarity], result of:
              0.009471525 = score(doc=2256,freq=8.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.17835285 = fieldWeight in 2256, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2256)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    This paper describes part of a current research project investigating the feasibility of creating a faceted and multi-viewed knowledge organization system (KOS) for administrative document organization in online environments. Preliminary findings support the faceted and multi-viewed classification as a promising alternative to the hierarchical paradigm for the organization of personal administrative electronic documents. Further analysis of the semantic relations between facets is required to reduce the number of facet descriptors. Technical improvements are also needed to enhance the faceted navigation interface used within the pilot test.
    Type
    a
  16. Batley, S.: ¬The I in information architecture : the challenge of content management (2007) 0.00
    0.0023435948 = product of:
      0.0046871896 = sum of:
        0.0046871896 = product of:
          0.009374379 = sum of:
            0.009374379 = weight(_text_:a in 809) [ClassicSimilarity], result of:
              0.009374379 = score(doc=809,freq=6.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.17652355 = fieldWeight in 809, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=809)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Purpose - The purpose of this paper is to provide a review of content management in the context of information architecture. Design/methodology/approach - The method adopted is a review of definitions of information architecture and an analysis of the importance of content and its management within information architecture. Findings - Concludes that reality will not necessarily match the vision of organisations investing in information architecture. Originality/value - The paper considers practical issues around content and records management.
    Type
    a
  17. Murthy, S.S.: ¬The National Tuberculosis Institute, Bangalore : recent development in library and information services (2006) 0.00
    0.0023435948 = product of:
      0.0046871896 = sum of:
        0.0046871896 = product of:
          0.009374379 = sum of:
            0.009374379 = weight(_text_:a in 1502) [ClassicSimilarity], result of:
              0.009374379 = score(doc=1502,freq=6.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.17652355 = fieldWeight in 1502, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1502)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Briefly describes the information products and services, the related databases, the development of the digital library, web resources and web-based services, vocabulary control tools, networking, and other projects of the Library of the National Tuberculosis Institute (NTI), Bangalore. Acknowledges the involvement of Prof. A. Neelameghan and the advice and assistance he provided to these programmes and projects.
    Source
    Knowledge organization, information systems and other essays: Professor A. Neelameghan Festschrift. Ed. by K.S. Raghavan and K.N. Prasad
    Type
    a
  18. Hare, C.E.; McLeod, J.: How to manage records in the e-environment : 2nd ed. (2006) 0.00
    0.0020506454 = product of:
      0.004101291 = sum of:
        0.004101291 = product of:
          0.008202582 = sum of:
            0.008202582 = weight(_text_:a in 1749) [ClassicSimilarity], result of:
              0.008202582 = score(doc=1749,freq=6.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.1544581 = fieldWeight in 1749, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1749)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    A practical approach to developing and operating an effective programme to manage hybrid records within an organization. This title positions records management as an integral business function linked to the organisation's business aims and objectives. The authors also address the records requirements of new and significant pieces of legislation, such as data protection and freedom of information, as well as exploring strategies for managing electronic records. Bullet points, checklists and examples assist the reader throughout, making this a one-stop resource for information in this area.
    Footnote
    1st ed. published under the title: Developing a records management programme
  19. Salminen, A.: Modeling documents in their context (2009) 0.00
    0.0020506454 = product of:
      0.004101291 = sum of:
        0.004101291 = product of:
          0.008202582 = sum of:
            0.008202582 = weight(_text_:a in 3847) [ClassicSimilarity], result of:
              0.008202582 = score(doc=3847,freq=6.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.1544581 = fieldWeight in 3847, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3847)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This entry describes notions and methods for analyzing and modeling documents in an organizational context. A model for the analysis process is provided, and methods for data gathering, modeling, and user needs analysis are described. The methods were originally developed and tested during document standardization activities carried out in the Finnish Parliament and ministries; they have since been adopted and adapted by other Finnish organizations in their document management development projects. The methods are intended especially for cases where the goal is to develop an Extensible Markup Language (XML)-based solution for document management. This entry emphasizes the importance of analyzing and describing documents in their organizational context.
    Type
    a
  20. Mattig-Fabian, N.; Bourdeille, S.C. de: ¬Die SAT1-Presselounge im Internet : Aktuell produzieren leicht gemacht (2003) 0.00
    0.0020296127 = product of:
      0.0040592253 = sum of:
        0.0040592253 = product of:
          0.008118451 = sum of:
            0.008118451 = weight(_text_:a in 3550) [ClassicSimilarity], result of:
              0.008118451 = score(doc=3550,freq=2.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.15287387 = fieldWeight in 3550, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3550)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a

Languages

  • d 44
  • e 16

Types

  • a 54
  • m 4
  • el 1
  • s 1
  • x 1