Search (15 results, page 1 of 1)

  • theme_ss:"Dokumentenmanagement"
  • year_i:[2000 TO 2010}
  1. Mas, S.; Marleau, Y.: Proposition of a faceted classification model to support corporate information organization and digital records management (2009) 0.04
    0.041441064 = product of:
      0.08288213 = sum of:
        0.08288213 = product of:
          0.24864638 = sum of:
            0.24864638 = weight(_text_:3a in 2918) [ClassicSimilarity], result of:
              0.24864638 = score(doc=2918,freq=2.0), product of:
                0.4424171 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.052184064 = queryNorm
                0.56201804 = fieldWeight in 2918, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2918)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
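The explain tree above multiplies the standard ClassicSimilarity (TF-IDF) factors. A minimal sketch reproducing the numbers for result 1 (doc 2918); the constants queryNorm and fieldNorm are copied directly from the explain output, and the idf formula is Lucene's classic 1 + ln(maxDocs/(docFreq+1)):

```python
import math

def idf(doc_freq, max_docs):
    # Lucene ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def field_weight(freq, idf_val, field_norm):
    # fieldWeight = tf * idf * fieldNorm, with tf = sqrt(termFreq)
    return math.sqrt(freq) * idf_val * field_norm

query_norm = 0.052184064            # queryNorm from the explain output
idf_3a = idf(24, 44218)             # ≈ 8.478011 for the term "3a"
query_weight = idf_3a * query_norm  # ≈ 0.4424171
fw = field_weight(2.0, idf_3a, 0.046875)  # ≈ 0.56201804
raw = query_weight * fw             # ≈ 0.24864638
score = raw * (1 / 3) * (1 / 2)     # coord(1/3) * coord(1/2) ≈ 0.041441064
```

Each intermediate value matches the corresponding line of the explain tree, which is a useful sanity check when debugging unexpected rankings.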
    
    Footnote
    Cf.: http://ieeexplore.ieee.org/Xplore/login.jsp?reload=true&url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel5%2F4755313%2F4755314%2F04755480.pdf%3Farnumber%3D4755480&authDecision=-203.
  2. Hesselbarth, A.: What you see is all you get? : Konzept zur Optimierung des Bildmanagements am Beispiel der jump Fotoagentur (2008) 0.03
    0.03128866 = product of:
      0.06257732 = sum of:
        0.06257732 = sum of:
          0.027226217 = weight(_text_:systems in 1938) [ClassicSimilarity], result of:
            0.027226217 = score(doc=1938,freq=2.0), product of:
              0.16037072 = queryWeight, product of:
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.052184064 = queryNorm
              0.1697705 = fieldWeight in 1938, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1938)
          0.0353511 = weight(_text_:22 in 1938) [ClassicSimilarity], result of:
            0.0353511 = score(doc=1938,freq=2.0), product of:
              0.1827397 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052184064 = queryNorm
              0.19345059 = fieldWeight in 1938, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1938)
      0.5 = coord(1/2)
    
    Abstract
    This thesis examines the digital image trade. It analyses the acceptance and use of image databases in the picture industry, with the aim of designing an optimization concept for the image database of the jump photo agency. As an introduction, the fundamentals of the picture industry are explained and the parties involved and their areas of responsibility are discussed. This is followed by an account of the development of digitization and the transformation of the picture market it has caused. Subsequently, the possibilities of image management and their connection with image marketing are shown, and the image management system of the jump photo agency is described in more detail. Based on the results of the survey conducted, a concept for improving this system is developed. The findings are summarized and an outlook on the future of the digital image trade is given.
    Date
    22. 6.2008 17:34:12
  3. Peters, G.; Gaese, V.: Das DocCat-System in der Textdokumentation von G+J (2003) 0.03
    0.025030928 = product of:
      0.050061855 = sum of:
        0.050061855 = sum of:
          0.021780973 = weight(_text_:systems in 1507) [ClassicSimilarity], result of:
            0.021780973 = score(doc=1507,freq=2.0), product of:
              0.16037072 = queryWeight, product of:
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.052184064 = queryNorm
              0.1358164 = fieldWeight in 1507, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.03125 = fieldNorm(doc=1507)
          0.028280882 = weight(_text_:22 in 1507) [ClassicSimilarity], result of:
            0.028280882 = score(doc=1507,freq=2.0), product of:
              0.1827397 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052184064 = queryNorm
              0.15476047 = fieldWeight in 1507, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1507)
      0.5 = coord(1/2)
    
    Abstract
    We will first present the fundamentals of the text-mining system at IBM, and then present our project in more breadth and detail, since that is where our expertise lies. So there are two parts: Heidelberg and Hamburg. Once more on the technology: text mining is a technology developed by IBM that was assembled for us in a special configuration and programming. For a long time the project was called DocText Miner; for some time now, at IBM's suggestion, it has been called DocCat, short for Document Categoriser, which is a nice and vivid name. We begin with text mining as developed at IBM in Heidelberg. There, automatic indexing is understood as one instance, i.e. one part, of text mining. Problems are pointed out along the way: text mining is a method for structuring and searching large document collections, for extracting information and, this being the high ambition, implicit relationships. Whether the latter succeeds may be left open. IBM does this quantitatively, empirically, approximately and fast, that much must be said. The goal, and this was very important for our project, is not to understand the text; rather, the result of these procedures is what they call, in fashionable English, a bundle of words or a bag of words: a set of meaning-bearing terms extracted from a text on the basis of algorithms, i.e. essentially on the basis of arithmetic operations. There are quite a few linguistic preliminary studies, so a little linguistics is involved, but it is not the foundation of the whole approach. What they did for us is the annotation of press texts for our press database. For those not yet familiar with it: Gruner + Jahr has run a text documentation department maintaining a database since the early 1970s; at present it contains about 6.5 million documents, of which slightly more than 1 million are full texts from 1993 onwards.
 For a long time the principle was that we indexed the documents stored in the database with keywords, and we carried this principle on, in a slimmed-down form, when full text was introduced. These 6.5 million documents are also accompanied by roughly 10 million facsimile pages, because we keep the facsimiles as standard as well.
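The "bag of words" result the abstract describes — a frequency-ranked set of meaning-bearing terms extracted purely by counting, not by understanding the text — can be sketched minimally like this (the stopword list and thresholds are illustrative, not part of the DocCat system):

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "of", "and", "in", "to", "is"}  # tiny illustrative list

def bag_of_words(text, top_n=5):
    """Extract the most salient terms of a text by simple frequency counting."""
    tokens = re.findall(r"[a-zäöüß]+", text.lower())
    counts = Counter(t for t in tokens if t not in STOPWORDS and len(t) > 2)
    return counts.most_common(top_n)
```

A production categorizer layers stemming, compound splitting, and weighting on top, but the core output is exactly this kind of ranked term list.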
    Date
    22. 4.2003 11:45:36
  4. Vasudevan, M.C.; Mohan, M.; Kapoor, A.: Information system for knowledge management in the specialized division of a hospital (2006) 0.01
    0.013476291 = product of:
      0.026952581 = sum of:
        0.026952581 = product of:
          0.053905163 = sum of:
            0.053905163 = weight(_text_:systems in 1499) [ClassicSimilarity], result of:
              0.053905163 = score(doc=1499,freq=4.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.33612844 = fieldWeight in 1499, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1499)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Information systems are an essential support for knowledge management in all types of enterprises. This paper describes the evolution and development of a specialized hospital information system. The system is designed to integrate access and retrieval across databases of patients' case records and related images (CAT scan, MRI, X-ray), and to enable online access to the full text of relevant papers on the Internet/WWW. The generation of information products and services from the system is briefly described.
    Source
    Knowledge organization, information systems and other essays: Professor A. Neelameghan Festschrift. Ed. by K.S. Raghavan and K.N. Prasad
  5. Meer, K. van der: Document information systems (2009) 0.01
    0.011551105 = product of:
      0.02310221 = sum of:
        0.02310221 = product of:
          0.04620442 = sum of:
            0.04620442 = weight(_text_:systems in 3771) [ClassicSimilarity], result of:
              0.04620442 = score(doc=3771,freq=4.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.28811008 = fieldWeight in 3771, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3771)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Starting from the characteristics of information, documents, and document information systems (document IS), the motives for using document IS are described and a few cases are presented. The functional aspects of document IS are derived from the ISO 15489 standard on records management and the Sarbanes-Oxley Act, and made operational in MoReq and DoD standard 5015.2. Explicit attention is given to related subjects from the viewpoint of document management: information sharing (workflow, knowledge management) and the interoperability of Information and Communication Technology (ICT) tools; authenticity, because of the possible evidential value of documents; and digital longevity, because of the possibly long-term function of archival documents. The technical aspects answer the functional demands: important information-science standards and standard components for 12 characteristics of document IS are described, among others ODMA, the XML family, OAIS, and metadata schemes. The design-methodological aspects answer the functional demands and technical possibilities; models are introduced and the way of working of, e.g., a digitization project is described.
  6. Murthy, S.S.: The National Tuberculosis Institute, Bangalore : recent development in library and information services (2006) 0.01
    0.010890487 = product of:
      0.021780973 = sum of:
        0.021780973 = product of:
          0.043561947 = sum of:
            0.043561947 = weight(_text_:systems in 1502) [ClassicSimilarity], result of:
              0.043561947 = score(doc=1502,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.2716328 = fieldWeight in 1502, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1502)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Knowledge organization, information systems and other essays: Professor A. Neelameghan Festschrift. Ed. by K.S. Raghavan and K.N. Prasad
  7. Dalipi, B.: Dokumenten-Management und Verwendung von Metadaten bei einem Energieversorgungsunternehmen (2008) 0.01
    0.010890487 = product of:
      0.021780973 = sum of:
        0.021780973 = product of:
          0.043561947 = sum of:
            0.043561947 = weight(_text_:systems in 3405) [ClassicSimilarity], result of:
              0.043561947 = score(doc=3405,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.2716328 = fieldWeight in 3405, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3405)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The first part of this thesis covers the theoretical foundations, tasks, and functions of a document management system. In this context, metadata, indexing, thesauri, and the best-known classification systems are explained, and the Semantic Web and its technologies are also mentioned. The practical part of the thesis describes in more detail the steps taken to implement an SQL database in which the metadata are mapped.
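Mapping document metadata into an SQL database, as the thesis describes, typically means a document table plus a key-value metadata table. A minimal sketch using SQLite; all table, column, and sample names here are hypothetical illustrations, not taken from the thesis:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE document (
    id         INTEGER PRIMARY KEY,
    title      TEXT NOT NULL,
    created_at TEXT
);
CREATE TABLE metadata (
    doc_id INTEGER REFERENCES document(id),
    key    TEXT NOT NULL,   -- e.g. 'author', 'classification'
    value  TEXT NOT NULL
);
""")
conn.execute("INSERT INTO document VALUES (1, 'Wartungsplan 2008', '2008-03-01')")
conn.execute("INSERT INTO metadata VALUES (1, 'classification', 'Instandhaltung')")

# Retrieve documents by a metadata facet:
rows = conn.execute(
    "SELECT d.title FROM document d JOIN metadata m ON m.doc_id = d.id "
    "WHERE m.key = 'classification' AND m.value = 'Instandhaltung'"
).fetchall()
```

The key-value layout keeps the schema stable even as new metadata fields (thesaurus descriptors, classification codes) are added later.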
  8. Lam-Adesina, A.M.; Jones, G.J.F.: Examining and improving the effectiveness of relevance feedback for retrieval of scanned text documents (2006) 0.01
    0.009625921 = product of:
      0.019251842 = sum of:
        0.019251842 = product of:
          0.038503684 = sum of:
            0.038503684 = weight(_text_:systems in 977) [ClassicSimilarity], result of:
              0.038503684 = score(doc=977,freq=4.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.24009174 = fieldWeight in 977, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=977)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Important legacy paper documents are digitized and collected in online accessible archives. This enables the preservation, sharing, and, significantly, the searching of these documents. The text contents of these document images can be transcribed automatically using OCR systems and then stored in an information retrieval system. However, OCR systems make errors in character recognition, which have previously been shown to impact document retrieval behaviour. In particular, relevance feedback query-expansion methods, which are often effective for improving electronic text retrieval, are observed to be less reliable for the retrieval of scanned document images. Our experimental examination of the effects of character recognition errors on an ad hoc OCR retrieval task demonstrates that, while baseline information retrieval can remain relatively unaffected by transcription errors, relevance feedback via query expansion becomes highly unstable. This paper examines the reason for this behaviour and introduces novel modifications to standard relevance feedback methods. These methods are shown experimentally to improve the effectiveness of relevance feedback for errorful OCR transcriptions. The new methods combine similar recognised character strings based on term collection frequency and a string edit-distance measure. The techniques are domain independent and make no use of external resources such as dictionaries or training data.
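The abstract's core idea — folding rare OCR variants into frequent terms using collection frequency and edit distance — can be sketched as follows. This is a simplified illustration of the general technique, not the authors' actual method; the thresholds `max_dist` and `min_cf` are assumptions:

```python
from collections import Counter

def edit_distance(a, b):
    # Classic Levenshtein distance via a rolling dynamic-programming row.
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j - 1] + 1,      # insertion
                                     dp[j] + 1,          # deletion
                                     prev + (ca != cb))  # substitution
    return dp[-1]

def merge_ocr_variants(terms, max_dist=1, min_cf=2):
    """Map rare strings to a frequent term within a small edit distance,
    treating them as likely OCR misrecognitions of that term."""
    cf = Counter(terms)
    frequent = [t for t, c in cf.items() if c >= min_cf]
    mapping = {}
    for t in cf:
        if t in frequent:
            continue
        for f in frequent:
            if edit_distance(t, f) <= max_dist:
                mapping[t] = f
                break
    return mapping
```

For example, a transcription error like "retr1eval" occurring once in a collection dominated by "retrieval" would be mapped back to the frequent form before query expansion.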
  9. Toebak, P.: Das Dossier, nicht die Klassifikation als Herzstück des Records Management (2009) 0.01
    0.008837775 = product of:
      0.01767555 = sum of:
        0.01767555 = product of:
          0.0353511 = sum of:
            0.0353511 = weight(_text_:22 in 3220) [ClassicSimilarity], result of:
              0.0353511 = score(doc=3220,freq=2.0), product of:
                0.1827397 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052184064 = queryNorm
                0.19345059 = fieldWeight in 3220, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3220)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    6.12.2009 17:22:17
  10. Myburgh, S.: Records organization and access (2009) 0.01
    0.008167865 = product of:
      0.01633573 = sum of:
        0.01633573 = product of:
          0.03267146 = sum of:
            0.03267146 = weight(_text_:systems in 3868) [ClassicSimilarity], result of:
              0.03267146 = score(doc=3868,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.2037246 = fieldWeight in 3868, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3868)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Records, as documents which provide evidence of business transactions that have taken place, are collected and preserved for as long as they are useful to the organization, or as is demanded by law. In order to be useful, however, they must be organized in such a way that they can easily be identified, located, accessed, and used, for whatever purpose. First, the records must be described by identifying the most useful salient characteristics; then, they are categorized in various ways, according to their age, function, level of confidentiality, privacy and security, and access to them controlled according to these categories. Records may be arranged by one of several ordinal systems, usually involving letters and numbers, but also color: these symbolically represent the characteristics that are considered as important descriptors. Thus, records can be accessed (or protected from access) by their category; they can be located by correspondence between terms (which may be words or numbers) used to describe characteristics and terms used in searching for particular records or records series. These principles apply to both physical and virtual records.
  11. Großmann, K.; Schaaf, T.: Datenbankbasiertes Dokumentenmanagementsystem im Versuchswesen (2001) 0.01
    0.0070702205 = product of:
      0.014140441 = sum of:
        0.014140441 = product of:
          0.028280882 = sum of:
            0.028280882 = weight(_text_:22 in 5868) [ClassicSimilarity], result of:
              0.028280882 = score(doc=5868,freq=2.0), product of:
                0.1827397 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052184064 = queryNorm
                0.15476047 = fieldWeight in 5868, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5868)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    16. 5.2001 12:22:23
  12. Wandeler, J.: Comprenez-vous only Bahnhof? : Mehrsprachigkeit in der Mediendokumentation (2003) 0.01
    0.0070702205 = product of:
      0.014140441 = sum of:
        0.014140441 = product of:
          0.028280882 = sum of:
            0.028280882 = weight(_text_:22 in 1512) [ClassicSimilarity], result of:
              0.028280882 = score(doc=1512,freq=2.0), product of:
                0.1827397 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052184064 = queryNorm
                0.15476047 = fieldWeight in 1512, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1512)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 4.2003 12:09:10
  13. Dahmen, E.: Klassifikation als Ordnungssystem im elektronischen Pressearchiv (2003) 0.01
    0.0070702205 = product of:
      0.014140441 = sum of:
        0.014140441 = product of:
          0.028280882 = sum of:
            0.028280882 = weight(_text_:22 in 1513) [ClassicSimilarity], result of:
              0.028280882 = score(doc=1513,freq=2.0), product of:
                0.1827397 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052184064 = queryNorm
                0.15476047 = fieldWeight in 1513, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1513)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    28. 4.2003 13:35:22
  14. Schlenkrich, C.: Aspekte neuer Regelwerksarbeit : Multimediales Datenmodell für ARD und ZDF (2003) 0.01
    0.0070702205 = product of:
      0.014140441 = sum of:
        0.014140441 = product of:
          0.028280882 = sum of:
            0.028280882 = weight(_text_:22 in 1515) [ClassicSimilarity], result of:
              0.028280882 = score(doc=1515,freq=2.0), product of:
                0.1827397 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052184064 = queryNorm
                0.15476047 = fieldWeight in 1515, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1515)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 4.2003 12:05:56
  15. Medien-Informationsmanagement : Archivarische, dokumentarische, betriebswirtschaftliche, rechtliche und Berufsbild-Aspekte ; [Frühjahrstagung der Fachgruppe 7 im Jahr 2000 in Weimar und Folgetagung 2001 in Köln] (2003) 0.01
    0.0053026653 = product of:
      0.010605331 = sum of:
        0.010605331 = product of:
          0.021210661 = sum of:
            0.021210661 = weight(_text_:22 in 1833) [ClassicSimilarity], result of:
              0.021210661 = score(doc=1833,freq=2.0), product of:
                0.1827397 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052184064 = queryNorm
                0.116070345 = fieldWeight in 1833, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1833)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    11. 5.2008 19:49:22