Search (110 results, page 1 of 6)

  • theme_ss:"Dokumentenmanagement"
  1. Mas, S.; Marleau, Y.: Proposition of a faceted classification model to support corporate information organization and digital records management (2009) 0.06
    0.064269386 = product of:
      0.10711564 = sum of:
        0.067530274 = product of:
          0.20259081 = sum of:
            0.20259081 = weight(_text_:3a in 2918) [ClassicSimilarity], result of:
              0.20259081 = score(doc=2918,freq=2.0), product of:
                0.3604703 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04251826 = queryNorm
                0.56201804 = fieldWeight in 2918, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2918)
          0.33333334 = coord(1/3)
        0.027959513 = weight(_text_:system in 2918) [ClassicSimilarity], result of:
          0.027959513 = score(doc=2918,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.20878783 = fieldWeight in 2918, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=2918)
        0.011625858 = product of:
          0.034877572 = sum of:
            0.034877572 = weight(_text_:29 in 2918) [ClassicSimilarity], result of:
              0.034877572 = score(doc=2918,freq=2.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.23319192 = fieldWeight in 2918, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2918)
          0.33333334 = coord(1/3)
      0.6 = coord(3/5)
    
    Date
    29. 8.2009 21:15:48
    Footnote
    Cf.: http://ieeexplore.ieee.org/Xplore/login.jsp?reload=true&url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel5%2F4755313%2F4755314%2F04755480.pdf%3Farnumber%3D4755480&authDecision=-203.
    Source
    System Sciences, 2009. HICSS '09. 42nd Hawaii International Conference
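The indented breakdown attached to the result above is a Lucene "explain" tree for the classic tf-idf similarity. As a minimal Python sketch (the function name is ours; queryNorm and fieldNorm are simply taken as given from the tree, since the engine computes them), the per-term score decomposes into queryWeight × fieldWeight:

```python
import math

def classic_term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    """Per-term score in Lucene's ClassicSimilarity, mirroring the
    explain tree: score = queryWeight * fieldWeight."""
    tf = math.sqrt(freq)                             # tf(freq) = sqrt(freq)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # idf(docFreq, maxDocs)
    query_weight = idf * query_norm
    field_weight = tf * idf * field_norm
    return query_weight * field_weight

# Values copied from the first branch above (term "3a" in doc 2918):
score = classic_term_score(freq=2.0, doc_freq=24, max_docs=44218,
                           query_norm=0.04251826, field_norm=0.046875)
print(round(score, 4))  # ~0.2026, matching the 0.20259081 reported above
```

Multiplying by the coord factors (1/3 within the branch, 3/5 across the query) reproduces the 0.064269386 total shown for this hit.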
  2. Krizak, J.D.: Hospital documentation planning : the concept and the context (1993) 0.06
    
    Abstract
    Documentation planning is defined as a process within an institution to select an appropriate documentary record for the institution. Describes the functions and component institutions of the US health care system, identifies the functions of hospitals within the system, offers an analysis of hospital activities and administrative organization, and presents a typology of hospitals. Provides the informational context within which a documentation plan can be developed for a particular hospital. A similar planning approach may also be applied to other types of institutions, organizations and corporations.
  3. Dale, T.: Selecting an indexing scheme (1996) 0.04
    
    Abstract
    Discusses issues underlying indexing for records management. Examines: how the terms that will be used to retrieve a document are selected; how many index terms should be used to ensure retrieval; the unit of information to be indexed; whether the system should be able to retrieve the page that contains the information requested, or whether it is sufficient to retrieve the document that includes that page; and how to deal with long documents.
  4. Yorke, S.: Strategies for the records manager (1997) 0.04
    
    Abstract
    Discusses some of the options for approaches to disaster management in organizations as it relates to risk management. At the organisation or business level, the starting point is to identify the risks faced by the business area and functional activities. The Australian / New Zealand Standard on managing risk proposes a 6 step process for carrying out risk management: identification of the organisational context, risk identification, risk analysis, assessment and prioritisation of risks, treatment of risks, and the monitoring and review of the system. Such strategies can be of limited value in a wide area disaster. Offers advice on coping with a major disaster.
  5. Roberts, A.: ¬The Standard Generalized Markup Language for electronic patient records (1998) 0.04
    
    Abstract
    Reports results of a pilot study, conducted at the Robert Jones and Agnes Hunt Orthopaedic Hospital, UK, which examined the use of SGML for the management of computerized patient records. 700 patients had their text patient records encoded in SGML, and 14 of these had legacy, laboratory and other data included. Records were incorporated into a commercial SGML database to demonstrate in-context searching. A commercial SGML browser allowed rapid access to clinical events in large records. A study of comparative file sizes between formats was performed, and the acceptability of computerized records was assessed with clinicians. Discusses the specifications for the system and the relationship with traditional technologies. Concludes that SGML represents a suitable format for the manipulation and publication of patient records.
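The in-context searching the pilot study demonstrates can be sketched minimally. The record markup and element names below are hypothetical (the study's actual SGML DTD is not given), and Python's standard XML parser stands in for the commercial SGML database:

```python
import xml.etree.ElementTree as ET

# Hypothetical patient record; element and attribute names are illustrative.
record = ET.fromstring("""
<patientrecord id="p001">
  <event date="1997-03-02" type="clinic">
    <note>Post-operative review of left hip prosthesis.</note>
  </event>
  <event date="1997-05-14" type="lab">
    <note>Haemoglobin within normal range.</note>
  </event>
</patientrecord>
""")

# "In-context" search: locate the text, but report the enclosing clinical
# event (the context) rather than a bare full-text hit.
hits = [ev.get("date") for ev in record.iter("event")
        if "prosthesis" in ev.findtext("note", default="")]
print(hits)  # ['1997-03-02']
```

The markup structure is what lets the search answer "in which clinical event does this term occur", rather than only "in which record".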
  6. Batley, S.: ¬The I in information architecture : the challenge of content management (2007) 0.03
    
    Abstract
    Purpose - The purpose of this paper is to provide a review of content management in the context of information architecture. Design/methodology/approach - The method adopted is a review of definitions of information architecture and an analysis of the importance of content and its management within information architecture. Findings - Concludes that reality will not necessarily match the vision of organisations investing in information architecture. Originality/value - The paper considers practical issues around content and records management.
    Date
    23.12.2007 12:15:29
  7. Vasudevan, M.C.; Mohan, M.; Kapoor, A.: Information system for knowledge management in the specialized division of a hospital (2006) 0.03
    
    Abstract
    Information systems are essential support for knowledge management in all types of enterprises. This paper describes the evolution and development of a specialized hospital information system. The system is designed to integrate access to, and retrieval from, databases of patients' case records and related images - CATSCAN, MRI, X-Ray - and to enable online access to the full text of relevant papers on the Internet/WWW. The generation of information products and services from the system is briefly described.
    Date
    29. 2.2008 17:26:51
  8. Huang, T.; Mehrotra, S.; Ramchandran, K.: Multimedia Access and Retrieval System (MARS) project (1997) 0.03
    
    Abstract
    Reports results of the MARS project, conducted at Illinois University, to bring together researchers in the fields of computer vision, compression, information management and database systems with the goal of developing an effective multimedia database management system. Describes the first step, involving the design and implementation of an image retrieval system incorporating novel approaches to image segmentation, representation, browsing and information retrieval supported by the developed system. Points to future directions for the MARS project
    Date
    22. 9.1997 19:16:05
  9. Hare, C.E.; McLeaod, J.; King, L.A.: Continuing professional development for the information discipline of records management : pt.1: context and initial indications of current activities (1996) 0.03
    
    Source
    Librarian career development. 4(1996) no.2, S.22-27
  10. Alexander, J.: Customs and excise process 2.5 million documents (1997) 0.03
    
    Abstract
    The HM Customs and Excise operation in Salford, Manchester, UK, has installed an electronic document management system from Graphic Data to streamline the handling of import entries. Its aims were to reduce filing and storage and to improve access to documentation. The system involves scanning documents and CD storage and retrieval. Because of legal admissibility issues, documentation is retained in its paper format in deep storage.
    Date
    31.12.1998 9:53:29
  11. Rosman, G.; Meer, K.v.d.; Sol, H.G.: ¬The design of document information systems (1996) 0.03
    
    Abstract
    Discusses the costs and benefits of document information systems (involving text and images) and some design methodological aspects that arise from the documentary nature of the data. Reports details of a case study involving a specific document information system introduced at Press Ltd, a company in the Netherlands.
    Source
    Journal of information science. 22(1996) no.4, S.287-297
  12. Schmitz-Esser, W.: ¬Die Pressedatenbank für Text und Bild des Verlagshauses Gruner + Jahr (1977) 0.02
    
    Footnote
    This article also explains the structure of the G+J Index in detail
  13. Mas, S.; Zaher, L'H.; Zacklad, M.: Design & evaluation of multi-viewed knowledge system for administrative electronic document organization (2008) 0.02
    
    Date
    29. 8.2009 21:15:48
  14. Boyle, J.: ¬A blueprint for managing documents (1997) 0.02
    
    Abstract
    Electronic document management systems are a collection of 3 complementary technologies: the repository, the workflow engine and the searching-and-indexing technology. The document repository stores, controls and manages documents. Workflow can eliminate the dead time a document spends in transition between workers and integrates with the repository and electronic mail system. Search and indexing technology enables more efficient searching than standard full text technologies by configuring searches to specific attributes. Discusses how the technologies can be combined to manage a WWW site and offers advice on choosing an appropriate solution.
    Source
    Byte. 22(1997) no.5, S.75-76,78,80
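The idea of configuring searches to specific attributes, mentioned in the entry above, can be sketched minimally; the `Document` fields, sample data, and helper name below are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Document:
    """Toy repository entry; the attribute names are illustrative only."""
    doc_id: str
    author: str
    status: str   # e.g. 'draft', 'in-review', 'published'
    text: str

repository = [
    Document("d1", "lee", "in-review", "Q3 budget proposal"),
    Document("d2", "kim", "published", "Budget policy for travel"),
]

def attribute_search(docs, term, **attrs):
    """Restrict a full-text term search to documents whose attributes
    match -- narrowing the search to specific attributes."""
    return [d.doc_id for d in docs
            if term.lower() in d.text.lower()
            and all(getattr(d, k) == v for k, v in attrs.items())]

print(attribute_search(repository, "budget", status="in-review"))  # ['d1']
```

Filtering on attributes before (or alongside) the text match is what makes such a search cheaper and more precise than a plain full-text scan.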
  15. Salminen, A.: Modeling documents in their context (2009) 0.02
    
    Abstract
    This entry describes notions and methods for analyzing and modeling documents in an organizational context. A model for the analysis process is provided and methods for data gathering, modeling, and user needs analysis described. The methods have been originally developed and tested during document standardization activities carried out in the Finnish Parliament and ministries. Later the methods have been adopted and adapted in other Finnish organizations in their document management development projects. The methods are intended especially for cases where the goal is to develop an Extensible Markup Language (XML)-based solution for document management. This entry emphasizes the importance of analyzing and describing documents in their organizational context.
  16. Falk, H.: Document file searching (1998) 0.02
    
    Abstract
    Considers the importance of generating indexes when creating large document files, to facilitate searching, and evaluates 4 commercial document file index creation and searching software packages: QuickFind; Sonar; ZyIndex; and FastFind
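What such index-creation packages do internally can be sketched as a tiny inverted index (term → set of document ids), with an AND-search that intersects posting sets; the names and sample documents are illustrative only:

```python
from collections import defaultdict

def build_index(docs):
    """Map each term to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, query):
    """AND-search: intersect the posting sets of all query terms."""
    postings = [index.get(t, set()) for t in query.lower().split()]
    return set.intersection(*postings) if postings else set()

docs = {
    1: "records management in hospitals",
    2: "indexing records for retrieval",
    3: "hospital information systems",
}
index = build_index(docs)
print(sorted(search(index, "records management")))  # [1]
```

Building the index once makes each subsequent query a cheap set intersection instead of a scan over every file.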
  17. Celentano, A.; Fugini, M.G.; Pozzi, S.: Knowledge-based document retrieval in office environments : the Kabiria system (1995) 0.02
    
    Abstract
    Proposes a document retrieval model and system based on the representation of knowledge describing the semantic contents of documents, the way in which the documents are managed by producers and by people in the office, and the application domain where the office operates. Discusses the knowledge representation issues needed for the document retrieval system and presents a document retrieval model that captures these issues. Describes such a system, named Kabiria. Covers the querying and browsing environments and the architecture of the system.
  18. Siegling, I.: Dateien auf dem Index : Dokumentenmanagement zu Hause (2000) 0.01
    
  19. Schlenkrich, C.: Aspekte neuer Regelwerksarbeit : Multimediales Datenmodell für ARD und ZDF (2003) 0.01
    
    Abstract
    Wir sind mitten in der Arbeit, deshalb kann ich Ihnen nur Arbeitsstände weitergeben. Es ist im Fluss, und wir bemühen uns in der Tat, die "alten Regelwerke" fit zu machen und sie für den Multimediabereich aufzuarbeiten. Ganz kurz zur Arbeitsgruppe: Sie entstammt der AG Orgatec, der Schall- und Hörfunkarchivleiter- und der Fernseharchivleiterkonferenz zur Erstellung eines verbindlichen multimedialen Regelwerks. Durch die Digitalisierung haben sich die Aufgaben in den Archivbereichen eindeutig geändert. Wir versuchen, diese Prozesse abzufangen, und zwar vom Produktionsprozess bis hin zur Archivierung neu zu regeln und neu zu definieren. Wir haben mit unserer Arbeit begonnen im April letzten Jahres, sind also jetzt nahezu exakt ein Jahr zugange, und ich werde Ihnen im Laufe des kurzen Vortrages berichten können, wie wir unsere Arbeit gestaltet haben. Etwas zu den Mitgliedern der Arbeitsgruppe - ich denke, es ist ganz interessant, einfach mal zu sehen, aus welchen Bereichen und Spektren unsere Arbeitsgruppe sich zusammensetzt. Wir haben also Vertreter des Bayrischen Rundfunks, des Norddeutschen -, des Westdeutschen Rundfunks, des Mitteldeutschen von Ost nach West, von Süd nach Nord und aus den verschiedensten Arbeitsbereichen von Audio über Video bis hin zu Online- und Printbereichen. Es ist eine sehr bunt gemischte Truppe, aber auch eine hochspannenden Diskussion exakt eben aufgrund der Vielfalt, die wir abbilden wollen und abbilden müssen. Die Ziele: Wir wollen verbindlich ein multimediales Datenmodell entwickeln und verabschieden, was insbesondere den digitalen Produktionscenter und Archiv-Workflow von ARD und - da haben wir uns besonders gefreut - auch in guter alter Tradition in gemeinsamer Zusammenarbeit mit dem ZDF bildet. Wir wollen Erfassungs- und Erschließungsregeln definieren. 
We want to generate and provide metadata in order to map and safeguard the production workflow, and the data model we have set ourselves as the target is to create the foundations for programme exchange, so that systems can communicate with one another internally and externally. Now one might think that a new multimedia data model could be assembled quite easily from a mix of the old rule books for television, spoken word and music: simply place the data lists of the individual rule books side by side synoptically, clarify what is common and what is specific, add what is missing, eliminate what may not be needed, and put it all back together - and the new rule book is done. Unfortunately it is not quite that simple, for there are a whole series of aspects to consider which also make an upstream abstraction layer absolutely necessary.
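The naive synoptic comparison described above - common versus domain-specific fields across the old rule books - amounts to simple set operations on field lists. A minimal sketch; the field names below are invented examples, not the actual ARD/ZDF field catalogues:

```python
# Hypothetical field lists for the three old rule books (invented examples).
fernsehen = {"Titel", "Sendedatum", "Regie", "Bildformat"}
wort = {"Titel", "Sendedatum", "Autor", "Sprecher"}
musik = {"Titel", "Komponist", "Interpret", "Tontraeger"}

# Fields shared by all rule books vs. fields specific to some domain.
common = fernsehen & wort & musik
specific = (fernsehen | wort | musik) - common

print(sorted(common))  # only 'Titel' is common to all three in this example
```

The passage's point is precisely that this mechanical merge is insufficient: the real work lies in the abstraction layer above the merged lists.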
    Date
    22.4.2003 12:05:56
  20. Automatische Klassifikation und Extraktion in Documentum (2005) 0.01
    
    Content
    "LCI Comprend is now available as an integrated module for EMC's content management system Documentum. LCI (Learning Computers International GmbH), with support from neeb & partner, has integrated this document automation technology transparently into Documentum. This is the first known solution for automatic, learning classification and extraction that works directly on the Documentum data store and requires no additional external control. The LCI Information Capture Services (ICS) serve to classify any kind of document and to extract information from it. The document may be structured, semi-structured or unstructured; scanned forms, for example, can be processed just like invoices or e-mails. The extraction and classification rules and the example documents to be learned are simply compiled interactively and stored as an XML structure. At runtime the project is applied in order to index unknown documents automatically on the basis of rules and learned examples. Documents can thus be processed either within Documentum or during import. The new server allows files to be read in from the file system or directly from POP3 accounts, the documents to be analysed, and index values to be generated automatically when they are stored in a Documentum repository. These index values, obtained through content-based (including multi-topic) classification or through extraction, are stored as predefined attributes with the Documentum object. If the document is a scanned document or a fax, the integrated full-text OCR is run automatically."
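As a rough illustration of the "learn from example documents, then classify unknown documents" idea described above, here is a toy nearest-centroid classifier over word counts. This is a hypothetical sketch of the general technique only, not LCI's actual ICS algorithm, rule format, or API:

```python
import re
from collections import Counter

def tokenize(text):
    """Lowercase the text and split it into word tokens."""
    return re.findall(r"\w+", text.lower())

class ExampleClassifier:
    """Toy classifier: each class is represented by the summed word
    counts of its example documents; an unknown document is assigned
    to the class whose word profile it overlaps most."""

    def __init__(self):
        self.centroids = {}

    def train(self, label, examples):
        counts = Counter()
        for doc in examples:
            counts.update(tokenize(doc))
        self.centroids[label] = counts

    def classify(self, text):
        words = Counter(tokenize(text))
        def overlap(centroid):
            return sum(min(words[w], centroid[w]) for w in words)
        return max(self.centroids, key=lambda lbl: overlap(self.centroids[lbl]))

clf = ExampleClassifier()
clf.train("invoice", ["invoice total amount due payment"])
clf.train("email", ["dear regards subject reply message"])
print(clf.classify("please find the invoice with the amount due"))  # invoice
```

In a real system such as the one described, the learned "project" would be persisted (the article mentions an XML structure) and the resulting class label written back to the repository object as a predefined attribute.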
    Footnote
    Contact: LCI GmbH, Freiburger Str. 16, 79199 Kirchzarten, Tel.: (0 76 61) 9 89 96 10, Fax: (01212) 5 37 48 29 36, info@lci-software.com, www.lci-software.com

Languages

  • e 74
  • d 32
  • f 2
  • a 1
  • sp 1

Types

  • a 92
  • m 8
  • x 5
  • s 3
  • r 2
  • el 1