Search (201 results, page 1 of 11)

  • theme_ss:"Dokumentenmanagement"
  1. Mas, S.; Marleau, Y.: Proposition of a faceted classification model to support corporate information organization and digital records management (2009) 0.43
    0.4321077 = product of:
      0.74075603 = sum of:
        0.054483652 = product of:
          0.16345096 = sum of:
            0.16345096 = weight(_text_:3a in 2918) [ClassicSimilarity], result of:
              0.16345096 = score(doc=2918,freq=2.0), product of:
                0.29082868 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03430388 = queryNorm
                0.56201804 = fieldWeight in 2918, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2918)
          0.33333334 = coord(1/3)
        0.16345096 = weight(_text_:2f in 2918) [ClassicSimilarity], result of:
          0.16345096 = score(doc=2918,freq=2.0), product of:
            0.29082868 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03430388 = queryNorm
            0.56201804 = fieldWeight in 2918, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=2918)
        0.009910721 = weight(_text_:information in 2918) [ClassicSimilarity], result of:
          0.009910721 = score(doc=2918,freq=4.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.16457605 = fieldWeight in 2918, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2918)
        0.16345096 = weight(_text_:2f in 2918) [ClassicSimilarity], result of:
          0.16345096 = score(doc=2918,freq=2.0), product of:
            0.29082868 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03430388 = queryNorm
            0.56201804 = fieldWeight in 2918, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=2918)
        0.16345096 = weight(_text_:2f in 2918) [ClassicSimilarity], result of:
          0.16345096 = score(doc=2918,freq=2.0), product of:
            0.29082868 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03430388 = queryNorm
            0.56201804 = fieldWeight in 2918, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=2918)
        0.02255783 = weight(_text_:system in 2918) [ClassicSimilarity], result of:
          0.02255783 = score(doc=2918,freq=2.0), product of:
            0.10804188 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03430388 = queryNorm
            0.20878783 = fieldWeight in 2918, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=2918)
        0.16345096 = weight(_text_:2f in 2918) [ClassicSimilarity], result of:
          0.16345096 = score(doc=2918,freq=2.0), product of:
            0.29082868 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03430388 = queryNorm
            0.56201804 = fieldWeight in 2918, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=2918)
      0.5833333 = coord(7/12)
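
    The figures above are Lucene ClassicSimilarity "explain" output: each leaf term score is queryWeight × fieldWeight, with queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm (tf = sqrt(termFreq), idf = 1 + ln(maxDocs / (docFreq + 1))); the leaf scores are summed and scaled by the coordination factor coord(matching terms / total terms). A minimal sketch in Python that re-derives the numbers for this first result (constants copied from the listing above, nothing queried live):

      # Re-computation of the ClassicSimilarity figures shown for result 1.
      import math

      doc_freq, max_docs = 24, 44218
      idf = 1 + math.log(max_docs / (doc_freq + 1))        # 8.478011
      query_norm = 0.03430388
      field_norm = 0.046875
      tf = math.sqrt(2.0)                                  # termFreq = 2.0 -> 1.4142135

      query_weight = idf * query_norm                      # 0.29082868
      field_weight = tf * idf * field_norm                 # 0.56201804
      leaf_score = query_weight * field_weight             # 0.16345096 (each "_text_:2f" leaf)

      leaf_scores = [0.054483652, 0.16345096, 0.009910721,
                     0.16345096, 0.16345096, 0.02255783, 0.16345096]
      doc_score = sum(leaf_scores) * 7 / 12                # coord(7/12) -> ~0.4321077
      print(round(leaf_score, 8), round(doc_score, 7))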
    
    Abstract
    The employees of an organization often use a personal hierarchical classification scheme to organize digital documents that are stored on their own workstations. As this may make it hard for other employees to retrieve these documents, there is a risk that the organization will lose track of needed documentation. Furthermore, the inherent boundaries of such a hierarchical structure require making arbitrary decisions about which specific criteria the classification will be based on (for instance, the administrative activity or the document type, although a document can have several attributes and require classification in several classes). A faceted classification model to support corporate information organization is proposed. Partially based on Ranganathan's facets theory, this model aims not only to standardize the organization of digital documents, but also to simplify the management of a document throughout its life cycle for both individuals and organizations, while ensuring compliance with regulatory and policy requirements.
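
    The contrast the abstract draws between a single folder hierarchy and facets is easy to make concrete: in a faceted scheme one document carries several independent attribute values instead of one path in a tree. A small illustrative sketch (the facet names are invented here, not taken from the Mas/Marleau model):

      # One document, several independent facets; retrieval by any combination of them.
      # Facet names and values are hypothetical, for illustration only.
      document = {
          "title": "Supplier contract 2009-041",
          "facets": {
              "administrative_activity": "procurement",
              "document_type": "contract",
              "retention_period": "10 years",
          },
      }

      def matches(doc, **criteria):
          return all(doc["facets"].get(facet) == value for facet, value in criteria.items())

      print(matches(document, document_type="contract"))                                         # True
      print(matches(document, document_type="contract", administrative_activity="procurement"))  # True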
    Footnote
    Vgl.: http://ieeexplore.ieee.org/Xplore/login.jsp?reload=true&url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel5%2F4755313%2F4755314%2F04755480.pdf%3Farnumber%3D4755480&authDecision=-203.
    Source
    System Sciences, 2009. HICSS '09. 42nd Hawaii International Conference
  2. Medien-Informationsmanagement : Archivarische, dokumentarische, betriebswirtschaftliche, rechtliche und Berufsbild-Aspekte ; [Frühjahrstagung der Fachgruppe 7 im Jahr 2000 in Weimar und Folgetagung 2001 in Köln] (2003) 0.06
    0.06092712 = product of:
      0.18278135 = sum of:
        0.012109872 = weight(_text_:web in 1833) [ClassicSimilarity], result of:
          0.012109872 = score(doc=1833,freq=2.0), product of:
            0.111951075 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03430388 = queryNorm
            0.108171105 = fieldWeight in 1833, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1833)
        0.0070079383 = weight(_text_:information in 1833) [ClassicSimilarity], result of:
          0.0070079383 = score(doc=1833,freq=8.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.116372846 = fieldWeight in 1833, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1833)
        0.011278915 = weight(_text_:system in 1833) [ClassicSimilarity], result of:
          0.011278915 = score(doc=1833,freq=2.0), product of:
            0.10804188 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03430388 = queryNorm
            0.104393914 = fieldWeight in 1833, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1833)
        0.15238462 = sum of:
          0.13844152 = weight(_text_:aufsatzsammlung in 1833) [ClassicSimilarity], result of:
            0.13844152 = score(doc=1833,freq=16.0), product of:
              0.2250708 = queryWeight, product of:
                6.5610886 = idf(docFreq=169, maxDocs=44218)
                0.03430388 = queryNorm
              0.61510205 = fieldWeight in 1833, product of:
                4.0 = tf(freq=16.0), with freq of:
                  16.0 = termFreq=16.0
                6.5610886 = idf(docFreq=169, maxDocs=44218)
                0.0234375 = fieldNorm(doc=1833)
          0.013943106 = weight(_text_:22 in 1833) [ClassicSimilarity], result of:
            0.013943106 = score(doc=1833,freq=2.0), product of:
              0.120126344 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03430388 = queryNorm
              0.116070345 = fieldWeight in 1833, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0234375 = fieldNorm(doc=1833)
      0.33333334 = coord(4/12)
    
    Abstract
    Als in den siebziger Jahren des vergangenen Jahrhunderts immer häufiger die Bezeichnung Informationsmanager für Leute propagiert wurde, die bis dahin als Dokumentare firmierten, wurde dies in den etablierten Kreisen der Archivare und Bibliothekare gelegentlich belächelt und als Zeichen einer Identitätskrise oder jedenfalls einer Verunsicherung des damit überschriebenen Berufsbilds gewertet. Für den Berufsstand der Medienarchivare/Mediendokumentare, die sich seit 1960 in der Fachgruppe 7 des Vereins, später Verbands deutscher Archivare (VdA) organisieren, gehörte diese Verortung im Zeichen neuer inhaltlicher Herausforderungen (Informationsflut) und Technologien (EDV) allerdings schon früh zu den Selbstverständlichkeiten des Berufsalltags. "Halt, ohne uns geht es nicht!" lautete die Überschrift eines Artikels im Verbandsorgan "Info 7", der sich mit der Einrichtung von immer mächtigeren Leitungsnetzen und immer schnelleren Datenautobahnen beschäftigte. Information, Informationsgesellschaft: diese Begriffe wurden damals fast nur im technischen Sinne verstanden. Die informatisierte, nicht die informierte Gesellschaft stand im Vordergrund - was wiederum Kritiker auf den Plan rief, von Joseph Weizenbaum in den USA bis hin zu den Informations-Ökologen in Bremen. Bei den nationalen, manchmal auch nur regionalen Projekten und Modellversuchen mit Datenautobahnen - auch beim frühen Btx - war nie so recht deutlich geworden, welche Inhalte in welcher Gestalt durch diese Netze und Straßen gejagt werden sollten und wer diese Inhalte eigentlich selektieren, portionieren, positionieren, kurz: managen sollte. Spätestens mit dem World Wide Web sind diese Projekte denn auch obsolet geworden, jedenfalls was die Hardware und Software anging. Geblieben ist das Thema Inhalte (neudeutsch: Content). Und - immer drängender im nicht nur technischen Verständnis - das Thema Informationsmanagement. MedienInformationsManagement war die Frühjahrstagung der Fachgruppe 7 im Jahr 2000 in Weimar überschrieben, und auch die Folgetagung 2001 in Köln, die der multimedialen Produktion einen dokumentarischen Pragmatismus gegenüber stellte, handelte vom Geschäftsfeld Content und von Content-Management-Systemen. Die in diesem 6. Band der Reihe Beiträge zur Mediendokumentation versammelten Vorträge und Diskussionsbeiträge auf diesen beiden Tagungen beleuchten das Titel-Thema aus den verschiedensten Blickwinkeln: archivarischen, dokumentarischen, kaufmännischen, berufsständischen und juristischen. Deutlich wird dabei, daß die Berufsbezeichnung Medienarchivarln/Mediendokumentarln ziemlich genau für all das steht, was heute mit sog. alten wie neuen Medien im organisatorischen, d.h. ordnenden und vermittelnden Sinne geschieht. Im besonderen Maße trifft dies auf das Internet und die aus ihm geborenen Intranets zu. Beide bedürfen genauso der ordnenden Hand, die sich an den alten Medien, an Buch, Zeitung, Tonträger, Film etc. geschult hat, denn sie leben zu großen Teilen davon. Daß das Internet gleichwohl ein Medium sui generis ist und die alten Informationsberufe vor ganz neue Herausforderungen stellt - auch das durchzieht die Beiträge von Weimar und Köln.
    Content
    Enthält u.a. die Beiträge (Dokumentarische Aspekte): Günter Peters/Volker Gaese: Das DocCat-System in der Textdokumentation von G+J (Weimar 2000) Thomas Gerick: Finden statt suchen. Knowledge Retrieval in Wissensbanken. Mit organisiertem Wissen zu mehr Erfolg (Weimar 2000) Winfried Gödert: Aufbereitung und Rezeption von Information (Weimar 2000) Elisabeth Dahmen: Klassifikation als Ordnungssystem im elektronischen Pressearchiv (Köln 2001) Clemens Schlenkrich: Aspekte neuer Regelwerksarbeit - Multimediales Datenmodell für ARD und ZDF (Köln 2001) Josef Wandeler: 'Comprenez-vous only Bahnhof?' - Mehrsprachigkeit in der Mediendokumentation (Köln 2001)
    Date
    11. 5.2008 19:49:22
    LCSH
    Information technology / Management / Congresses
    RSWK
    Mediendokumentation / Aufsatzsammlung
    Medien / Informationsmanagement / Aufsatzsammlung
    Pressearchiv / Aufsatzsammlung (HBZ)
    Rundfunkarchiv / Aufsatzsammlung (HBZ)
    Subject
    Mediendokumentation / Aufsatzsammlung
    Medien / Informationsmanagement / Aufsatzsammlung
    Pressearchiv / Aufsatzsammlung (HBZ)
    Rundfunkarchiv / Aufsatzsammlung (HBZ)
    Information technology / Management / Congresses
  3. Rosman, G.; Meer, K.v.d.; Sol, H.G.: ¬The design of document information systems (1996) 0.02
    0.021048672 = product of:
      0.08419469 = sum of:
        0.023359794 = weight(_text_:information in 7750) [ClassicSimilarity], result of:
          0.023359794 = score(doc=7750,freq=8.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.38790947 = fieldWeight in 7750, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.078125 = fieldNorm(doc=7750)
        0.037596382 = weight(_text_:system in 7750) [ClassicSimilarity], result of:
          0.037596382 = score(doc=7750,freq=2.0), product of:
            0.10804188 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03430388 = queryNorm
            0.3479797 = fieldWeight in 7750, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.078125 = fieldNorm(doc=7750)
        0.023238512 = product of:
          0.046477024 = sum of:
            0.046477024 = weight(_text_:22 in 7750) [ClassicSimilarity], result of:
              0.046477024 = score(doc=7750,freq=2.0), product of:
                0.120126344 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03430388 = queryNorm
                0.38690117 = fieldWeight in 7750, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=7750)
          0.5 = coord(1/2)
      0.25 = coord(3/12)
    
    Abstract
    Discusses the costs and benefits of document information systems (involving text and images) and some design methodological aspects that arise from the documentary nature of the data. Reports details of a case study involving a specific document information system introduced at Press Ltd, a company in the Netherlands
    Source
    Journal of information science. 22(1996) no.4, S.287-297
  4. Huang, T.; Mehrotra, S.; Ramchandran, K.: Multimedia Access and Retrieval System (MARS) project (1997) 0.02
    0.020765753 = product of:
      0.083063014 = sum of:
        0.014161124 = weight(_text_:information in 758) [ClassicSimilarity], result of:
          0.014161124 = score(doc=758,freq=6.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.23515764 = fieldWeight in 758, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=758)
        0.052634936 = weight(_text_:system in 758) [ClassicSimilarity], result of:
          0.052634936 = score(doc=758,freq=8.0), product of:
            0.10804188 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03430388 = queryNorm
            0.4871716 = fieldWeight in 758, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0546875 = fieldNorm(doc=758)
        0.016266957 = product of:
          0.032533914 = sum of:
            0.032533914 = weight(_text_:22 in 758) [ClassicSimilarity], result of:
              0.032533914 = score(doc=758,freq=2.0), product of:
                0.120126344 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03430388 = queryNorm
                0.2708308 = fieldWeight in 758, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=758)
          0.5 = coord(1/2)
      0.25 = coord(3/12)
    
    Abstract
    Reports results of the MARS project, conducted at Illinois University, to bring together researchers in the fields of computer vision, compression, information management and database systems with the goal of developing an effective multimedia database management system. Describes the first step, involving the design and implementation of an image retrieval system incorporating novel approaches to image segmentation, representation, browsing and information retrieval supported by the developed system. Points to future directions for the MARS project
    Date
    22. 9.1997 19:16:05
    Imprint
    Urbana-Champaign, IL : Illinois University at Urbana-Champaign, Department of Library and Information Science
  5. Specht, G.: Architekturen von Multimedia-Datenbanksystemen zur Speicherung von Bildern und Videos (1998) 0.02
    0.017627032 = product of:
      0.10576219 = sum of:
        0.075685084 = weight(_text_:suche in 17) [ClassicSimilarity], result of:
          0.075685084 = score(doc=17,freq=2.0), product of:
            0.17138755 = queryWeight, product of:
              4.996156 = idf(docFreq=812, maxDocs=44218)
              0.03430388 = queryNorm
            0.441602 = fieldWeight in 17, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.996156 = idf(docFreq=812, maxDocs=44218)
              0.0625 = fieldNorm(doc=17)
        0.030077105 = weight(_text_:system in 17) [ClassicSimilarity], result of:
          0.030077105 = score(doc=17,freq=2.0), product of:
            0.10804188 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03430388 = queryNorm
            0.27838376 = fieldWeight in 17, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0625 = fieldNorm(doc=17)
      0.16666667 = coord(2/12)
    
    Abstract
    Dieses Papier stellt, ausgehend von der Architektur konventioneller Datenbanksysteme und den demgegenüber neuen Anforderungen an Multimedia-Datenbanksysteme, vier verschiedene Basisarchitekturen für Multimedia-Datenbanksysteme vor. Im letzten Abschnitt wird als ein Beispiel das System MultiMAP vorgestellt, ein multimediales Datenbanksystem, das an der TU München entwickelt wurde
    Source
    Inhaltsbezogene Suche von Bildern und Videosequenzen in digitalen multimedialen Archiven: Beiträge eines Workshops der KI'98 am 16./17.9.1998 in Bremen. Hrsg.: N. Luth
  6. Peters, G.; Gaese, V.: ¬Das DocCat-System in der Textdokumentation von G+J (2003) 0.02
    0.015544125 = product of:
      0.0621765 = sum of:
        0.037842542 = weight(_text_:suche in 1507) [ClassicSimilarity], result of:
          0.037842542 = score(doc=1507,freq=2.0), product of:
            0.17138755 = queryWeight, product of:
              4.996156 = idf(docFreq=812, maxDocs=44218)
              0.03430388 = queryNorm
            0.220801 = fieldWeight in 1507, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.996156 = idf(docFreq=812, maxDocs=44218)
              0.03125 = fieldNorm(doc=1507)
        0.015038553 = weight(_text_:system in 1507) [ClassicSimilarity], result of:
          0.015038553 = score(doc=1507,freq=2.0), product of:
            0.10804188 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03430388 = queryNorm
            0.13919188 = fieldWeight in 1507, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03125 = fieldNorm(doc=1507)
        0.009295405 = product of:
          0.01859081 = sum of:
            0.01859081 = weight(_text_:22 in 1507) [ClassicSimilarity], result of:
              0.01859081 = score(doc=1507,freq=2.0), product of:
                0.120126344 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03430388 = queryNorm
                0.15476047 = fieldWeight in 1507, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1507)
          0.5 = coord(1/2)
      0.25 = coord(3/12)
    
    Abstract
    Wir werden einmal die Grundlagen des Text-Mining-Systems bei IBM darstellen, dann werden wir das Projekt etwas umfangreicher und deutlicher darstellen, da kennen wir uns aus. Von daher haben wir zwei Teile, einmal Heidelberg, einmal Hamburg. Noch einmal zur Technologie. Text-Mining ist eine von IBM entwickelte Technologie, die in einer besonderen Ausformung und Programmierung für uns zusammengestellt wurde. Das Projekt hieß bei uns lange Zeit DocText Miner und heißt seit einiger Zeit auf Vorschlag von IBM DocCat, das soll eine Abkürzung für Document-Categoriser sein, sie ist ja auch nett und anschaulich. Wir fangen an mit Text-Mining, das bei IBM in Heidelberg entwickelt wurde. Die verstehen darunter das automatische Indexieren als eine Instanz, also einen Teil von Text-Mining. Probleme werden dabei gezeigt, und das Text-Mining ist eben eine Methode zur Strukturierung von und der Suche in großen Dokumentenmengen, die Extraktion von Informationen und, das ist der hohe Anspruch, von impliziten Zusammenhängen. Das letztere sei dahingestellt. IBM macht das quantitativ, empirisch, approximativ und schnell. das muss man wirklich sagen. Das Ziel, und das ist ganz wichtig für unser Projekt gewesen, ist nicht, den Text zu verstehen, sondern das Ergebnis dieser Verfahren ist, was sie auf Neudeutsch a bundle of words, a bag of words nennen, also eine Menge von bedeutungstragenden Begriffen aus einem Text zu extrahieren, aufgrund von Algorithmen, also im Wesentlichen aufgrund von Rechenoperationen. Es gibt eine ganze Menge von linguistischen Vorstudien, ein wenig Linguistik ist auch dabei, aber nicht die Grundlage der ganzen Geschichte. Was sie für uns gemacht haben, ist also die Annotierung von Pressetexten für unsere Pressedatenbank. Für diejenigen, die es noch nicht kennen: Gruner + Jahr führt eine Textdokumentation, die eine Datenbank führt, seit Anfang der 70er Jahre, da sind z.Z. etwa 6,5 Millionen Dokumente darin, davon etwas über 1 Million Volltexte ab 1993. Das Prinzip war lange Zeit, dass wir die Dokumente, die in der Datenbank gespeichert waren und sind, verschlagworten und dieses Prinzip haben wir auch dann, als der Volltext eingeführt wurde, in abgespeckter Form weitergeführt. Zu diesen 6,5 Millionen Dokumenten gehören dann eben auch ungefähr 10 Millionen Faksimileseiten, weil wir die Faksimiles auch noch standardmäßig aufheben.
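
    The "bag of words" mentioned above is simply the set of content-bearing terms an algorithm extracts from a text by counting, without understanding the text. A deliberately naive sketch of that idea (the stopword list is a toy; this is not the IBM/DocCat pipeline):

      # Naive bag-of-words extraction: frequent terms minus stopwords.
      import re
      from collections import Counter

      STOPWORDS = {"der", "die", "das", "und", "von", "aus", "einem", "also", "im", "the", "of", "and", "a"}

      def bag_of_words(text, top_n=10):
          tokens = re.findall(r"[a-zäöüß]+", text.lower())
          counts = Counter(t for t in tokens if t not in STOPWORDS and len(t) > 2)
          return counts.most_common(top_n)

      print(bag_of_words("Text Mining extrahiert bedeutungstragende Begriffe aus einem Text, "
                         "aufgrund von Algorithmen, also im Wesentlichen aufgrund von Rechenoperationen."))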
    Date
    22. 4.2003 11:45:36
  7. Casey, C.: ¬The cyberarchive : a look at the storage and preservation of Web sites (1998) 0.01
    0.014682835 = product of:
      0.088097006 = sum of:
        0.07992108 = weight(_text_:web in 2987) [ClassicSimilarity], result of:
          0.07992108 = score(doc=2987,freq=16.0), product of:
            0.111951075 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03430388 = queryNorm
            0.71389294 = fieldWeight in 2987, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2987)
        0.008175928 = weight(_text_:information in 2987) [ClassicSimilarity], result of:
          0.008175928 = score(doc=2987,freq=2.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.13576832 = fieldWeight in 2987, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2987)
      0.16666667 = coord(2/12)
    
    Abstract
    Although librarians recognize the Internet as a resource for knowledge and information, they have yet to make a formal effort to collect and preserve the Web sites found there. Addresses the need to set up a cyberarchive and some of the issues involved. With Web sites appearing and disappearing constantly from the Internet, there is an immediate need to recognize that they are a precious part of cultural and intellectual history and to preserve them for future study. Issues discussed include: Web site authorship vs. Web space ownership; physical media used to hold Web sites (hard drives, mainframes, CD-ROMs); collection development; acquiring Web sites; and adding Web sites to a collection
  8. Celentano, A.; Fugini, M.G.; Pozzi, S.: Knowledge-based document retrieval in office environments : the Kabiria system (1995) 0.01
    0.012766397 = product of:
      0.076598376 = sum of:
        0.009343918 = weight(_text_:information in 3224) [ClassicSimilarity], result of:
          0.009343918 = score(doc=3224,freq=2.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.1551638 = fieldWeight in 3224, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=3224)
        0.067254454 = weight(_text_:system in 3224) [ClassicSimilarity], result of:
          0.067254454 = score(doc=3224,freq=10.0), product of:
            0.10804188 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03430388 = queryNorm
            0.62248504 = fieldWeight in 3224, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0625 = fieldNorm(doc=3224)
      0.16666667 = coord(2/12)
    
    Abstract
    Proposes a document retrieval model and system based on the representation of knowledge describing the semantic contents of documents, the way in which the documents are managed by producers and by people in the office, and the application domain where the office operates. Discusses the knowledge representation issues needed for the document retrieval system and presents a document retrieval model that captures these issues. Describes such a system, named Kabiria. Covers the querying and browsing environments and the architecture of the system
    Source
    ACM transactions on information systems. 13(1995) no.3, S.237-268
  9. Vasudevan, M.C.; Mohan, M.; Kapoor, A.: Information system for knowledge management in the specialized division of a hospital (2006) 0.01
    0.0121102985 = product of:
      0.07266179 = sum of:
        0.020026851 = weight(_text_:information in 1499) [ClassicSimilarity], result of:
          0.020026851 = score(doc=1499,freq=12.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.3325631 = fieldWeight in 1499, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1499)
        0.052634936 = weight(_text_:system in 1499) [ClassicSimilarity], result of:
          0.052634936 = score(doc=1499,freq=8.0), product of:
            0.10804188 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03430388 = queryNorm
            0.4871716 = fieldWeight in 1499, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1499)
      0.16666667 = coord(2/12)
    
    Abstract
    Information systems are essential support for knowledge management in all types of enterprises. This paper describes the evolution and development of a specialized hospital information system. The system is designed to integrate access to and retrieval from databases of patients' case records and related images - CATSCAN, MRI, X-Ray - and to enable online access to the full text of relevant papers on the Internet/WWW. The generation of information products and services from the system is briefly described.
    Source
    Knowledge organization, information systems and other essays: Professor A. Neelameghan Festschrift. Ed. by K.S. Raghavan and K.N. Prasad
    Theme
    Information Resources Management
  10. Murthy, S.S.: ¬The National Tuberculosis Institute, Bangalore : recent development in library and information services (2006) 0.01
    0.010726172 = product of:
      0.06435703 = sum of:
        0.045669187 = weight(_text_:web in 1502) [ClassicSimilarity], result of:
          0.045669187 = score(doc=1502,freq=4.0), product of:
            0.111951075 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03430388 = queryNorm
            0.4079388 = fieldWeight in 1502, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=1502)
        0.018687837 = weight(_text_:information in 1502) [ClassicSimilarity], result of:
          0.018687837 = score(doc=1502,freq=8.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.3103276 = fieldWeight in 1502, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=1502)
      0.16666667 = coord(2/12)
    
    Abstract
    Briefly describes the information products and services, the related databases, the development of the digital library, web resources and web-based services, vocabulary control tools, networking, and other projects of the Library of the National Tuberculosis Institute (NTI), Bangalore. Acknowledges the involvement of, and the advice and assistance provided by, Prof. A. Neelameghan in these programmes and projects.
    Source
    Knowledge organization, information systems and other essays: Professor A. Neelameghan Festschrift. Ed. by K.S. Raghavan and K.N. Prasad
    Theme
    Information Resources Management
  11. Bondarenko, O.; Janssen, R.; Driessen, S.: Requirements for the design of a personal document-management system (2010) 0.01
    0.010699574 = product of:
      0.06419744 = sum of:
        0.0115625085 = weight(_text_:information in 3430) [ClassicSimilarity], result of:
          0.0115625085 = score(doc=3430,freq=4.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.1920054 = fieldWeight in 3430, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3430)
        0.052634936 = weight(_text_:system in 3430) [ClassicSimilarity], result of:
          0.052634936 = score(doc=3430,freq=8.0), product of:
            0.10804188 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03430388 = queryNorm
            0.4871716 = fieldWeight in 3430, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3430)
      0.16666667 = coord(2/12)
    
    Abstract
    In this article a set of requirements for the design of a personal document management system is presented, based on the results of three research studies (Bondarenko, [2006]; Bondarenko & Janssen, [2005]; Bondarenko & Janssen, [2009]). We propose a framework, based on layers of task decomposition, that helps to understand the needs of information workers with regard to personal document and task management. Relevant user processes are described and requirements for a document-management system are derived for each layer. The derived requirements are compared to related studies, and implications for system design are discussed.
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.3, S.468-482
  12. Dahmen, E.: Klassifikation als Ordnungssystem im elektronischen Pressearchiv (2003) 0.01
    0.010468807 = product of:
      0.06281284 = sum of:
        0.05351744 = weight(_text_:suche in 1513) [ClassicSimilarity], result of:
          0.05351744 = score(doc=1513,freq=4.0), product of:
            0.17138755 = queryWeight, product of:
              4.996156 = idf(docFreq=812, maxDocs=44218)
              0.03430388 = queryNorm
            0.31225976 = fieldWeight in 1513, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.996156 = idf(docFreq=812, maxDocs=44218)
              0.03125 = fieldNorm(doc=1513)
        0.009295405 = product of:
          0.01859081 = sum of:
            0.01859081 = weight(_text_:22 in 1513) [ClassicSimilarity], result of:
              0.01859081 = score(doc=1513,freq=2.0), product of:
                0.120126344 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03430388 = queryNorm
                0.15476047 = fieldWeight in 1513, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1513)
          0.5 = coord(1/2)
      0.16666667 = coord(2/12)
    
    Abstract
    Elektronische Pressearchive bieten einen schnellen und bequemen Zugriff auf einzelne Presseartikel. Während die ersten elektronischen Pressearchive noch mit Referenzdatensätzen arbeiteten und den gesamten Text nur als Bilddatei ablegten, ermöglichen verbesserte Speicherkapazitäten heute die Archivierung vollständiger Texte, mit Hilfe einer guten OCR-Erkennung sind zudem alle Wörter des Textes im Volltext recherchierbar. Der punktuelle Zugriff auf ein spezielles Dokument ist also prinzipiell bereits ohne die Nutzung beschreibender Daten möglich. Je spezifischer, eindeutiger und seltener der gesuchte Begriff ist, desto schneller kann ein passendes Dokument gefunden werden - oft war dies in einer konventionellen Sammlung gerade nicht der Fall, hier mußte man manchmal mit Geduld die "Stecknadel im Heuhaufen" suchen. Sog. "Volltextarchive" finden sich in großer Zahl im Internet, jeder kann dort über die Eingabe eines oder mehrerer Wörter nach Presseartikeln suchen, wird aber schnell feststellen, daß die auf diesem Weg erzielte Treffermenge nicht zu vergleichen ist mit der Anordnung von Presseausschnitten, die mit Hilfe einer Klassifikation in eine Aufstellungssystematik gebracht wurden. Diese Zugriffsmöglichkeit wird in professionell arbeitenden Archiven verständlicherweise als unzureichend empfunden, und aus diesem Grund werden ausgewählte Presseartikel weiterhin inhaltlich erschlossen, es werden also zusätzliche rechercherelevante Daten produziert. Diese beim Indexat erstellten Metadaten setzen sich zusammen aus Formaldaten, evtl. künstlichen Ordnungsmerkmalen, Sachbegriffen und natürlich Eigennamen wie z.B. Namen von Personen, Körperschaften, Ländern, Sendetiteln und anderen Individualbegriffen. Präzise Begriffe mit eindeutiger Benennung und Eigennamen können im elektronischen Archiv hervorragend recherchiert werden, denn in einer elektronischen Datenbank funktioniert die Suche technisch ohnehin nur nach eindeutigen Schriftzeichen, also nach geordneten Buchstaben und Zahlen. Diese "rechnerimmanente" Technik hat die Vorstellung, alles über die bloße Eingabe von Wörtern zu suchen, möglicherweise noch voran getrieben. Auch die Popularisierung von Suchmaschinen im Internet hat dazu beigetragen, diese Suchmöglichkeit für die einzig wahre zu erachten. Wie steht es aber mit der thematischen Suche? Systematischer und alphabetischer Zugriff ist ja keine Entweder-Oder-Frage: es kommt auf die Suchanfrage an! Wir postulieren also: beides sollte möglich sein.
    Date
    28. 4.2003 13:35:22
  13. D'Harcourt, J.-C.: Integrating documentation into the company information system with SGML (1995) 0.01
    0.010203881 = product of:
      0.061223287 = sum of:
        0.018687837 = weight(_text_:information in 2436) [ClassicSimilarity], result of:
          0.018687837 = score(doc=2436,freq=8.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.3103276 = fieldWeight in 2436, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=2436)
        0.04253545 = weight(_text_:system in 2436) [ClassicSimilarity], result of:
          0.04253545 = score(doc=2436,freq=4.0), product of:
            0.10804188 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03430388 = queryNorm
            0.3936941 = fieldWeight in 2436, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0625 = fieldNorm(doc=2436)
      0.16666667 = coord(2/12)
    
    Abstract
    Increased competition has forced many industries to cut production costs, to reduce the time needed to bring products to market, and to better satisfy customer needs. Furthermore, the internationalization of business has caused an enormous increase in the need for communication and information exchange. Describes how SGML, when considered as an integral part of a company's information system, can help meet these challenges and in so doing provide competitive advantage
    Source
    Managing information. 2(1995) no.3, S.25-27
  14. Mitchell, L.M.: Scottish Record Office computerised records location system (1997) 0.01
    0.010135144 = product of:
      0.060810864 = sum of:
        0.008175928 = weight(_text_:information in 696) [ClassicSimilarity], result of:
          0.008175928 = score(doc=696,freq=2.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.13576832 = fieldWeight in 696, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=696)
        0.052634936 = weight(_text_:system in 696) [ClassicSimilarity], result of:
          0.052634936 = score(doc=696,freq=8.0), product of:
            0.10804188 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03430388 = queryNorm
            0.4871716 = fieldWeight in 696, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0546875 = fieldNorm(doc=696)
      0.16666667 = coord(2/12)
    
    Abstract
    Describes the survey of the Scottish Record Office's entire holding of about 21 kilometers of records and the creation of the computerized records location system using Microsoft Access. The process lasted from Sep 93 to Spring 95. The system is based on 3 interlinked tables which give: room details, containing the number of each room on each floor, in each building; bay details, containing details of the collections in each room; and collection details, containing details of the collections in each bay. Combining data from the tables gives precise information on space use and availability. Explains the use of the tables, describes the graphic display and concludes that the system has provided a valuable tool for the records office
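
    The three interlinked tables described above translate directly into a small relational schema. A minimal sketch (SQLite via Python; the column names are invented here, and the original system was built in Microsoft Access):

      # Room -> bay -> collection, linked by foreign keys; joining the three tables
      # answers "what is stored where, and how much shelf space is used".
      import sqlite3

      con = sqlite3.connect(":memory:")
      con.executescript("""
          CREATE TABLE room (room_id INTEGER PRIMARY KEY, building TEXT, floor INTEGER, room_no TEXT);
          CREATE TABLE bay (bay_id INTEGER PRIMARY KEY, room_id INTEGER REFERENCES room(room_id), bay_no TEXT);
          CREATE TABLE collection (coll_id INTEGER PRIMARY KEY, bay_id INTEGER REFERENCES bay(bay_id),
                                   reference TEXT, shelf_metres REAL);
      """)
      rows = con.execute("""
          SELECT room.building, room.room_no, bay.bay_no, SUM(collection.shelf_metres)
          FROM collection
          JOIN bay ON bay.bay_id = collection.bay_id
          JOIN room ON room.room_id = bay.room_id
          GROUP BY room.building, room.room_no, bay.bay_no
      """).fetchall()
      print(rows)   # empty here; populated with survey data it reports space use per bay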
  15. Pritchard, J.A.T.: Integrated text and image management (1991) 0.01
    0.00963776 = product of:
      0.05782656 = sum of:
        0.020230178 = weight(_text_:information in 3145) [ClassicSimilarity], result of:
          0.020230178 = score(doc=3145,freq=6.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.3359395 = fieldWeight in 3145, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.078125 = fieldNorm(doc=3145)
        0.037596382 = weight(_text_:system in 3145) [ClassicSimilarity], result of:
          0.037596382 = score(doc=3145,freq=2.0), product of:
            0.10804188 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03430388 = queryNorm
            0.3479797 = fieldWeight in 3145, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.078125 = fieldNorm(doc=3145)
      0.16666667 = coord(2/12)
    
    Abstract
    An important recent development in information retrieval is the integration of several technologies (images, text, graphical data) into powerful, user-friendly text and image multimedia information systems. Provides examples of selected commercial developments such as the Topic System (by Verity, California)
    Source
    Information management report. 1991, Dec., S.14-16
  16. Hendley, T.: Planning and implementing an integrated document management system : a checklist of points to consider (1995) 0.01
    0.009291625 = product of:
      0.055749744 = sum of:
        0.013214295 = weight(_text_:information in 1995) [ClassicSimilarity], result of:
          0.013214295 = score(doc=1995,freq=4.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.21943474 = fieldWeight in 1995, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=1995)
        0.04253545 = weight(_text_:system in 1995) [ClassicSimilarity], result of:
          0.04253545 = score(doc=1995,freq=4.0), product of:
            0.10804188 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03430388 = queryNorm
            0.3936941 = fieldWeight in 1995, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0625 = fieldNorm(doc=1995)
      0.16666667 = coord(2/12)
    
    Abstract
    Offers a checklist of points to consider when installing a document management system, discussing each issue in turn: scoping the project; strategic issues (business process reengineering, information technology review, document and records management review); options for meeting objectives review; functional requirements definition; technical requirements definition; and cost benefit analysis. Once these issues have been considered, an operational requirement or invitation to tender should be drawn up for procurement, as well as evaluation criteria for the responses
    Source
    Information management and technology. 28(1995) no.2, S.63-66
  17. Boeri, R.J.; Hensel, M.: Set up a winning text retrieval system : carefully (1995) 0.01
    0.009291625 = product of:
      0.055749744 = sum of:
        0.013214295 = weight(_text_:information in 2809) [ClassicSimilarity], result of:
          0.013214295 = score(doc=2809,freq=4.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.21943474 = fieldWeight in 2809, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=2809)
        0.04253545 = weight(_text_:system in 2809) [ClassicSimilarity], result of:
          0.04253545 = score(doc=2809,freq=4.0), product of:
            0.10804188 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03430388 = queryNorm
            0.3936941 = fieldWeight in 2809, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0625 = fieldNorm(doc=2809)
      0.16666667 = coord(2/12)
    
    Abstract
    Considers some of the practical issues involved when a company plans to develop an in-house computerized document management system: conversion of paper to electronic form via optical character recognition (OCR) or rekeying; coding of document elements using SGML; indexing for information searching and retrieval (including proximity searching); and hybrid CD-ROM and online information retrieval systems
  18. Mateika, O.: Feasibility-Studie zur Eignung der Pressedatenbank Archimedes zum Einsatz in der Pressedokumentation des Norddeutschen Rundfunks (2004) 0.01
    0.009291625 = product of:
      0.055749744 = sum of:
        0.013214295 = weight(_text_:information in 3712) [ClassicSimilarity], result of:
          0.013214295 = score(doc=3712,freq=4.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.21943474 = fieldWeight in 3712, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=3712)
        0.04253545 = weight(_text_:system in 3712) [ClassicSimilarity], result of:
          0.04253545 = score(doc=3712,freq=4.0), product of:
            0.10804188 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03430388 = queryNorm
            0.3936941 = fieldWeight in 3712, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0625 = fieldNorm(doc=3712)
      0.16666667 = coord(2/12)
    
    Abstract
    Das Datenbanksystem Planet, derzeit eingesetzt als Information Retrieval System in Pressearchiven innerhalb des SAD-Verbunds der ARD, soll durch ein mindestens gleichwertiges System abgelöst werden. Archimedes, derzeit eingesetzt im Dokumentationsbereich des Westdeutschen Rundfunks Köln, ist eine mögliche Alternative. Ob es die Vorgaben und Anforderungen erfüllt, wird mit Hilfe einer Feasibility-Studie geprüft, notwendige Funktionalitäten und strategisch-qualitative Anforderungen bewertet.
    Imprint
    Hamburg : Hochschule für Angewandte Wissenschaften, FB Bibliothek und Information
  19. Altenhofen, C.; Stanisic-Petrovic, M.; Kieninger, T.; Hofmann, H.R.: Werkzeugeinsatz im Umfeld der Dokumentenverwaltung (2003) 0.01
    0.009260353 = product of:
      0.055562112 = sum of:
        0.008258934 = weight(_text_:information in 1824) [ClassicSimilarity], result of:
          0.008258934 = score(doc=1824,freq=4.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.13714671 = fieldWeight in 1824, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1824)
        0.047303177 = weight(_text_:suche in 1824) [ClassicSimilarity], result of:
          0.047303177 = score(doc=1824,freq=2.0), product of:
            0.17138755 = queryWeight, product of:
              4.996156 = idf(docFreq=812, maxDocs=44218)
              0.03430388 = queryNorm
            0.27600124 = fieldWeight in 1824, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.996156 = idf(docFreq=812, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1824)
      0.16666667 = coord(2/12)
    
    Abstract
    Die täglich zu bewältigende Papierflut wächst immer noch an. Weltweit werden täglich 200 Millionen Seiten Papier in Ordnern abgeheftet und mehr als 250 km neue Aktenordner angelegt. Diese gigantische Menge an papiergebundener Information wird noch durch die stark zunehmende Zahl elektronischer Dokumente ergänzt, so dass die in den letzten Jahren getätigten Aussagen, dass sich die zu verarbeitende Informationsmenge im Schnitt alle 2 bis 6 Jahre verdoppelt, als durchaus realistisch einzuschätzen sind. Diese Flut von Informationen erschwert in Unternehmen die bedarfsgerechte Verteilung und gezielte Suche nach wirklich relevanten Aussagen und Inhalten. Das Institut für Arbeitswissenschaft und Technologiemanagement (IAT) der Universität Stuttgart (Kooperationspartner des Fraunhofer IAO), das Deutsche Forschungszentrum für Künstliche Intelligenz (DFKI) und die Océ Document Technologies GmbH haben eine Studie zum "Werkzeugeinsatz im Umfeld der Dokumentenverwaltung" publiziert. In der Studie werden die Ergebnisse, die im Rahmen des Verbundprojekts "Adaptive READ" (www.adaptive-read.de) in zwei durchgeführten Befragungen zum "Werkzeugeinsatz im Umfeld der Dokumentenverwaltung" erzielt wurden, dargestellt. Die Studie beleuchtet sowohl das Umfeld als auch den aktuellen Einsatz von Werkzeugen zur Dokumentenverwaltung, behandelt aber auch Herausforderungen und Probleme bei der Systemeinführung. In diesem Beitrag werden Ergebnisse der Studie auszugsweise dargestellt, wobei auf die Ergebnisse der zweiten Befragung fokussiert wird.
    Source
    Information - Wissenschaft und Praxis. 54(2003) H.5, S.281-288
  20. Merve, N. v.d.: ¬The integration of document image processing and text retrieval principles (1993) 0.01
    0.008646562 = product of:
      0.05187937 = sum of:
        0.009343918 = weight(_text_:information in 6564) [ClassicSimilarity], result of:
          0.009343918 = score(doc=6564,freq=2.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.1551638 = fieldWeight in 6564, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=6564)
        0.04253545 = weight(_text_:system in 6564) [ClassicSimilarity], result of:
          0.04253545 = score(doc=6564,freq=4.0), product of:
            0.10804188 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03430388 = queryNorm
            0.3936941 = fieldWeight in 6564, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0625 = fieldNorm(doc=6564)
      0.16666667 = coord(2/12)
    
    Abstract
    Only 10% of the information used by an organisation is in electronic form, hence the need to examine ways of processing and loading paper documents automatically into an electronic database. Discusses the principles of a document image processing (DIP) system; the difference between text retrieval and DIP; text retrieval systems and concept retrieval. Describes the TOPIC intelligent text retrieval system based on concept retrieval. Covers the TYPO operator developed for misspellings, character transpositions and 'dirty' text retrieved as output from OCR processes. Refers to an electronic news clipping service application
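
    The abstract does not say how the TYPO operator works internally; as a rough illustration of the general idea (tolerant matching of query terms against 'dirty' OCR output), a minimal edit-similarity sketch:

      # Tolerant term matching against OCR-damaged text; illustrative only,
      # not the actual TOPIC/TYPO algorithm.
      from difflib import SequenceMatcher

      def fuzzy_contains(term, text, threshold=0.7):
          term = term.lower()
          return any(SequenceMatcher(None, term, word.lower()).ratio() >= threshold
                     for word in text.split())

      ocr_line = "integratlon of docurnent irnage processing"   # typical OCR damage
      print(fuzzy_contains("document", ocr_line))   # True  (matches "docurnent")
      print(fuzzy_contains("image", ocr_line))      # True  (matches "irnage")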

Years

Languages

  • e 123
  • d 70
  • f 4
  • sp 2
  • a 1
  • nl 1

Types

  • a 165
  • m 16
  • x 11
  • s 6
  • r 3
  • el 2