Search (200 results, page 1 of 10)

  • Filter: theme_ss:"Dokumentenmanagement"
  1. Mas, S.; Marleau, Y.: Proposition of a faceted classification model to support corporate information organization and digital records management (2009) 0.36
    0.36063033 = product of:
      0.63110304 = sum of:
        0.023380058 = product of:
          0.11690029 = sum of:
            0.11690029 = weight(_text_:3a in 2918) [ClassicSimilarity], result of:
              0.11690029 = score(doc=2918,freq=2.0), product of:
                0.20800096 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.02453417 = queryNorm
                0.56201804 = fieldWeight in 2918, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2918)
          0.2 = coord(1/5)
        0.016133383 = weight(_text_:system in 2918) [ClassicSimilarity], result of:
          0.016133383 = score(doc=2918,freq=2.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.20878783 = fieldWeight in 2918, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=2918)
        0.11690029 = weight(_text_:2f in 2918) [ClassicSimilarity], result of:
          0.11690029 = score(doc=2918,freq=2.0), product of:
            0.20800096 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.02453417 = queryNorm
            0.56201804 = fieldWeight in 2918, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=2918)
        0.11690029 = weight(_text_:2f in 2918) [ClassicSimilarity], result of:
          0.11690029 = score(doc=2918,freq=2.0), product of:
            0.20800096 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.02453417 = queryNorm
            0.56201804 = fieldWeight in 2918, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=2918)
        0.11690029 = weight(_text_:2f in 2918) [ClassicSimilarity], result of:
          0.11690029 = score(doc=2918,freq=2.0), product of:
            0.20800096 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.02453417 = queryNorm
            0.56201804 = fieldWeight in 2918, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=2918)
        0.11690029 = weight(_text_:2f in 2918) [ClassicSimilarity], result of:
          0.11690029 = score(doc=2918,freq=2.0), product of:
            0.20800096 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.02453417 = queryNorm
            0.56201804 = fieldWeight in 2918, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=2918)
        0.0070881573 = weight(_text_:information in 2918) [ClassicSimilarity], result of:
          0.0070881573 = score(doc=2918,freq=4.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.16457605 = fieldWeight in 2918, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2918)
        0.11690029 = weight(_text_:2f in 2918) [ClassicSimilarity], result of:
          0.11690029 = score(doc=2918,freq=2.0), product of:
            0.20800096 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.02453417 = queryNorm
            0.56201804 = fieldWeight in 2918, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=2918)
      0.5714286 = coord(8/14)
    
    Abstract
    The employees of an organization often use a personal hierarchical classification scheme to organize digital documents that are stored on their own workstations. As this may make it hard for other employees to retrieve these documents, there is a risk that the organization will lose track of needed documentation. Furthermore, the inherent boundaries of such a hierarchical structure require making arbitrary decisions about which specific criteria the classification will be based on (for instance, the administrative activity or the document type, although a document can have several attributes and require classification in several classes). A faceted classification model to support corporate information organization is proposed. Partially based on Ranganathan's facet theory, this model aims not only to standardize the organization of digital documents, but also to simplify the management of a document throughout its life cycle for both individuals and organizations, while ensuring compliance with regulatory and policy requirements.
    Footnote
    Cf.: http://ieeexplore.ieee.org/Xplore/login.jsp?reload=true&url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel5%2F4755313%2F4755314%2F04755480.pdf%3Farnumber%3D4755480&authDecision=-203.
    Source
    System Sciences, 2009. HICSS '09. 42nd Hawaii International Conference
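    The score breakdowns shown with each result are standard Lucene "explain" output for the ClassicSimilarity (TF-IDF) ranking formula. As a sanity check, the following minimal Python sketch reproduces the leaf entry weight(_text_:system in 2918) and the final score of result 1 from the factors listed above; it is an illustrative reconstruction of the ClassicSimilarity arithmetic, not code taken from the search engine behind this page.

      import math

      # Factors copied from the explain tree of doc 2918, term "system"
      idf = 3.1495528          # idf(docFreq=5152, maxDocs=44218)
      query_norm = 0.02453417  # queryNorm
      freq = 2.0               # termFreq within the field
      field_norm = 0.046875    # fieldNorm(doc=2918)

      # ClassicSimilarity building blocks
      tf = math.sqrt(freq)                       # 1.4142135
      query_weight = idf * query_norm            # 0.07727166 = queryWeight
      field_weight = tf * idf * field_norm       # 0.20878783 = fieldWeight
      term_score = query_weight * field_weight   # 0.016133383
      print(term_score)

      # Final score of result 1: sum of the listed term scores times coord(8/14).
      # The first value already contains a nested coord(1/5) factor (0.2 * 0.11690029).
      term_scores = [0.023380058, 0.016133383, 0.11690029, 0.11690029,
                     0.11690029, 0.11690029, 0.0070881573, 0.11690029]
      doc_score = sum(term_scores) * (8 / 14)    # ~0.36063033
      print(doc_score)

    The repeated 0.11690029 entries for the tokens "2f" and "3a" presumably stem from the URL-encoded "%2F"/"%3A" sequences in the record's footnote URL, which is why such otherwise rare tokens carry an unusually high idf.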
  2. Huang, T.; Mehrotra, S.; Ramchandran, K.: Multimedia Access and Retrieval System (MARS) project (1997) 0.03
    0.026894525 = product of:
      0.09413083 = sum of:
        0.037644558 = weight(_text_:system in 758) [ClassicSimilarity], result of:
          0.037644558 = score(doc=758,freq=8.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.4871716 = fieldWeight in 758, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0546875 = fieldNorm(doc=758)
        0.010128049 = weight(_text_:information in 758) [ClassicSimilarity], result of:
          0.010128049 = score(doc=758,freq=6.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.23515764 = fieldWeight in 758, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=758)
        0.034724083 = weight(_text_:retrieval in 758) [ClassicSimilarity], result of:
          0.034724083 = score(doc=758,freq=8.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.46789268 = fieldWeight in 758, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=758)
        0.011634145 = product of:
          0.02326829 = sum of:
            0.02326829 = weight(_text_:22 in 758) [ClassicSimilarity], result of:
              0.02326829 = score(doc=758,freq=2.0), product of:
                0.085914485 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02453417 = queryNorm
                0.2708308 = fieldWeight in 758, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=758)
          0.5 = coord(1/2)
      0.2857143 = coord(4/14)
    
    Abstract
    Reports results of the MARS project, conducted at the University of Illinois, to bring together researchers in the fields of computer vision, compression, information management and database systems with the goal of developing an effective multimedia database management system. Describes the first step, involving the design and implementation of an image retrieval system incorporating novel approaches to image segmentation, representation, browsing and information retrieval, supported by the developed system. Points to future directions for the MARS project.
    Date
    22. 9.1997 19:16:05
    Imprint
    Urbana-Champaign, IL : Illinois University at Urbana-Champaign, Department of Library and Information Science
    Source
    Digital image access and retrieval: Proceedings of the 1996 Clinic on Library Applications of Data Processing, 24-26 Mar 1996. Ed.: P.B. Heidorn u. B. Sandore
  3. Schlenkrich, C.: Aspekte neuer Regelwerksarbeit : Multimediales Datenmodell für ARD und ZDF (2003) 0.02
    0.023251332 = product of:
      0.16275932 = sum of:
        0.015210699 = weight(_text_:system in 1515) [ClassicSimilarity], result of:
          0.015210699 = score(doc=1515,freq=4.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.19684705 = fieldWeight in 1515, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03125 = fieldNorm(doc=1515)
        0.14754862 = sum of:
          0.13425244 = weight(_text_:datenmodell in 1515) [ClassicSimilarity], result of:
            0.13425244 = score(doc=1515,freq=8.0), product of:
              0.19304088 = queryWeight, product of:
                7.8682456 = idf(docFreq=45, maxDocs=44218)
                0.02453417 = queryNorm
              0.6954612 = fieldWeight in 1515, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                7.8682456 = idf(docFreq=45, maxDocs=44218)
                0.03125 = fieldNorm(doc=1515)
          0.0132961655 = weight(_text_:22 in 1515) [ClassicSimilarity], result of:
            0.0132961655 = score(doc=1515,freq=2.0), product of:
              0.085914485 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.02453417 = queryNorm
              0.15476047 = fieldWeight in 1515, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1515)
      0.14285715 = coord(2/14)
    
    Abstract
    We are in the middle of the work, so I can only give you interim results. Things are in flux, and we are indeed trying to make the "old cataloguing rules" fit and to rework them for the multimedia domain. Very briefly on the working group: it grew out of the AG Orgatec and the conferences of the heads of the sound and radio archives and of the television archives, with the task of producing a binding multimedia set of rules. Digitisation has clearly changed the tasks in the archive areas. We are trying to capture these processes and to re-regulate and redefine them, from the production process through to archiving. We began our work in April of last year, so we have now been at it for almost exactly a year, and in the course of this short talk I will be able to report on how we have organised our work. A word on the members of the working group - I think it is quite interesting simply to see from which areas and backgrounds our working group is drawn. We have representatives of Bayerischer Rundfunk, Norddeutscher Rundfunk, Westdeutscher Rundfunk and Mitteldeutscher Rundfunk, from east to west and from south to north, and from the most varied fields of work, from audio and video through to the online and print areas. It is a very mixed group, but precisely because of this diversity, which we want to and must represent, the discussion is highly stimulating. The goals: we want to develop and adopt a binding multimedia data model that maps, in particular, the digital production centre and archive workflow of the ARD and - something we were especially pleased about - in good old tradition also in joint cooperation with ZDF. We want to define rules for capture and indexing. We want to generate and provide intermediary data in order to map and safeguard the production workflow, and the data model that we have set ourselves as a target is to lay the foundations for programme exchange, so that systems can communicate with one another internally and externally. One might now think that a new multimedia data model could be put together quite easily as a mix of the old rule sets for television, speech and music: simply set the data lists of the individual rule sets side by side, clarify what is common and what is specific, add what is missing, eliminate what may not be needed, and reassemble it - and the new rule set is finished. Unfortunately it is not quite that simple, because there are a whole series of aspects to take into account that make a preceding level of abstraction absolutely necessary.
    Date
    22. 4.2003 12:05:56
  4. Medien-Informationsmanagement : Archivarische, dokumentarische, betriebswirtschaftliche, rechtliche und Berufsbild-Aspekte ; [Frühjahrstagung der Fachgruppe 7 im Jahr 2000 in Weimar und Folgetagung 2001 in Köln] (2003) 0.02
    0.023096127 = product of:
      0.080836445 = sum of:
        0.008066691 = weight(_text_:system in 1833) [ClassicSimilarity], result of:
          0.008066691 = score(doc=1833,freq=2.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.104393914 = fieldWeight in 1833, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1833)
        0.0050120843 = weight(_text_:information in 1833) [ClassicSimilarity], result of:
          0.0050120843 = score(doc=1833,freq=8.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.116372846 = fieldWeight in 1833, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1833)
        0.0074408753 = weight(_text_:retrieval in 1833) [ClassicSimilarity], result of:
          0.0074408753 = score(doc=1833,freq=2.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.10026272 = fieldWeight in 1833, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1833)
        0.06031679 = sum of:
          0.050344665 = weight(_text_:datenmodell in 1833) [ClassicSimilarity], result of:
            0.050344665 = score(doc=1833,freq=2.0), product of:
              0.19304088 = queryWeight, product of:
                7.8682456 = idf(docFreq=45, maxDocs=44218)
                0.02453417 = queryNorm
              0.26079795 = fieldWeight in 1833, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.8682456 = idf(docFreq=45, maxDocs=44218)
                0.0234375 = fieldNorm(doc=1833)
          0.009972124 = weight(_text_:22 in 1833) [ClassicSimilarity], result of:
            0.009972124 = score(doc=1833,freq=2.0), product of:
              0.085914485 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.02453417 = queryNorm
              0.116070345 = fieldWeight in 1833, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0234375 = fieldNorm(doc=1833)
      0.2857143 = coord(4/14)
    
    Abstract
    When, in the 1970s, the term "information manager" was increasingly promoted for people who had until then worked under the title of documentalists, this was occasionally met with smiles in the established circles of archivists and librarians and read as a sign of an identity crisis, or at least of uncertainty about the professional profile so described. For the profession of media archivists/media documentalists, however, who have been organised since 1960 in Fachgruppe 7 of the association, later Verband deutscher Archivare (VdA), this positioning in the face of new substantive challenges (the information flood) and technologies (electronic data processing) was an everyday matter of course from early on. "Stop, it won't work without us!" was the headline of an article in the association journal "Info 7" that dealt with the installation of ever more powerful networks and ever faster data highways. Information, information society: at that time these terms were understood almost exclusively in a technical sense. The informatised, not the informed, society stood in the foreground - which in turn called critics onto the scene, from Joseph Weizenbaum in the USA to the information ecologists in Bremen. In the national, sometimes merely regional, projects and pilot schemes with data highways - including early Btx - it never became really clear which contents, in what form, were to be driven through these networks and roads, and who was actually supposed to select, portion, position - in short, manage - these contents. At the latest with the World Wide Web these projects became obsolete, at least as far as the hardware and software were concerned. What remained is the topic of contents (in new German: content). And - ever more pressingly, in a sense that is no longer merely technical - the topic of information management. "MedienInformationsManagement" was the title of the spring conference of Fachgruppe 7 in 2000 in Weimar, and the follow-up conference in 2001 in Cologne, which set a documentary pragmatism against multimedia production, likewise dealt with the business field of content and with content management systems. The lectures and discussion contributions from these two conferences, collected in this sixth volume of the series Beiträge zur Mediendokumentation, illuminate the title topic from the most varied perspectives: archival, documentary, commercial, professional and legal. What becomes clear is that the professional title media archivist/media documentalist stands fairly precisely for everything that happens today with so-called old and new media in an organisational, i.e. ordering and mediating, sense. This applies in particular to the Internet and the intranets born of it. Both are just as much in need of the ordering hand that has been trained on the old media - books, newspapers, sound recordings, film, etc. - for they live to a large extent on them. That the Internet is nevertheless a medium sui generis and confronts the old information professions with entirely new challenges - this, too, runs through the contributions from Weimar and Cologne.
    Content
    Includes, among others, the following contributions (documentary aspects): Günter Perers/Volker Gaese: Das DocCat-System in der Textdokumentation von Gr+J (Weimar 2000) Thomas Gerick: Finden statt suchen. Knowledge Retrieval in Wissensbanken. Mit organisiertem Wissen zu mehr Erfolg (Weimar 2000) Winfried Gödert: Aufbereitung und Rezeption von Information (Weimar 2000) Elisabeth Damen: Klassifikation als Ordnungssystem im elektronischen Pressearchiv (Köln 2001) Clemens Schlenkrich: Aspekte neuer Regelwerksarbeit - Multimediales Datenmodell für ARD und ZDF (Köln 2001) Josef Wandeler: Comprenez-vous only Bahnhof'? - Mehrsprachigkeit in der Mediendokumentation (Köln 2001)
    Date
    11. 5.2008 19:49:22
    LCSH
    Information technology / Management / Congresses
    Subject
    Information technology / Management / Congresses
  5. Celentano, A.; Fugini, M.G.; Pozzi, S.: Knowledge-based document retrieval in office environments : the Kabiria system (1995) 0.02
    0.020243121 = product of:
      0.0944679 = sum of:
        0.048100453 = weight(_text_:system in 3224) [ClassicSimilarity], result of:
          0.048100453 = score(doc=3224,freq=10.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.62248504 = fieldWeight in 3224, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0625 = fieldNorm(doc=3224)
        0.006682779 = weight(_text_:information in 3224) [ClassicSimilarity], result of:
          0.006682779 = score(doc=3224,freq=2.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.1551638 = fieldWeight in 3224, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=3224)
        0.03968467 = weight(_text_:retrieval in 3224) [ClassicSimilarity], result of:
          0.03968467 = score(doc=3224,freq=8.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.5347345 = fieldWeight in 3224, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=3224)
      0.21428572 = coord(3/14)
    
    Abstract
    Proposes a document retrieval model and system based on the representation of knowledge describing the semantic contents of documents, the way in which the documents are managed by producers and by people in the office, and the application domain in which the office operates. Discusses the knowledge representation issues needed for the document retrieval system and presents a document retrieval model that captures these issues. Describes such a system, named Kabiria. Covers the querying and browsing environments and the architecture of the system.
    Source
    ACM transactions on information systems. 13(1995) no.3, S.237-268
  6. Toebak, P.: ¬Das Dossier nicht die Klassifikation als Herzstück des Records Management (2009) 0.02
    0.019922923 = product of:
      0.13946046 = sum of:
        0.004176737 = weight(_text_:information in 3220) [ClassicSimilarity], result of:
          0.004176737 = score(doc=3220,freq=2.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.09697737 = fieldWeight in 3220, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3220)
        0.13528372 = sum of:
          0.11866352 = weight(_text_:datenmodell in 3220) [ClassicSimilarity], result of:
            0.11866352 = score(doc=3220,freq=4.0), product of:
              0.19304088 = queryWeight, product of:
                7.8682456 = idf(docFreq=45, maxDocs=44218)
                0.02453417 = queryNorm
              0.6147067 = fieldWeight in 3220, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                7.8682456 = idf(docFreq=45, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3220)
          0.016620208 = weight(_text_:22 in 3220) [ClassicSimilarity], result of:
            0.016620208 = score(doc=3220,freq=2.0), product of:
              0.085914485 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.02453417 = queryNorm
              0.19345059 = fieldWeight in 3220, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3220)
      0.14285715 = coord(2/14)
    
    Abstract
    The September/October 2009 issue of IWP is a special issue on records management. It is interesting to see this management discipline examined for once from a completely different professional perspective. Many aspects are addressed: terminology, the role of archives, interdisciplinarity, long-term preservation and standardisation. In the article "Wissensorganisation und Records Management. Was ist der 'state of the art'?" knowledge organisation as the weak point of records management takes centre stage. Rightly so: the logical data model of DOMEA - the same applies to GEVER and ELAK - does not, for example, correspond in all respects to business reality. In everyday work this often creates more comprehension problems for staff than they can or want to cope with. The systemic support provided by the EDRMS in use (not all products deserve this name, incidentally) is weakened as a result. In many cases the knowledge organisation is not (yet) adequate. The problem, however, lies less with the classification (file plan), as Ulrike Spree believes. Anomalies occur here too. An ordering system in records management comprises more than just the classification. Moreover, the fundamental, inherent differences between records management on the one hand and knowledge and information management on the other must not be forgotten. In records management the central tool of information representation and organisation is not the classification but clean dossier formation and its stringent, structurally stable implementation in the data model. The author does not address this. I will respond to her contribution in the special issue from this point of view.
    Date
    6.12.2009 17:22:17
    Source
    Information - Wissenschaft und Praxis. 60(2009) H.8, S.443-446
  7. Merve, N. v.d.: ¬The integration of document image processing and text retrieval principles (1993) 0.02
    0.018365951 = product of:
      0.08570777 = sum of:
        0.030421399 = weight(_text_:system in 6564) [ClassicSimilarity], result of:
          0.030421399 = score(doc=6564,freq=4.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.3936941 = fieldWeight in 6564, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0625 = fieldNorm(doc=6564)
        0.006682779 = weight(_text_:information in 6564) [ClassicSimilarity], result of:
          0.006682779 = score(doc=6564,freq=2.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.1551638 = fieldWeight in 6564, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=6564)
        0.04860359 = weight(_text_:retrieval in 6564) [ClassicSimilarity], result of:
          0.04860359 = score(doc=6564,freq=12.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.6549133 = fieldWeight in 6564, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=6564)
      0.21428572 = coord(3/14)
    
    Abstract
    Only 10% of the information used by an organisation is in electronic form, hence the need to examine ways of processing and loading paper documents automatically into an electronic database. Discusses the principles of a document image processing (DIP) system; the difference between text retrieval and DIP; text retrieval systems and concept retrieval. Describes the TOPIC intelligent text retrieval system based on concept retrieval. Covers the TYPO operator developed for misspellings, character transpositions and 'dirty' text retrieved as output from OCR processes. Refers to an electronic news clipping service application.
  8. Boeri, R.J.; Hensel, M.: Set up a winning text retrieval system : carefully (1995) 0.02
    0.015908616 = product of:
      0.07424021 = sum of:
        0.030421399 = weight(_text_:system in 2809) [ClassicSimilarity], result of:
          0.030421399 = score(doc=2809,freq=4.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.3936941 = fieldWeight in 2809, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0625 = fieldNorm(doc=2809)
        0.009450877 = weight(_text_:information in 2809) [ClassicSimilarity], result of:
          0.009450877 = score(doc=2809,freq=4.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.21943474 = fieldWeight in 2809, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=2809)
        0.03436793 = weight(_text_:retrieval in 2809) [ClassicSimilarity], result of:
          0.03436793 = score(doc=2809,freq=6.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.46309367 = fieldWeight in 2809, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=2809)
      0.21428572 = coord(3/14)
    
    Abstract
    Considers some of the practical issues involved when a company plans to develop an in-house computerized document management system: conversion of paper to electronic form via optical character recognition (OCR) or rekeying; coding of document elements using SGML; indexing for information searching and retrieval (including proximity searching); and hybrid CD-ROM and online information retrieval systems.
  9. Vasudevan, M.C.; Mohan, M.; Kapoor, A.: Information system for knowledge management in the specialized division of a hospital (2006) 0.01
    0.014856391 = product of:
      0.06932982 = sum of:
        0.037644558 = weight(_text_:system in 1499) [ClassicSimilarity], result of:
          0.037644558 = score(doc=1499,freq=8.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.4871716 = fieldWeight in 1499, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1499)
        0.014323224 = weight(_text_:information in 1499) [ClassicSimilarity], result of:
          0.014323224 = score(doc=1499,freq=12.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.3325631 = fieldWeight in 1499, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1499)
        0.017362041 = weight(_text_:retrieval in 1499) [ClassicSimilarity], result of:
          0.017362041 = score(doc=1499,freq=2.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.23394634 = fieldWeight in 1499, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1499)
      0.21428572 = coord(3/14)
    
    Abstract
    Information systems are an essential support for knowledge management in all types of enterprises. This paper describes the evolution and development of a specialized hospital information system. The system is designed to integrate access to and retrieval from databases of patients' case records and related images - CAT scan, MRI, X-ray - and to enable online access to the full text of relevant papers on the Internet/WWW. The generation of information products and services from the system is briefly described.
    Source
    Knowledge organization, information systems and other essays: Professor A. Neelameghan Festschrift. Ed. by K.S. Raghavan and K.N. Prasad
    Theme
    Information Resources Management
  10. Ashford, J.H.: Full text retrieval in document management : a review (1995) 0.01
    0.01454542 = product of:
      0.06787863 = sum of:
        0.021511177 = weight(_text_:system in 2054) [ClassicSimilarity], result of:
          0.021511177 = score(doc=2054,freq=2.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.27838376 = fieldWeight in 2054, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0625 = fieldNorm(doc=2054)
        0.006682779 = weight(_text_:information in 2054) [ClassicSimilarity], result of:
          0.006682779 = score(doc=2054,freq=2.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.1551638 = fieldWeight in 2054, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=2054)
        0.03968467 = weight(_text_:retrieval in 2054) [ClassicSimilarity], result of:
          0.03968467 = score(doc=2054,freq=8.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.5347345 = fieldWeight in 2054, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=2054)
      0.21428572 = coord(3/14)
    
    Abstract
    Full text management, as applied to document management, tends to be centred on text storage and retrieval. Recent developments are concerned with integration with relational database management system products to deliver document management services offering both the flexibility of text retrieval and the ability to support process-based functions. There has been a move towards client-server architectures, more user-friendly interfaces and more flexible, easier-to-understand retrieval. Advocates caution in choosing tasks for full text methods. Identifies document management functions for which the combined use of database management systems or special purpose tools should be considered.
    Source
    Information management and technology. 28(1995) no.1, S.28-32
  11. Pritchard, J.A.T.: Integrated text and image management (1991) 0.01
    0.014177256 = product of:
      0.06616053 = sum of:
        0.02688897 = weight(_text_:system in 3145) [ClassicSimilarity], result of:
          0.02688897 = score(doc=3145,freq=2.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.3479797 = fieldWeight in 3145, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.078125 = fieldNorm(doc=3145)
        0.014468643 = weight(_text_:information in 3145) [ClassicSimilarity], result of:
          0.014468643 = score(doc=3145,freq=6.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.3359395 = fieldWeight in 3145, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.078125 = fieldNorm(doc=3145)
        0.024802918 = weight(_text_:retrieval in 3145) [ClassicSimilarity], result of:
          0.024802918 = score(doc=3145,freq=2.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.33420905 = fieldWeight in 3145, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.078125 = fieldNorm(doc=3145)
      0.21428572 = coord(3/14)
    
    Abstract
    An important recent development in information retrieval is the integration of several technologies (images, text, graphical data) into powerful, user-friendly text and image multimedia information systems. Provides examples of selected commercial developments, such as the Topic System (by Verity, California).
    Source
    Information management report. 1991, Dec., S.14-16
  12. Black, K.: ELISE: an online image retrieval system (1993) 0.01
    0.013964031 = product of:
      0.065165475 = sum of:
        0.030421399 = weight(_text_:system in 6631) [ClassicSimilarity], result of:
          0.030421399 = score(doc=6631,freq=4.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.3936941 = fieldWeight in 6631, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0625 = fieldNorm(doc=6631)
        0.006682779 = weight(_text_:information in 6631) [ClassicSimilarity], result of:
          0.006682779 = score(doc=6631,freq=2.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.1551638 = fieldWeight in 6631, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=6631)
        0.028061297 = weight(_text_:retrieval in 6631) [ClassicSimilarity], result of:
          0.028061297 = score(doc=6631,freq=4.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.37811437 = fieldWeight in 6631, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=6631)
      0.21428572 = coord(3/14)
    
    Abstract
    Research at De Montfort Univ., Division of Learning Development, in Leicester, is focused on promoting the idea of the electronic library. Describes the Electronic Library Image Service for Europe (ELISE) project, funded by the Commission of the European Communities, its overall aim, and the five main challenges for the project team, which include: identifying image bank technical requirements; exploring storage and retrieval mechanisms; exploring client needs and designing user interfaces; producing a pilot system; and devising a model for the international interconnection of systems.
    Source
    Aslib information. 21(1993) nos.7/8, S.293-295
  13. Masiero, P.C.: Authoring and searching in dynamically growing hypertext databases (1994) 0.01
    0.013964031 = product of:
      0.065165475 = sum of:
        0.030421399 = weight(_text_:system in 1575) [ClassicSimilarity], result of:
          0.030421399 = score(doc=1575,freq=4.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.3936941 = fieldWeight in 1575, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0625 = fieldNorm(doc=1575)
        0.006682779 = weight(_text_:information in 1575) [ClassicSimilarity], result of:
          0.006682779 = score(doc=1575,freq=2.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.1551638 = fieldWeight in 1575, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=1575)
        0.028061297 = weight(_text_:retrieval in 1575) [ClassicSimilarity], result of:
          0.028061297 = score(doc=1575,freq=4.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.37811437 = fieldWeight in 1575, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=1575)
      0.21428572 = coord(3/14)
    
    Abstract
    Shows how an application in office information systems can be modelled so that a dynamically growing database of hypertext documents is created, automatically extended and easily searched. Proposes a method for analyzing office applications which relies on a model based on statecharts to record the flow of documents within the system. Describes a prototype implementation of a hypertext system to support the creation, storage and retrieval of documents associated with formal face-to-face meetings. Special features to be incorporated into hypertext systems aimed at supporting the storage and retrieval of office documents are also identified.
  14. Rapke, K.: Automatische Indexierung von Volltexten für die Gruner+Jahr Pressedatenbank (2001) 0.01
    0.013774462 = product of:
      0.06428082 = sum of:
        0.022816047 = weight(_text_:system in 6386) [ClassicSimilarity], result of:
          0.022816047 = score(doc=6386,freq=4.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.29527056 = fieldWeight in 6386, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=6386)
        0.0050120843 = weight(_text_:information in 6386) [ClassicSimilarity], result of:
          0.0050120843 = score(doc=6386,freq=2.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.116372846 = fieldWeight in 6386, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=6386)
        0.036452696 = weight(_text_:retrieval in 6386) [ClassicSimilarity], result of:
          0.036452696 = score(doc=6386,freq=12.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.49118498 = fieldWeight in 6386, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=6386)
      0.21428572 = coord(3/14)
    
    Abstract
    Retrieval tests are the most widely accepted method of justifying new subject indexing methods against traditional ones. As part of a diploma thesis, two fundamentally different systems for automatic subject indexing were tested and evaluated using the press database of the publishing house Gruner + Jahr (G+J). Natural-language retrieval was examined in comparison with Boolean retrieval. The two systems are Autonomy by Autonomy Inc. and DocCat, which was adapted by IBM to the database structure of the G+J press database. The former is a probabilistic system based on natural-language retrieval. DocCat, by contrast, is based on Boolean retrieval and is a learning system that indexes on the basis of an intellectually created training template. Methodologically, the evaluation starts from the real application context of G+J's text documentation. The tests are assessed from both statistical and qualitative points of view. One result of the tests is that DocCat shows some shortcomings compared with intellectual subject indexing that still have to be remedied, while Autonomy's natural-language retrieval, in this setting and for the specific requirements of G+J's text documentation, cannot be used as it stands.
    Source
    nfd Information - Wissenschaft und Praxis. 52(2001) H.5, S.251-262
  15. Koulopoulos, T.M.; Frappaolo, C.: Electronic document management systems : where are they today? (1993) 0.01
    0.013605518 = product of:
      0.06349242 = sum of:
        0.018822279 = weight(_text_:system in 982) [ClassicSimilarity], result of:
          0.018822279 = score(doc=982,freq=2.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.2435858 = fieldWeight in 982, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0546875 = fieldNorm(doc=982)
        0.0058474317 = weight(_text_:information in 982) [ClassicSimilarity], result of:
          0.0058474317 = score(doc=982,freq=2.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.13576832 = fieldWeight in 982, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=982)
        0.038822707 = weight(_text_:retrieval in 982) [ClassicSimilarity], result of:
          0.038822707 = score(doc=982,freq=10.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.5231199 = fieldWeight in 982, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=982)
      0.21428572 = coord(3/14)
    
    Abstract
    Reports results of a market study to determine the changes that have taken place in text retrieval and imaging systems: the 2 dominant components of electronic document management systems (EDMS). Organizations are focusing on integrated technologies, a sign that imaging and text retrieval are making their way towards the mainstream of information management. Reports data for: text retrieval market revenue by customer segment (industry, government and library); components of an integrated image-based EDMS; component platforms of the text retrieval system (PC, Macintosh etc.); areas of improvement for current imaging systems; importance of key benefits of implementing text retrieval within the organization; and areas of improvement for current text retrieval systems.
  16. Rosman, G.; Meer, K.v.d.; Sol, H.G.: ¬The design of document information systems (1996) 0.01
    0.012903456 = product of:
      0.060216125 = sum of:
        0.02688897 = weight(_text_:system in 7750) [ClassicSimilarity], result of:
          0.02688897 = score(doc=7750,freq=2.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.3479797 = fieldWeight in 7750, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.078125 = fieldNorm(doc=7750)
        0.016706947 = weight(_text_:information in 7750) [ClassicSimilarity], result of:
          0.016706947 = score(doc=7750,freq=8.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.38790947 = fieldWeight in 7750, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.078125 = fieldNorm(doc=7750)
        0.016620208 = product of:
          0.033240415 = sum of:
            0.033240415 = weight(_text_:22 in 7750) [ClassicSimilarity], result of:
              0.033240415 = score(doc=7750,freq=2.0), product of:
                0.085914485 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02453417 = queryNorm
                0.38690117 = fieldWeight in 7750, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=7750)
          0.5 = coord(1/2)
      0.21428572 = coord(3/14)
    
    Abstract
    Discusses the costs and benefits of document information systems (involving text and images) and some design-methodological aspects that arise from the documentary nature of the data. Reports details of a case study involving a specific document information system introduced at Press Ltd, a company in the Netherlands.
    Source
    Journal of information science. 22(1996) no.4, S.287-297
  17. Mateika, O.: Feasibility-Studie zur Eignung der Pressedatenbank Archimedes zum Einsatz in der Pressedokumentation des Norddeutschen Rundfunks (2004) 0.01
    0.012795988 = product of:
      0.059714608 = sum of:
        0.030421399 = weight(_text_:system in 3712) [ClassicSimilarity], result of:
          0.030421399 = score(doc=3712,freq=4.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.3936941 = fieldWeight in 3712, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0625 = fieldNorm(doc=3712)
        0.009450877 = weight(_text_:information in 3712) [ClassicSimilarity], result of:
          0.009450877 = score(doc=3712,freq=4.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.21943474 = fieldWeight in 3712, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=3712)
        0.019842334 = weight(_text_:retrieval in 3712) [ClassicSimilarity], result of:
          0.019842334 = score(doc=3712,freq=2.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.26736724 = fieldWeight in 3712, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=3712)
      0.21428572 = coord(3/14)
    
    Abstract
    The database system Planet, currently used as an information retrieval system in press archives within the ARD's SAD network, is to be replaced by a system that is at least its equal. Archimedes, currently used in the documentation department of Westdeutscher Rundfunk in Cologne, is a possible alternative. A feasibility study examines whether it meets the specifications and requirements, evaluating the necessary functionality and the strategic and qualitative requirements.
    Imprint
    Hamburg : Hochschule für Angewandte Wissenschaften, FB Bibliothek und Information
  18. Alexander, J.: Customs and excise process 2.5 million documents (1997) 0.01
    0.0122028245 = product of:
      0.056946512 = sum of:
        0.030421399 = weight(_text_:system in 2427) [ClassicSimilarity], result of:
          0.030421399 = score(doc=2427,freq=4.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.3936941 = fieldWeight in 2427, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0625 = fieldNorm(doc=2427)
        0.006682779 = weight(_text_:information in 2427) [ClassicSimilarity], result of:
          0.006682779 = score(doc=2427,freq=2.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.1551638 = fieldWeight in 2427, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=2427)
        0.019842334 = weight(_text_:retrieval in 2427) [ClassicSimilarity], result of:
          0.019842334 = score(doc=2427,freq=2.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.26736724 = fieldWeight in 2427, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=2427)
      0.21428572 = coord(3/14)
    
    Abstract
    The HM Customs and Excise operation in Salford, Manchester, UK, has installed an electronic document management system from Graphic Data to streamline the handling of import entries. Its aim was to reduce filing and storage and to improve access to documentation. The system involves scanning documents and CD-based storage and retrieval. Because of legal admissibility issues, documentation is retained in its paper format in deep storage.
    Source
    Information management and technology. 30(1997) no.6, S.280-281
  19. Lam-Adesina, A.M.; Jones, G.J.F.: Examining and improving the effectiveness of relevance feedback for retrieval of scanned text documents (2006) 0.01
    0.011462139 = product of:
      0.053489983 = sum of:
        0.013444485 = weight(_text_:system in 977) [ClassicSimilarity], result of:
          0.013444485 = score(doc=977,freq=2.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.17398985 = fieldWeight in 977, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=977)
        0.0072343214 = weight(_text_:information in 977) [ClassicSimilarity], result of:
          0.0072343214 = score(doc=977,freq=6.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.16796975 = fieldWeight in 977, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=977)
        0.032811176 = weight(_text_:retrieval in 977) [ClassicSimilarity], result of:
          0.032811176 = score(doc=977,freq=14.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.442117 = fieldWeight in 977, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=977)
      0.21428572 = coord(3/14)
    
    Abstract
    Important legacy paper documents are digitized and collected in online accessible archives. This enables the preservation, sharing, and, significantly, the searching of these documents. The text contents of these document images can be transcribed automatically using OCR systems and then stored in an information retrieval system. However, OCR systems make errors in character recognition, which have previously been shown to affect document retrieval behaviour. In particular, relevance feedback query-expansion methods, which are often effective for improving electronic text retrieval, are observed to be less reliable for retrieval of scanned document images. Our experimental examination of the effects of character recognition errors on an ad hoc OCR retrieval task demonstrates that, while baseline information retrieval can remain relatively unaffected by transcription errors, relevance feedback via query expansion becomes highly unstable. This paper examines the reason for this behaviour and introduces novel modifications to standard relevance feedback methods. These methods are shown experimentally to improve the effectiveness of relevance feedback for errorful OCR transcriptions. The new methods combine similar recognised character strings based on term collection frequency and a string edit-distance measure (a minimal illustrative sketch of this merging idea follows this entry). The techniques are domain independent and make no use of external resources such as dictionaries or training data.
    Source
    Information processing and management. 42(2006) no.3, S.633-649
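    The string-merging step described in entry 19 can be pictured as grouping OCR term variants whose edit distance is small and pooling their collection frequencies before query expansion is run. A minimal sketch of that idea, not the authors' implementation; the distance threshold, the most-frequent-spelling heuristic, and the function names are illustrative assumptions:

      from collections import Counter

      def edit_distance(a, b):
          # Levenshtein distance via a single-row dynamic programme
          row = list(range(len(b) + 1))
          for i, ca in enumerate(a, 1):
              prev, row[0] = row[0], i
              for j, cb in enumerate(b, 1):
                  prev, row[j] = row[j], min(row[j] + 1,         # deletion
                                             row[j - 1] + 1,     # insertion
                                             prev + (ca != cb))  # substitution
          return row[-1]

      def merge_ocr_variants(term_freqs, max_dist=1):
          # Group terms within max_dist edits of an already-seen spelling and
          # pool their collection frequencies under the most frequent spelling.
          merged, seen = Counter(), []
          for term, freq in sorted(term_freqs.items(), key=lambda kv: -kv[1]):
              rep = next((s for s in seen if edit_distance(term, s) <= max_dist), None)
              if rep is None:
                  rep = term
                  seen.append(term)
              merged[rep] += freq
          return merged

      # OCR produced "infornation" alongside "information"
      print(merge_ocr_variants({"information": 40, "infornation": 3, "retrieval": 25}))
      # Counter({'information': 43, 'retrieval': 25})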
  20. Rapke, K.: Automatische Indexierung von Volltexten für die Gruner+Jahr Pressedatenbank (2001) 0.01
    0.01091156 = product of:
      0.050920613 = sum of:
        0.019013375 = weight(_text_:system in 5863) [ClassicSimilarity], result of:
          0.019013375 = score(doc=5863,freq=4.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.24605882 = fieldWeight in 5863, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5863)
        0.004176737 = weight(_text_:information in 5863) [ClassicSimilarity], result of:
          0.004176737 = score(doc=5863,freq=2.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.09697737 = fieldWeight in 5863, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5863)
        0.027730504 = weight(_text_:retrieval in 5863) [ClassicSimilarity], result of:
          0.027730504 = score(doc=5863,freq=10.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.37365708 = fieldWeight in 5863, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5863)
      0.21428572 = coord(3/14)
    
    Abstract
    Retrieval tests are the most widely accepted method of justifying new subject indexing approaches against traditional ones. As part of a diploma thesis, two fundamentally different systems for automatic subject indexing were tested and evaluated on the press database of the publishing house Gruner + Jahr (G+J). The study compared natural-language retrieval with Boolean retrieval. The two systems are Autonomy from Autonomy Inc. and DocCat, which IBM adapted to the database structure of the G+J press database. The former is a probabilistic system based on natural-language retrieval; DocCat, by contrast, is based on Boolean retrieval and is a learning system that indexes on the basis of an intellectually created training template. Methodologically, the evaluation starts from the real application context of G+J's text documentation, and the tests are assessed from both statistical and qualitative points of view. One result is that DocCat shows some shortcomings compared with intellectual subject indexing that still need to be remedied, while Autonomy's natural-language retrieval, in this setting and for the specific requirements of G+J's text documentation, cannot be used as it stands. (A minimal illustrative sketch of such a retrieval test follows this entry.)
    Source
    Information Research & Content Management: Orientierung, Ordnung und Organisation im Wissensmarkt; 23. DGI-Online-Tagung der DGI und 53. Jahrestagung der Deutschen Gesellschaft für Informationswissenschaft und Informationspraxis e.V. DGI, Frankfurt am Main, 8.-10.5.2001. Proceedings. Hrsg.: R. Schmidt
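    A retrieval test of the kind described in entry 20 is usually reported per query as precision (share of retrieved documents that are relevant) and recall (share of relevant documents that are retrieved), computed in the same way for a Boolean match set and for the top of a ranked, natural-language result list. A minimal sketch; the document ids and relevance judgements are invented for illustration:

      def precision_recall(retrieved, relevant):
          # Set-based precision and recall for a single query
          retrieved, relevant = set(retrieved), set(relevant)
          hits = len(retrieved & relevant)
          precision = hits / len(retrieved) if retrieved else 0.0
          recall = hits / len(relevant) if relevant else 0.0
          return precision, recall

      relevant = {1, 2, 3, 4}                  # judged relevant documents
      boolean_run = {2, 3, 7, 9}               # exact Boolean match set
      ranked_run = [2, 1, 5, 3, 8]             # top 5 of a ranked result list

      print(precision_recall(boolean_run, relevant))  # (0.5, 0.5)
      print(precision_recall(ranked_run, relevant))   # (0.6, 0.75)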

Languages

  • e 124
  • d 68
  • f 4
  • sp 2
  • a 1
  • nl 1

Types

  • a 165
  • m 16
  • x 11
  • s 6
  • r 3
  • el 1