Search (3878 results, page 1 of 194)

  • type_ss:"a"
  1. Staud, J.L.: Datenbanken entwerfen mit dem ER-Modell (1995) 0.10
    0.10434585 = product of:
      0.31303754 = sum of:
        0.08554042 = weight(_text_:relationship in 1393) [ClassicSimilarity], result of:
          0.08554042 = score(doc=1393,freq=2.0), product of:
            0.2292412 = queryWeight, product of:
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.047513504 = queryNorm
            0.3731459 = fieldWeight in 1393, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1393)
        0.22749713 = weight(_text_:datenmodell in 1393) [ClassicSimilarity], result of:
          0.22749713 = score(doc=1393,freq=2.0), product of:
            0.3738479 = queryWeight, product of:
              7.8682456 = idf(docFreq=45, maxDocs=44218)
              0.047513504 = queryNorm
            0.60852855 = fieldWeight in 1393, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.8682456 = idf(docFreq=45, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1393)
      0.33333334 = coord(2/6)
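    The explain tree above can be reproduced by hand. A minimal sketch of Lucene's ClassicSimilarity (TF-IDF) formulas, plugging in the constants shown for hit 1 (doc 1393); the helper names are illustrative, only the formulas follow Lucene:

    ```python
    import math

    # Lucene ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    def idf(doc_freq, max_docs):
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    # One term clause: score = queryWeight * fieldWeight
    #   queryWeight = idf * queryNorm
    #   fieldWeight = tf * idf * fieldNorm, with tf = sqrt(termFreq)
    def clause_score(freq, doc_freq, field_norm, query_norm, max_docs=44218):
        i = idf(doc_freq, max_docs)
        query_weight = i * query_norm
        field_weight = math.sqrt(freq) * i * field_norm
        return query_weight * field_weight

    query_norm = 0.047513504
    s_rel = clause_score(2.0, 964, 0.0546875, query_norm)  # "relationship": 0.08554042
    s_dm  = clause_score(2.0,  45, 0.0546875, query_norm)  # "datenmodell": 0.22749713
    total = (s_rel + s_dm) * (2 / 6)                       # coord(2/6) -> 0.10434585
    ```

    The same formulas explain every hit below: only freq, docFreq, fieldNorm and the coord factor change from entry to entry.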
    
    Abstract
    Building databases - a task we face again and again, if only to adapt existing databases to the changes of a dynamic world. The first step in building a database is the creation of a data model, more precisely: of a conceptual data model. This represents an image of the application domain under consideration (the "slice of the world") and is created with a set of tools which, on the one hand, captures as much as possible of the structures, processes, rules, etc. of that domain and, on the other hand, can be translated into a "physical model". Besides the relational data model, the most widespread approach to modelling such domains for databases is the entity-relationship approach. The models created with it are called ER models. Introduction and examples.
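    As a toy illustration of the step the abstract describes - translating a conceptual ER model into a "physical model" - two entities and a 1:n relationship might be mapped onto relational tables as follows (the schema is a made-up example, not taken from the book):

    ```python
    import sqlite3

    # Hypothetical ER model: Customer --places--> Order.
    # Entities become tables; the 1:n relationship becomes a foreign key.
    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL
    );
    CREATE TABLE "order" (
        order_id    INTEGER PRIMARY KEY,
        placed_on   TEXT,
        customer_id INTEGER NOT NULL REFERENCES customer(customer_id)
    );
    """)
    con.execute("INSERT INTO customer VALUES (1, 'Staud')")
    con.execute("INSERT INTO \"order\" VALUES (10, '1995-01-01', 1)")
    row = con.execute("""
        SELECT c.name, o.order_id
        FROM customer c JOIN "order" o USING (customer_id)
    """).fetchone()
    ```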
  2. Schlenkrich, C.: Aspekte neuer Regelwerksarbeit : Multimediales Datenmodell für ARD und ZDF (2003) 0.10
    0.0952488 = product of:
      0.2857464 = sum of:
        0.2599967 = weight(_text_:datenmodell in 1515) [ClassicSimilarity], result of:
          0.2599967 = score(doc=1515,freq=8.0), product of:
            0.3738479 = queryWeight, product of:
              7.8682456 = idf(docFreq=45, maxDocs=44218)
              0.047513504 = queryNorm
            0.6954612 = fieldWeight in 1515, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              7.8682456 = idf(docFreq=45, maxDocs=44218)
              0.03125 = fieldNorm(doc=1515)
        0.025749695 = weight(_text_:22 in 1515) [ClassicSimilarity], result of:
          0.025749695 = score(doc=1515,freq=2.0), product of:
            0.16638419 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.047513504 = queryNorm
            0.15476047 = fieldWeight in 1515, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.03125 = fieldNorm(doc=1515)
      0.33333334 = coord(2/6)
    
    Abstract
    We are in the middle of the work, so I can only pass on interim results. Things are in flux, and we are indeed trying to bring the "old rule sets" up to date and rework them for the multimedia domain. Very briefly on the working group: it emerged from the AG Orgatec, the conference of sound and radio archive heads and the conference of television archive heads, with the task of creating a binding multimedia rule set. Digitisation has clearly changed the tasks in the archive domains. We are trying to capture these processes and to regulate and redefine them anew, from the production process through to archiving. We began our work in April of last year, so we have now been at it for almost exactly one year, and in the course of this short talk I will be able to report on how we have organised our work. A word about the members of the working group - I think it is quite interesting simply to see from which areas and spectra our group is drawn. We have representatives of Bayerischer Rundfunk, Norddeutscher Rundfunk, Westdeutscher Rundfunk, Mitteldeutscher Rundfunk - from east to west, from south to north - and from the most varied fields of work, from audio and video through to online and print. It is a very mixed group, but precisely because of the diversity that we want to represent, and must represent, the discussion is highly stimulating. The goals: we want to develop and adopt a binding multimedia data model which maps, in particular, the digital production-centre and archive workflow of ARD and - we were especially pleased about this - in good old tradition, in joint cooperation, of ZDF as well. We want to define rules for capture and indexing.
We want to generate and provide intermediary data in order to map and safeguard the production workflow, and the data model that we have set ourselves as a goal is to lay the foundations for programme exchange, so that systems can communicate with one another internally and externally. One might think that a new multimedia data model could be assembled quite easily from a mix of the old rule sets for television, speech and music: simply place the data lists of the individual rule sets side by side synoptically, clarify what is common and what is specific, add what is missing, eliminate what may not be needed, and put it together anew - and the new rule set is ready. Unfortunately it is not quite that simple, for there is a whole series of aspects to consider, which also make an upstream abstraction layer strictly necessary.
    Date
    22. 4.2003 12:05:56
  3. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.09
    0.0883389 = product of:
      0.26501667 = sum of:
        0.22639212 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
          0.22639212 = score(doc=562,freq=2.0), product of:
            0.40282002 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.047513504 = queryNorm
            0.56201804 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.03862454 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
          0.03862454 = score(doc=562,freq=2.0), product of:
            0.16638419 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.047513504 = queryNorm
            0.23214069 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
      0.33333334 = coord(2/6)
    
    Content
    Cf.: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
    Date
    8. 1.2013 10:22:32
  4. Toebak, P.: ¬Das Dossier nicht die Klassifikation als Herzstück des Records Management (2009) 0.09
    0.08733132 = product of:
      0.26199394 = sum of:
        0.22980681 = weight(_text_:datenmodell in 3220) [ClassicSimilarity], result of:
          0.22980681 = score(doc=3220,freq=4.0), product of:
            0.3738479 = queryWeight, product of:
              7.8682456 = idf(docFreq=45, maxDocs=44218)
              0.047513504 = queryNorm
            0.6147067 = fieldWeight in 3220, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              7.8682456 = idf(docFreq=45, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3220)
        0.03218712 = weight(_text_:22 in 3220) [ClassicSimilarity], result of:
          0.03218712 = score(doc=3220,freq=2.0), product of:
            0.16638419 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.047513504 = queryNorm
            0.19345059 = fieldWeight in 3220, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3220)
      0.33333334 = coord(2/6)
    
    Abstract
    The September/October 2009 issue of IWP is a special issue on records management. It is interesting that this management discipline is for once viewed from a quite different professional perspective. Many aspects are addressed: terminology, the role of archives, interdisciplinarity, long-term preservation and standardisation. In the article "Wissensorganisation und Records Management. Was ist der 'state of the art'?", knowledge organisation as a weak point of records management takes centre stage - and rightly so: the logical data model of DOMEA (the same applies to GEVER and ELAK), for example, does not correspond in all respects to business reality. In everyday work this often creates more comprehension problems for staff than they can, or want to, cope with. This weakens the systemic support provided by the EDRMS in use (not all products deserve that name, incidentally). In many cases the knowledge organisation is not (yet) sufficient. The problem, however, lies less with the classification (file plan) than Ulrike Spree believes; anomalies occur here as well. An ordering system in records management comprises more than just the classification. Moreover, the fundamental, inherent differences between records management on the one hand and knowledge and information management on the other must not be forgotten. In records management the central tool of information representation and organisation is not the classification but clean dossier formation and its stringent, structurally stable implementation in the data model. The author does not address this. I will respond to her contribution in the special issue from this point of view.
    Date
    6.12.2009 17:22:17
  5. Xiao, G.: ¬A knowledge classification model based on the relationship between science and human needs (2013) 0.07
    0.07462993 = product of:
      0.2238898 = sum of:
        0.14664072 = weight(_text_:relationship in 138) [ClassicSimilarity], result of:
          0.14664072 = score(doc=138,freq=2.0), product of:
            0.2292412 = queryWeight, product of:
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.047513504 = queryNorm
            0.6396787 = fieldWeight in 138, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.09375 = fieldNorm(doc=138)
        0.07724908 = weight(_text_:22 in 138) [ClassicSimilarity], result of:
          0.07724908 = score(doc=138,freq=2.0), product of:
            0.16638419 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.047513504 = queryNorm
            0.46428138 = fieldWeight in 138, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.09375 = fieldNorm(doc=138)
      0.33333334 = coord(2/6)
    
    Date
    22. 2.2013 12:36:34
  6. Bostian, R.; Robbins, A.: Effective instruction for searching CD-ROM indexes (1990) 0.06
    0.062191613 = product of:
      0.18657483 = sum of:
        0.12220059 = weight(_text_:relationship in 7552) [ClassicSimilarity], result of:
          0.12220059 = score(doc=7552,freq=2.0), product of:
            0.2292412 = queryWeight, product of:
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.047513504 = queryNorm
            0.53306556 = fieldWeight in 7552, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.078125 = fieldNorm(doc=7552)
        0.06437424 = weight(_text_:22 in 7552) [ClassicSimilarity], result of:
          0.06437424 = score(doc=7552,freq=2.0), product of:
            0.16638419 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.047513504 = queryNorm
            0.38690117 = fieldWeight in 7552, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.078125 = fieldNorm(doc=7552)
      0.33333334 = coord(2/6)
    
    Abstract
    Describes an experiment that examined the relationship between successful searching of CD-ROM databases by undergraduate students and various types of instruction provided by the library staff. The findings indicate that the only level of instruction that resulted in a significant difference was a live demonstration of searches.
    Date
    21. 3.2008 13:22:03
  7. Kaiser, A.: Zeitbezogene Datenbanksysteme : eine Bestandsaufnahme und ausgewählte Problemstellungen (1996) 0.06
    0.061281815 = product of:
      0.3676909 = sum of:
        0.3676909 = weight(_text_:datenmodell in 6121) [ClassicSimilarity], result of:
          0.3676909 = score(doc=6121,freq=4.0), product of:
            0.3738479 = queryWeight, product of:
              7.8682456 = idf(docFreq=45, maxDocs=44218)
              0.047513504 = queryNorm
            0.9835307 = fieldWeight in 6121, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              7.8682456 = idf(docFreq=45, maxDocs=44218)
              0.0625 = fieldNorm(doc=6121)
      0.16666667 = coord(1/6)
    
    Abstract
    This paper specifies the requirements placed on temporal database systems. Starting from these requirements, it examines to what extent they can be met with the relational data model and standard SQL (SQL2). Subsequently, a temporally extended data model (BCDM) and the database language TSQL2 based on it are presented. After a comparison of the two languages SQL2 and TSQL2 by means of an example, some problem areas of temporal database systems are discussed in more detail.
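    The kind of valid-time query that TSQL2 adds over plain SQL2 can be imitated by hand, which shows what the extension buys. A minimal sketch (table contents and names are invented, not from the paper): every fact carries a validity interval, and a snapshot query filters on it.

    ```python
    from datetime import date

    # A valid-time relation: each row is (key, value, valid_from, valid_to),
    # with the interval half-open: [valid_from, valid_to).
    salaries = [
        ("alice", 3000, date(1995, 1, 1), date(1996, 1, 1)),
        ("alice", 3200, date(1996, 1, 1), date(9999, 12, 31)),
    ]

    def as_of(rows, t):
        """Snapshot of the relation at time t - what TSQL2 expresses declaratively."""
        return [(name, value) for name, value, start, end in rows if start <= t < end]

    snapshot_1995 = as_of(salaries, date(1995, 6, 1))
    snapshot_1997 = as_of(salaries, date(1997, 1, 1))
    ```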
  8. Weisbrod, D.: Pflichtablieferung von Dissertationen mit Forschungsdaten an die DNB : Anlagerungsformen und Datenmodell (2018) 0.06
    0.056290947 = product of:
      0.33774567 = sum of:
        0.33774567 = weight(_text_:datenmodell in 4352) [ClassicSimilarity], result of:
          0.33774567 = score(doc=4352,freq=6.0), product of:
            0.3738479 = queryWeight, product of:
              7.8682456 = idf(docFreq=45, maxDocs=44218)
              0.047513504 = queryNorm
            0.9034307 = fieldWeight in 4352, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              7.8682456 = idf(docFreq=45, maxDocs=44218)
              0.046875 = fieldNorm(doc=4352)
      0.16666667 = coord(1/6)
    
    Abstract
    Within the DFG project "Elektronische Dissertationen Plus" (eDissPlus), Humboldt-Universität zu Berlin (HU) and the Deutsche Nationalbibliothek (DNB) are developing solutions for the contemporary archiving and publication of research data that arise in connection with doctoral projects. This requires taking into account the different ways in which research data can be attached to a dissertation, representing them in a data model, and revising the metadata schema XMetaDissPlus used by the DNB. This is necessary in order to document the relations between the dissertation, the deposited research-data supplements, and the data that are to remain on external repositories, and to make them searchable in the DNB catalogue. This paper presents the data model and the changes to the metadata schema.
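    The relations the abstract mentions - a dissertation linked both to supplements deposited at the DNB and to data held on external repositories - could be modelled roughly as below. All class and field names are invented for illustration; the actual XMetaDissPlus schema differs.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Supplement:
        title: str
        deposited_at_dnb: bool   # False: the data remain on an external repository
        location: str            # DNB identifier or external URL (hypothetical values)

    @dataclass
    class Dissertation:
        title: str
        supplements: list = field(default_factory=list)

    diss = Dissertation("Example thesis")
    diss.supplements.append(Supplement("measurement data", True, "urn:nbn:de:example"))
    diss.supplements.append(Supplement("source code", False, "https://example.org/repo"))

    # The catalogue must be able to distinguish both kinds of relation:
    external = [s for s in diss.supplements if not s.deposited_at_dnb]
    ```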
  9. O'Neill, E.T.: FRBR: Functional requirements for bibliographic records application of the entity-relationship model to Humphry Clinker (2002) 0.05
    0.051462572 = product of:
      0.15438771 = sum of:
        0.12220059 = weight(_text_:relationship in 2434) [ClassicSimilarity], result of:
          0.12220059 = score(doc=2434,freq=8.0), product of:
            0.2292412 = queryWeight, product of:
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.047513504 = queryNorm
            0.53306556 = fieldWeight in 2434, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2434)
        0.03218712 = weight(_text_:22 in 2434) [ClassicSimilarity], result of:
          0.03218712 = score(doc=2434,freq=2.0), product of:
            0.16638419 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.047513504 = queryNorm
            0.19345059 = fieldWeight in 2434, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2434)
      0.33333334 = coord(2/6)
    
    Abstract
    The report from the IFLA (International Federation of Library Associations and Institutions) Study Group on the Functional Requirements for Bibliographic Records (FRBR) recommended a new approach to cataloging based on an entity-relationship model. This study examined a single work, The Expedition of Humphry Clinker, to determine benefits and drawbacks associated with creating such an entity-relationship model. Humphry Clinker was selected for several reasons - it has been previously studied, it is widely held, and it is a work of mid-level complexity. In addition to analyzing the bibliographic records, many books were examined to ensure the accuracy of the resulting FRBR model. While it was possible to identify works and manifestations, identifying expressions was problematic. Reliable identification of expressions frequently necessitated the examination of the books themselves. Enhanced manifestation records where the roles of editors, illustrators, translators, and other contributors are explicitly identified may be a viable alternative to expressions. For Humphry Clinker, the enhanced record approach avoids the problem of identifying expressions while providing similar functionality. With the enhanced manifestation record, the three remaining entity-relationship structures - works, manifestations, and items - the FRBR model provides a powerful means to improve bibliographic organization and navigation.
    Date
    10. 9.2000 17:38:22
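    The FRBR group-1 chain the abstract discusses can be sketched as a small data model. Following the article's suggestion, the expression level is replaced by enhanced manifestation records that name contributor roles explicitly; this is a generic illustration, not O'Neill's actual encoding.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Item:              # a single physical copy
        barcode: str

    @dataclass
    class Manifestation:     # a published edition; the "enhanced record" level
        edition: str
        contributors: dict = field(default_factory=dict)  # role -> name
        items: list = field(default_factory=list)

    @dataclass
    class Work:              # the abstract creation
        title: str
        manifestations: list = field(default_factory=list)

    clinker = Work("The Expedition of Humphry Clinker")
    m = Manifestation("London, 1771", {"editor": "n.n."}, [Item("b1"), Item("b2")])
    clinker.manifestations.append(m)

    copies = sum(len(m.items) for m in clinker.manifestations)
    ```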
  10. Schrodt, R.: Tiefen und Untiefen im wissenschaftlichen Sprachgebrauch (2008) 0.05
    0.05030936 = product of:
      0.30185616 = sum of:
        0.30185616 = weight(_text_:3a in 140) [ClassicSimilarity], result of:
          0.30185616 = score(doc=140,freq=2.0), product of:
            0.40282002 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.047513504 = queryNorm
            0.7493574 = fieldWeight in 140, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=140)
      0.16666667 = coord(1/6)
    
    Content
    See also: https://studylibde.com/doc/13053640/richard-schrodt. See also: http%3A%2F%2Fwww.univie.ac.at%2FGermanistik%2Fschrodt%2Fvorlesung%2Fwissenschaftssprache.doc&usg=AOvVaw1lDLDR6NFf1W0-oC9mEUJf.
  11. Popper, K.R.: Three worlds : the Tanner lecture on human values. Deliverd at the University of Michigan, April 7, 1978 (1978) 0.05
    0.05030936 = product of:
      0.30185616 = sum of:
        0.30185616 = weight(_text_:3a in 230) [ClassicSimilarity], result of:
          0.30185616 = score(doc=230,freq=2.0), product of:
            0.40282002 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.047513504 = queryNorm
            0.7493574 = fieldWeight in 230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=230)
      0.16666667 = coord(1/6)
    
    Source
    https%3A%2F%2Ftannerlectures.utah.edu%2F_documents%2Fa-to-z%2Fp%2Fpopper80.pdf&usg=AOvVaw3f4QRTEH-OEBmoYr2J_c7H
  12. Callahan, P.F.: ISBD(S) revised edition and AACR2 1988 revision : a comparison (1992) 0.05
    0.04975329 = product of:
      0.14925987 = sum of:
        0.097760476 = weight(_text_:relationship in 5993) [ClassicSimilarity], result of:
          0.097760476 = score(doc=5993,freq=2.0), product of:
            0.2292412 = queryWeight, product of:
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.047513504 = queryNorm
            0.42645246 = fieldWeight in 5993, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.0625 = fieldNorm(doc=5993)
        0.05149939 = weight(_text_:22 in 5993) [ClassicSimilarity], result of:
          0.05149939 = score(doc=5993,freq=2.0), product of:
            0.16638419 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.047513504 = queryNorm
            0.30952093 = fieldWeight in 5993, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=5993)
      0.33333334 = coord(2/6)
    
    Abstract
    Article appearing as part of an issue devoted to the theme, Serials Cataloguing: Modern Perspectives and International Developments. Pt.2. In 1988, a revision of AACR2 and a revised edition of the ISBD for serials were published. Discusses and compares the origins of these 2 standards and their relationship. Describes the inconsistencies between the 2 texts and evaluates their compatibility. Concludes that there is a high degree of compatibility on major points but that relatively little progress has been made since the original editions in reducing the substantial number of minor differences
    Source
    Serials librarian. 22(1992) no.3/4, S.249-262
  13. Bovey, J.D.: Event-based personal retrieval (1996) 0.05
    0.04975329 = product of:
      0.14925987 = sum of:
        0.097760476 = weight(_text_:relationship in 7704) [ClassicSimilarity], result of:
          0.097760476 = score(doc=7704,freq=2.0), product of:
            0.2292412 = queryWeight, product of:
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.047513504 = queryNorm
            0.42645246 = fieldWeight in 7704, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.0625 = fieldNorm(doc=7704)
        0.05149939 = weight(_text_:22 in 7704) [ClassicSimilarity], result of:
          0.05149939 = score(doc=7704,freq=2.0), product of:
            0.16638419 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.047513504 = queryNorm
            0.30952093 = fieldWeight in 7704, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=7704)
      0.33333334 = coord(2/6)
    
    Abstract
    People who work in a research, academic or business environment often have personal information collections which are large enough to need retrieval aids. A major difference between personal information retrieval and standard information retrieval is that the items to be retrieved are often associated with events in the searcher's life and can be retrieved by their relationship to other events as well as by content. Describes the background to event-based retrieval and describes a prototype graphical event-based retrieval system, developed at Kent University, UK, employing the hive event browser
    Source
    Journal of information science. 22(1996) no.5, S.357-366
  14. Boeder, R.: Database applications for libraries : an introduction (1996) 0.05
    0.04975329 = product of:
      0.14925987 = sum of:
        0.097760476 = weight(_text_:relationship in 340) [ClassicSimilarity], result of:
          0.097760476 = score(doc=340,freq=2.0), product of:
            0.2292412 = queryWeight, product of:
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.047513504 = queryNorm
            0.42645246 = fieldWeight in 340, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.0625 = fieldNorm(doc=340)
        0.05149939 = weight(_text_:22 in 340) [ClassicSimilarity], result of:
          0.05149939 = score(doc=340,freq=2.0), product of:
            0.16638419 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.047513504 = queryNorm
            0.30952093 = fieldWeight in 340, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=340)
      0.33333334 = coord(2/6)
    
    Abstract
    Overviews database applications in libraries. Explains the 2 basic types of databases, flat-file and relational, and outlines the uses and advantages of relational systems. Librarians can utilise a number of software packages for database management and design a database in cooperation with a programmer. The librarian needs to be involved in the conceptual and external level of database design. Offers advice on finding a database designer. Outlines ideas for library-related applications of database software
    Source
    Colorado libraries. 22(1996) no.1, S.25-28
  15. André, A.-S.: ¬L'¬information culturelle : acteurs, usages et enjeux pour les professionels de l'information (1997) 0.05
    0.04975329 = product of:
      0.14925987 = sum of:
        0.097760476 = weight(_text_:relationship in 885) [ClassicSimilarity], result of:
          0.097760476 = score(doc=885,freq=2.0), product of:
            0.2292412 = queryWeight, product of:
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.047513504 = queryNorm
            0.42645246 = fieldWeight in 885, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.0625 = fieldNorm(doc=885)
        0.05149939 = weight(_text_:22 in 885) [ClassicSimilarity], result of:
          0.05149939 = score(doc=885,freq=2.0), product of:
            0.16638419 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.047513504 = queryNorm
            0.30952093 = fieldWeight in 885, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=885)
      0.33333334 = coord(2/6)
    
    Abstract
    A summary of a thesis based on the supposition that in an era of increasing leisure more time is to be available for cultural activities and that analysis can lead to a better grasp of the concept of cultural information. Discusses: the relationship of government, cultural networks and the cultural engineering sector to information; existing kinds of cultural information, their use and the impact on them of new technologies; and the characteristics and role of information professionals in this sector
    Date
    1. 8.1996 22:01:00
  16. Regimbeau, G.: Acces thématiques aux oeuvres d'art contemporaines dans les banques de données (1998) 0.05
    0.04975329 = product of:
      0.14925987 = sum of:
        0.097760476 = weight(_text_:relationship in 2237) [ClassicSimilarity], result of:
          0.097760476 = score(doc=2237,freq=2.0), product of:
            0.2292412 = queryWeight, product of:
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.047513504 = queryNorm
            0.42645246 = fieldWeight in 2237, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.0625 = fieldNorm(doc=2237)
        0.05149939 = weight(_text_:22 in 2237) [ClassicSimilarity], result of:
          0.05149939 = score(doc=2237,freq=2.0), product of:
            0.16638419 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.047513504 = queryNorm
            0.30952093 = fieldWeight in 2237, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=2237)
      0.33333334 = coord(2/6)
    
    Abstract
    Discusses the possibilities and difficulties encountered when using a thematic index to search contemporary art databanks. Joconde and Videomuseum, 2 French databanks, are used as examples. The core problems found in the study are the methods and limits of indexing in both systems. A thematic index should be developed that is better adapted to 20th century art, based on the complementary and reciprocal relationship between text and image, and which fully exploits hypertext
    Date
    1. 8.1996 22:01:00
  17. Dempsey, L.: ¬The subject gateway : experiences and issues based on the emergence of the Resource Discovery Network (2000) 0.05
    0.04975329 = product of:
      0.14925987 = sum of:
        0.097760476 = weight(_text_:relationship in 628) [ClassicSimilarity], result of:
          0.097760476 = score(doc=628,freq=2.0), product of:
            0.2292412 = queryWeight, product of:
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.047513504 = queryNorm
            0.42645246 = fieldWeight in 628, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.0625 = fieldNorm(doc=628)
        0.05149939 = weight(_text_:22 in 628) [ClassicSimilarity], result of:
          0.05149939 = score(doc=628,freq=2.0), product of:
            0.16638419 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.047513504 = queryNorm
            0.30952093 = fieldWeight in 628, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=628)
      0.33333334 = coord(2/6)
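    The explain tree above can be checked by hand: under Lucene's ClassicSimilarity, each term clause contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm, fieldWeight = tf(freq) × idf × fieldNorm, and tf(freq) = sqrt(freq); the clause sum is then scaled by the coord factor. A minimal Python sketch (the function name is illustrative; the values are copied from the doc 628 tree above):

    ```python
    import math

    def clause_score(freq, idf, query_norm, field_norm):
        """Score of one term clause under Lucene ClassicSimilarity:
        queryWeight * fieldWeight, with queryWeight = idf * queryNorm
        and fieldWeight = sqrt(freq) * idf * fieldNorm."""
        query_weight = idf * query_norm                    # e.g. 4.824759 * 0.047513504 ~ 0.2292412
        field_weight = math.sqrt(freq) * idf * field_norm  # e.g. 1.4142135 * 4.824759 * 0.0625 ~ 0.42645246
        return query_weight * field_weight

    # Values from the explain tree for doc 628.
    w_relationship = clause_score(2.0, 4.824759, 0.047513504, 0.0625)  # ~0.097760476
    w_22 = clause_score(2.0, 3.5018296, 0.047513504, 0.0625)           # ~0.05149939
    total = (w_relationship + w_22) * (2 / 6)                          # coord(2/6)
    print(total)  # ~0.04975329
    ```

    The last digits can deviate slightly from the explain output because Lucene computes in 32-bit floats.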
    
    Abstract
    Charts the history and development of the UK's Resource Discovery Network, which brings together, under a common business, technical and service framework, a range of subject gateways and other services for the academic and research community. Considers its future relationship to other services and its position within the information ecology.
    Date
    22. 6.2002 19:36:13
  18. Carini, P.; Shepherd, K.: ¬The MARC standard and encoded archival description (2004) 0.05
    
    Abstract
    This case study details the evolution of descriptive practices and standards used in the Mount Holyoke College Archives and the Five College Finding Aids Access Project, discusses the relationship of Encoded Archival Description (EAD) and the MARC standard in reference to archival description, and addresses the challenges and opportunities of transferring data from one metadata standard to another. The study demonstrates that greater standardization in archival description allows archivists to respond more effectively to technological change.
    Source
    Library hi tech. 22(2004) no.1, S.18-27
  19. Hillmann, D.I.: "Parallel universes" or meaningful relationships : envisioning a future for the OPAC and the net (1996) 0.05
    
    Abstract
    Over the past year, innumerable discussions on the relationship between traditional library OPACs and the newly burgeoning World Wide Web have occurred in many libraries and in virtually every library-related discussion list. Rumors and speculation abound, some insisting that SGML will replace USMARC "soon," others maintaining that OPACs that haven't migrated to the Web will go the way of the dinosaurs.
    Source
    Cataloging and classification quarterly. 22(1996) nos.3/4, S.97-103
  20. Neuer internationaler Standard für Thesauri veröffentlicht (2012) 0.05
    
    Abstract
    ISO 25964-1 is the new international standard for thesauri, replacing ISO 2788 and ISO 5964. Published under the full title "Information and documentation - Thesauri and interoperability with other vocabularies - Part 1: Thesauri for information retrieval", ISO 25964-1 covers monolingual and multilingual thesauri as well as today's requirements for interoperability, networking and data sharing. The standard addresses the following topics: construction of monolingual and multilingual thesauri; clarification of the distinction between terms and concepts and of their interrelationships; guidelines on facet analysis and on the layout and presentation of thesauri; guidelines for using thesauri in computerized and networked systems; a best-practice model for managing thesaurus development and maintenance; guidance on thesaurus management software; a data model for monolingual and multilingual thesauri; and concise recommendations for exchange formats and protocols. An XML schema for data exchange purposes has been derived from the data model and is freely available at http://www.niso.org/schemas/iso25964/.

Languages

Types

  • el 93
  • b 34
  • p 1

Themes