Search (274 results, page 1 of 14)

  • theme_ss:"Klassifikationssysteme im Online-Retrieval"
  1. Allen, R.B.: Navigating and searching in digital library catalogs (1994) 0.03
    0.029759422 = product of:
      0.10911788 = sum of:
        0.0070290747 = product of:
          0.014058149 = sum of:
            0.014058149 = weight(_text_:29 in 2414) [ClassicSimilarity], result of:
              0.014058149 = score(doc=2414,freq=2.0), product of:
                0.072342895 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.02056547 = queryNorm
                0.19432661 = fieldWeight in 2414, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2414)
          0.5 = coord(1/2)
        0.047574617 = weight(_text_:lecture in 2414) [ClassicSimilarity], result of:
          0.047574617 = score(doc=2414,freq=2.0), product of:
            0.13308205 = queryWeight, product of:
              6.4711404 = idf(docFreq=185, maxDocs=44218)
              0.02056547 = queryNorm
            0.35748336 = fieldWeight in 2414, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.4711404 = idf(docFreq=185, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2414)
        0.032426912 = weight(_text_:notes in 2414) [ClassicSimilarity], result of:
          0.032426912 = score(doc=2414,freq=2.0), product of:
            0.10987139 = queryWeight, product of:
              5.3425174 = idf(docFreq=574, maxDocs=44218)
              0.02056547 = queryNorm
            0.29513517 = fieldWeight in 2414, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.3425174 = idf(docFreq=574, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2414)
        0.0029728229 = weight(_text_:in in 2414) [ClassicSimilarity], result of:
          0.0029728229 = score(doc=2414,freq=4.0), product of:
            0.027974274 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02056547 = queryNorm
            0.10626988 = fieldWeight in 2414, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2414)
        0.015173013 = weight(_text_:computer in 2414) [ClassicSimilarity], result of:
          0.015173013 = score(doc=2414,freq=2.0), product of:
            0.0751567 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.02056547 = queryNorm
            0.20188503 = fieldWeight in 2414, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2414)
        0.0039414368 = product of:
          0.0078828735 = sum of:
            0.0078828735 = weight(_text_:science in 2414) [ClassicSimilarity], result of:
              0.0078828735 = score(doc=2414,freq=2.0), product of:
                0.0541719 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.02056547 = queryNorm
                0.1455159 = fieldWeight in 2414, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2414)
          0.5 = coord(1/2)
      0.27272728 = coord(6/22)
    
    Date
    11. 8.2020 18:29:56
    Series
    Lecture notes in computer science ; Vol. 916
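The score breakdowns above follow Lucene's ClassicSimilarity (TF-IDF) explain format: each matching term contributes queryWeight * fieldWeight, where queryWeight = idf * queryNorm, fieldWeight = tf(freq) * idf * fieldNorm, and idf(docFreq, maxDocs) = 1 + ln(maxDocs / (docFreq + 1)). As a minimal sketch (not the search system's own code), the "lecture" contribution in entry 1 can be recomputed from the numbers shown:

```python
import math

def term_weight(freq, doc_freq, max_docs, field_norm, query_norm):
    """Lucene ClassicSimilarity term weight: queryWeight * fieldWeight."""
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # idf(docFreq, maxDocs)
    tf = math.sqrt(freq)                             # tf(freq)
    query_weight = idf * query_norm
    field_weight = tf * idf * field_norm
    return query_weight * field_weight

# "lecture" in doc 2414: freq=2, docFreq=185, maxDocs=44218,
# fieldNorm=0.0390625, queryNorm=0.02056547
w = term_weight(2.0, 185, 44218, 0.0390625, 0.02056547)
# recovers (up to float rounding) the 0.047574617 weight shown above
```

The same function reproduces the other leaf weights in the tree, e.g. the "_text_:29" term (docFreq=3565) yields approximately 0.014058149.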
  2. Saeed, H.; Chaudhry, A.S.: Using Dewey decimal classification scheme (DDC) for building taxonomies for knowledge organisation (2002) 0.02
    0.0150300255 = product of:
      0.08266514 = sum of:
        0.0473255 = weight(_text_:informatik in 4461) [ClassicSimilarity], result of:
          0.0473255 = score(doc=4461,freq=2.0), product of:
            0.104934774 = queryWeight, product of:
              5.1024737 = idf(docFreq=730, maxDocs=44218)
              0.02056547 = queryNorm
            0.4509992 = fieldWeight in 4461, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.1024737 = idf(docFreq=730, maxDocs=44218)
              0.0625 = fieldNorm(doc=4461)
        0.004756517 = weight(_text_:in in 4461) [ClassicSimilarity], result of:
          0.004756517 = score(doc=4461,freq=4.0), product of:
            0.027974274 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02056547 = queryNorm
            0.17003182 = fieldWeight in 4461, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=4461)
        0.024276821 = weight(_text_:computer in 4461) [ClassicSimilarity], result of:
          0.024276821 = score(doc=4461,freq=2.0), product of:
            0.0751567 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.02056547 = queryNorm
            0.32301605 = fieldWeight in 4461, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0625 = fieldNorm(doc=4461)
        0.0063062985 = product of:
          0.012612597 = sum of:
            0.012612597 = weight(_text_:science in 4461) [ClassicSimilarity], result of:
              0.012612597 = score(doc=4461,freq=2.0), product of:
                0.0541719 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.02056547 = queryNorm
                0.23282544 = fieldWeight in 4461, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4461)
          0.5 = coord(1/2)
      0.18181819 = coord(4/22)
    
    Abstract
     Terms drawn from the DDC indexes and the IEEE Web Thesaurus were merged with DDC hierarchies to build a taxonomy in the domain of computer science. When displayed as a directory structure using the shareware tool MyInfo, the resulting taxonomy appeared to be a promising categorisation tool that can facilitate browsing of information resources in an electronic environment.
    Field
    Informatik
  3. Place, E.: Internationale Zusammenarbeit bei Internet Subject Gateways (1999) 0.01
    0.012441888 = product of:
      0.06843039 = sum of:
        0.018941889 = weight(_text_:und in 4189) [ClassicSimilarity], result of:
          0.018941889 = score(doc=4189,freq=16.0), product of:
            0.04558063 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02056547 = queryNorm
            0.41556883 = fieldWeight in 4189, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=4189)
        0.018941889 = weight(_text_:und in 4189) [ClassicSimilarity], result of:
          0.018941889 = score(doc=4189,freq=16.0), product of:
            0.04558063 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02056547 = queryNorm
            0.41556883 = fieldWeight in 4189, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=4189)
        0.0043691397 = weight(_text_:in in 4189) [ClassicSimilarity], result of:
          0.0043691397 = score(doc=4189,freq=6.0), product of:
            0.027974274 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02056547 = queryNorm
            0.1561842 = fieldWeight in 4189, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=4189)
        0.026177472 = sum of:
          0.009459447 = weight(_text_:science in 4189) [ClassicSimilarity], result of:
            0.009459447 = score(doc=4189,freq=2.0), product of:
              0.0541719 = queryWeight, product of:
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.02056547 = queryNorm
              0.17461908 = fieldWeight in 4189, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.046875 = fieldNorm(doc=4189)
          0.016718024 = weight(_text_:22 in 4189) [ClassicSimilarity], result of:
            0.016718024 = score(doc=4189,freq=2.0), product of:
              0.072016776 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.02056547 = queryNorm
              0.23214069 = fieldWeight in 4189, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4189)
      0.18181819 = coord(4/22)
    
    Abstract
     A considerable number of libraries in Europe are engaged in developing Internet subject gateways - a service intended to help users find high-quality Internet resources. Subject gateways such as SOSIG (The Social Science Information Gateway) have been available on the Internet for several years and offer an alternative to Internet search engines such as AltaVista and directories such as Yahoo. Significantly, subject gateways build on the skills, procedures and standards of the international library world and apply them to information from the Internet. This paper therefore argues that librarians are ideally placed to take a leading role in building search services for Internet resources, and that information gateways are one way of doing so. It outlines some of the subject gateway initiatives in Europe and describes the tools and technologies developed by the DESIRE project to support the development of new gateways in other countries. It also discusses how IMesh, a group for gateways from around the world, is pursuing an international strategy for gateways and attempting to develop standards for implementing this project.
    Date
    22. 6.2002 19:35:09
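At the top of each entry, the final score is the sum of the matched clause weights scaled by a coordination factor, coord(m/n) = m/n, the fraction of the n query clauses that matched (nested coord(1/2) factors scale sub-sums the same way). A short sketch, using the figures from entry 3 above as sample inputs:

```python
def document_score(clause_weights, total_query_clauses):
    """Sum of matched clause weights times Lucene's coord(matched/total)."""
    matched = len(clause_weights)
    coord = matched / total_query_clauses
    return sum(clause_weights) * coord

# Entry 3: four matched clauses out of 22
weights = [0.018941889, 0.018941889, 0.0043691397, 0.026177472]
score = document_score(weights, 22)
# sum = 0.06843039, scaled by coord(4/22) = 0.18181819,
# recovering the 0.012441888 top-level score shown above
```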
  4. Lösse, M.; Svensson, L.: "Classification at a Crossroad" : Internationales UDC-Seminar 2009 in Den Haag, Niederlande (2010) 0.01
    0.011838465 = product of:
      0.052089244 = sum of:
        0.013393938 = weight(_text_:und in 4379) [ClassicSimilarity], result of:
          0.013393938 = score(doc=4379,freq=8.0), product of:
            0.04558063 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02056547 = queryNorm
            0.29385152 = fieldWeight in 4379, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=4379)
        0.013393938 = weight(_text_:und in 4379) [ClassicSimilarity], result of:
          0.013393938 = score(doc=4379,freq=8.0), product of:
            0.04558063 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02056547 = queryNorm
            0.29385152 = fieldWeight in 4379, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=4379)
        0.008434889 = product of:
          0.016869778 = sum of:
            0.016869778 = weight(_text_:29 in 4379) [ClassicSimilarity], result of:
              0.016869778 = score(doc=4379,freq=2.0), product of:
                0.072342895 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.02056547 = queryNorm
                0.23319192 = fieldWeight in 4379, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4379)
          0.5 = coord(1/2)
        0.0050450475 = weight(_text_:in in 4379) [ClassicSimilarity], result of:
          0.0050450475 = score(doc=4379,freq=8.0), product of:
            0.027974274 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02056547 = queryNorm
            0.18034597 = fieldWeight in 4379, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=4379)
        0.011821429 = product of:
          0.023642858 = sum of:
            0.023642858 = weight(_text_:22 in 4379) [ClassicSimilarity], result of:
              0.023642858 = score(doc=4379,freq=4.0), product of:
                0.072016776 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02056547 = queryNorm
                0.32829654 = fieldWeight in 4379, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4379)
          0.5 = coord(1/2)
      0.22727273 = coord(5/22)
    
    Abstract
     On 29 and 30 October 2009, the second international UDC seminar on the theme "Classification at a Crossroad" took place at the Royal Library in The Hague. As with the first conference of this kind in 2007, it was organised by the UDC Consortium (UDCC). This year's event focused on subject access to the World Wide Web through better use of classifications (in particular, of course, the UDC), including user-friendly representations of information and knowledge. Standards, new technologies and services, semantic search and multilingual access also played a role. 135 participants from 35 countries came to The Hague for the event. With 22 papers from 14 different countries, the programme covered a broad range, with Great Britain most strongly represented with five contributions. On both conference days, the day's themes were set by the opening lectures and then explored further in a total of six thematic sessions.
    Date
    22. 1.2010 15:06:54
  5. Sandner, M.; Jahns, Y.: Kurzbericht zum DDC-Übersetzer- und Anwendertreffen bei der IFLA-Konferenz 2005 in Oslo, Norwegen (2005) 0.01
    0.010933134 = product of:
      0.048105787 = sum of:
        0.014617028 = weight(_text_:und in 4406) [ClassicSimilarity], result of:
          0.014617028 = score(doc=4406,freq=28.0), product of:
            0.04558063 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02056547 = queryNorm
            0.3206851 = fieldWeight in 4406, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4406)
        0.014617028 = weight(_text_:und in 4406) [ClassicSimilarity], result of:
          0.014617028 = score(doc=4406,freq=28.0), product of:
            0.04558063 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02056547 = queryNorm
            0.3206851 = fieldWeight in 4406, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4406)
        0.004920352 = product of:
          0.009840704 = sum of:
            0.009840704 = weight(_text_:29 in 4406) [ClassicSimilarity], result of:
              0.009840704 = score(doc=4406,freq=2.0), product of:
                0.072342895 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.02056547 = queryNorm
                0.13602862 = fieldWeight in 4406, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=4406)
          0.5 = coord(1/2)
        0.005505745 = weight(_text_:in in 4406) [ClassicSimilarity], result of:
          0.005505745 = score(doc=4406,freq=28.0), product of:
            0.027974274 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02056547 = queryNorm
            0.19681457 = fieldWeight in 4406, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4406)
        0.008445636 = product of:
          0.016891273 = sum of:
            0.016891273 = weight(_text_:22 in 4406) [ClassicSimilarity], result of:
              0.016891273 = score(doc=4406,freq=6.0), product of:
                0.072016776 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02056547 = queryNorm
                0.23454636 = fieldWeight in 4406, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=4406)
          0.5 = coord(1/2)
      0.22727273 = coord(5/22)
    
    Content
     "On 16 August 2005, the annual meeting of the DDC translators and the worldwide Dewey user institutions (national libraries, producers of national bibliographies) took place in Oslo as part of this year's IFLA conference. The German translation, already completed in summer 2005, will be available in print in four volumes at the end of the year, published by K. G. Saur in Munich (ISBN 3-598-11651-9), and will be accompanied in 2006 by the DDC textbook (ISBN 3-598-11748-5), likewise translated into German for the first time. Plans for newly starting translations of DDC 22 exist for the following languages: Arabic (with the growing need to revise class 200, Religion), French (a new abridged edition 14 appeared most recently; a four-volume print edition and a French web version are now envisaged), Swedish, and Vietnamese (for which a version of the German translation tool adapted to the language and script will be used).
     The newest first: the editors of the DDC presented a new information platform, "025.431: The Dewey blog", available since the beginning of July at http://ddc.typepad.com/. Also new is OCLC's five-language "DeweyBrowser" with a colour-coded navigation system; the prototype already opens onto a catalogue of 125,000 e-books and can be tried out at http://ddcresearch.oclc.org/ebooks/fileServer. Since April 2005, OCLC has offered a new current-awareness service for the DDC with different focuses: Dewey Mappings, Dewey News, DeweyTips, Dewey Updates, and Deweyjournal (the last covering topics from all four areas); subscriptions at http://www.oclc.org/dewey/syndicated/rss.htm. Important for open-shelf arrangements: the segmentation of Dewey numbers has been reduced. As of September 2005, LoC assigns only a single segmentation mark, namely at the point where the respective number ends in the English abridged edition. The beginning of a number component from Table 1, Standard Subdivisions, is thus no longer marked. For building shelf marks, the Dewey Cutter program is available for download at www.oclc.org/dewey/support/program.
     In general: unlike earlier new editions of the standard edition, DDC 22 is an edition without a general revision of an entire class. It does, however, contain numerous changes and expansions in almost all disciplines and in many auxiliary tables. A special edition of class 200, Religion, has also appeared. All these innovations are reflected in the current abridged edition of DDC 22 (14, from 2004). The electronic version likewise exists in a full variant (WebDewey) and an abridged variant (Abridged WebDewey) and is always at the latest state of the classification. A tutorial for using WebDewey is available at www.oclc.org/dewey/resources/tutorial. In this electronic version, the index contains far more synthesised numbers and verbal access points (derived from the title data of WorldCat) than the print edition, as well as mappings to the most recent authority records from LCSH and MeSH. Current developments: the membership of the EPC (Editorial Policy Committee) changed in the past year. This highest body of the DDC has set priorities for the current work plan. It was agreed that major proposed changes will in future be put up for professional discussion via the Dewey website, as in a consultation procedure: www.oclc.org/dewey/discussion/."
    Date
    6.11.2005 12:27:29
    Source
    Mitteilungen der Vereinigung Österreichischer Bibliothekarinnen und Bibliothekare. 58(2005) H.3, S.89-91
  6. Satyapal, B.G.; Satyapal, N.S.: SATSAN AUTOMATRIX Version 1 : a computer programme for synthesis of Colon class number according to the postulational approach (2006) 0.01
    0.010889276 = product of:
      0.047912814 = sum of:
        0.007892453 = weight(_text_:und in 1492) [ClassicSimilarity], result of:
          0.007892453 = score(doc=1492,freq=4.0), product of:
            0.04558063 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02056547 = queryNorm
            0.17315367 = fieldWeight in 1492, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1492)
        0.007892453 = weight(_text_:und in 1492) [ClassicSimilarity], result of:
          0.007892453 = score(doc=1492,freq=4.0), product of:
            0.04558063 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02056547 = queryNorm
            0.17315367 = fieldWeight in 1492, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1492)
        0.0070290747 = product of:
          0.014058149 = sum of:
            0.014058149 = weight(_text_:29 in 1492) [ClassicSimilarity], result of:
              0.014058149 = score(doc=1492,freq=2.0), product of:
                0.072342895 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.02056547 = queryNorm
                0.19432661 = fieldWeight in 1492, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1492)
          0.5 = coord(1/2)
        0.00364095 = weight(_text_:in in 1492) [ClassicSimilarity], result of:
          0.00364095 = score(doc=1492,freq=6.0), product of:
            0.027974274 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02056547 = queryNorm
            0.1301535 = fieldWeight in 1492, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1492)
        0.02145788 = weight(_text_:computer in 1492) [ClassicSimilarity], result of:
          0.02145788 = score(doc=1492,freq=4.0), product of:
            0.0751567 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.02056547 = queryNorm
            0.28550854 = fieldWeight in 1492, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1492)
      0.22727273 = coord(5/22)
    
    Abstract
     Describes the features and capabilities of the software SATSAN AUTOMATRIX version 1 for semi-automatic synthesis of a Colon Class Number (CCN) for a given subject according to the postulational approach formulated by S.R. Ranganathan. The present AutoMatrix version 1 gives the user more facilities to carry out facet analysis of a subject (simple, compound, or complex) preparatory to synthesising the corresponding CCN. The software also enables searching for and using previously constructed class numbers automatically, as well as maintenance and use of databases of the CC Index, facet formulae and CC schedules for subjects going with different Basic Subjects. The paper begins with a brief account of the authors' consultations with, and directions received from, Prof. A. Neelameghan in the course of developing the software. Oracle 8 and VB6 have been used in writing the programmes, but for operating SATSAN it is not necessary for users to be proficient in VB6 or Oracle 8. Any computer-literate person with a basic knowledge of Microsoft Word will be able to use this application software.
    Date
    29. 2.2008 16:33:52
  7. Tunkelang, D.: Faceted search (2009) 0.01
    0.010509195 = product of:
      0.05780057 = sum of:
        0.038059693 = weight(_text_:lecture in 26) [ClassicSimilarity], result of:
          0.038059693 = score(doc=26,freq=2.0), product of:
            0.13308205 = queryWeight, product of:
              6.4711404 = idf(docFreq=185, maxDocs=44218)
              0.02056547 = queryNorm
            0.2859867 = fieldWeight in 26, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.4711404 = idf(docFreq=185, maxDocs=44218)
              0.03125 = fieldNorm(doc=26)
        0.004449314 = weight(_text_:in in 26) [ClassicSimilarity], result of:
          0.004449314 = score(doc=26,freq=14.0), product of:
            0.027974274 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02056547 = queryNorm
            0.15905021 = fieldWeight in 26, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=26)
        0.0121384105 = weight(_text_:computer in 26) [ClassicSimilarity], result of:
          0.0121384105 = score(doc=26,freq=2.0), product of:
            0.0751567 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.02056547 = queryNorm
            0.16150802 = fieldWeight in 26, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.03125 = fieldNorm(doc=26)
        0.0031531493 = product of:
          0.0063062985 = sum of:
            0.0063062985 = weight(_text_:science in 26) [ClassicSimilarity], result of:
              0.0063062985 = score(doc=26,freq=2.0), product of:
                0.0541719 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.02056547 = queryNorm
                0.11641272 = fieldWeight in 26, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.03125 = fieldNorm(doc=26)
          0.5 = coord(1/2)
      0.18181819 = coord(4/22)
    
    Abstract
    We live in an information age that requires us, more than ever, to represent, access, and use information. Over the last several decades, we have developed a modern science and technology for information retrieval, relentlessly pursuing the vision of a "memex" that Vannevar Bush proposed in his seminal article, "As We May Think." Faceted search plays a key role in this program. Faceted search addresses weaknesses of conventional search approaches and has emerged as a foundation for interactive information retrieval. User studies demonstrate that faceted search provides more effective information-seeking support to users than best-first search. Indeed, faceted search has become increasingly prevalent in online information access systems, particularly for e-commerce and site search. In this lecture, we explore the history, theory, and practice of faceted search. Although we cannot hope to be exhaustive, our aim is to provide sufficient depth and breadth to offer a useful resource to both researchers and practitioners. Because faceted search is an area of interest to computer scientists, information scientists, interface designers, and usability researchers, we do not assume that the reader is a specialist in any of these fields. Rather, we offer a self-contained treatment of the topic, with an extensive bibliography for those who would like to pursue particular aspects in more depth.
  8. Place, E.: International collaboration on Internet subject gateways (2000) 0.01
    
    Abstract
    A number of libraries in Europe are engaged in developing Internet subject gateways - services designed to help users find high-quality Internet resources. Subject gateways such as SOSIG (The Social Science Information Gateway) have been available on the Internet for several years and offer an alternative to Internet search engines such as AltaVista and directories such as Yahoo. Significantly, subject gateways build on the skills, practices, and standards of the international library community and apply them to information from the Internet. This paper therefore argues that librarians are ideally placed to take a leading role in building search services for Internet resources, and that information gateways are one way of doing so. It outlines some of the subject gateway initiatives in Europe and describes the tools and technologies developed by the DESIRE project to support the development of new gateways in other countries. It also discusses how IMesh, a group for gateways from around the world, is pursuing an international strategy for gateways and attempting to develop standards for its implementation.
    Date
    22. 6.2002 19:35:35
  9. Sydler, J.-P.: UDC-Automatisierung und ihre Folgerungen (1978) 0.01
    
    Abstract
    The first automated documentation searches were based on the presence of search words in the bibliographic records. To simplify the users' work, synonyms were linked by a linguistic procedure and related concepts by a classification scheme. Thus thesauri arose, which allow searching by concepts rather than by words. The underlying classification should be decimal in order to support the decision process at the screen. The UDC, as a possible solution, could unify the heterogeneous classification schemes of the various thesauri. The user would then work only with the linguistic side, while the computer would use the systematic part.
    Source
    Kooperation in der Klassifikation II. Proc. der Sekt.4-6 der 2. Fachtagung der Gesellschaft für Klassifikation, Frankfurt-Hoechst, 6.-7.4.1978. Bearb.: W. Dahlberg
  10. Dack, D.: Australian attends conference on Dewey (1989) 0.01
    
    Abstract
    Edited version of a report to the Australian Library and Information Association on the Conference on classification theory in the computer age, Albany, New York, 18-19 Nov 88, and on the meeting of the Dewey Editorial Policy Committee which preceded it. The focus of the Editorial Policy Committee meeting lay in the following areas: browsing; potential for improved subject access; system design; potential conflict between shelf location and information retrieval; and users. At the Conference on classification theory in the computer age the following papers were presented: Applications of artificial intelligence to bibliographic classification, by Irene Travis; Automation and classification, by Elaine Svenonius; Subject classification and language processing for retrieval in large data bases, by Diana Scott; Implications for information processing, by Carol Mandel; and Implications for information science education, by Richard Halsey.
    Date
    8.11.1995 11:52:22
  11. Buxton, A.B.: Computer searching of UDC numbers (1990) 0.01
    
    Footnote
    Cf. also the contributions by Hermes / Bischoff and Gödert, as well as the DORS project in connection with the DDC.
  12. Gödert, W.: Systematisches Suchen und Orientierung in Datenbanken (1995) 0.01
    
    Imprint
    Oldenburg : Bibliotheks- und Informationssystem
    Source
    Zwischen Schreiben und Lesen: Perspektiven für Bibliotheken, Wissenschaft und Kultur. Festschrift zum 60. Geburtstag von Hermann Havekost. Hrsg. von H.-J. Wätjen
  13. Liu, S.: Decomposing DDC synthesized numbers (1996) 0.01
    
    Abstract
    Much literature has been written speculating upon how classification can be used in online catalogs to improve information retrieval. While some empirical studies have explored whether the direct use of traditional classification schemes designed for a manual environment is effective and efficient in the online environment, none has manipulated these manual classifications in such a way as to take full advantage of the power of both the classification and the computer. It has been suggested by some authors, such as Wajenberg and Drabenstott, that this power could be realized if the individual components of synthesized DDC numbers could be identified and indexed. This paper looks at the feasibility of automatically decomposing DDC synthesized numbers and the implications of such decomposition for information retrieval. Based on an analysis of the instructions for synthesizing numbers in the main class Arts (700) and all DDC Tables, 17 decomposition rules were defined, 13 covering the Add Notes and four the Standard Subdivisions. 1,701 DDC synthesized numbers were decomposed by a computer system called DND (Dewey Number Decomposer), developed by the author. From the 1,701 numbers, 600 were randomly selected for examination by three judges, each evaluating 200 numbers. The decomposition success rate was 100%, and it was concluded that synthesized DDC numbers can be accurately decomposed automatically. The study has implications for information retrieval, expert systems for assigning DDC numbers, automatic indexing, switching language development, enhancing classifiers' work, teaching library school students, and providing quality control for DDC number assignments. These implications were explored using a prototype retrieval system.
  14. Bambey, D.: Thesauri und Klassifikationen im Netz : Neue Herausforderungen für klassische Werkzeuge (2000) 0.01
    
    Abstract
    The intensified discussion about qualitatively better search and indexing methods on the Internet has also meant that thesauri and classifications have again become a topic, and the subject of projects, among specialist information providers and in the academic library sector. Such swings in fortune are a familiar phenomenon, for methodological instruments have always fared badly in times of technological surges. When the technological possibilities then have to be critically reconsidered and the problems of quality assurance become apparent, the problem of reconciling technological procedures with subject- and content-related requirements inevitably moves back into the centre of interest. My remarks are directed above all at current problems of producing and retrieving information - or, more precisely, specialist information - at questions of quality assurance, and at the role that classifications and thesauri play, or could play, in this context. Particular attention is given to the aspect of user acceptance. The point "newer approaches" is explained in somewhat more detail using the example of linking different thesauri and classifications by means of so-called cross-concordances. In what follows I refer above all to the social sciences and in particular to education science. This is the subject background of the Fachinformationssystem Bildung and the Deutscher Bildungsserver, in whose context I am concerned with the problems addressed here.
    Series
    Gemeinsamer Kongress der Bundesvereinigung Deutscher Bibliotheksverbände e.V. (BDB) und der Deutschen Gesellschaft für Informationswissenschaft und Informationspraxis e.V. (DGI); Bd.1)(Tagungen der Deutschen Gesellschaft für Informationswissenschaft und Informationspraxis e.V.; Bd.3
    Source
    Information und Öffentlichkeit: 1. Gemeinsamer Kongress der Bundesvereinigung Deutscher Bibliotheksverbände e.V. (BDB) und der Deutschen Gesellschaft für Informationswissenschaft und Informationspraxis e.V. (DGI), Leipzig, 20.-23.3.2000. Zugleich 90. Deutscher Bibliothekartag, 52. Jahrestagung der Deutschen Gesellschaft für Informationswissenschaft und Informationspraxis e.V. (DGI). Hrsg.: G. Ruppelt u. H. Neißer
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  15. Reiner, U.: Automatische DDC-Klassifizierung von bibliografischen Titeldatensätzen (2009) 0.01
    
    Content
    Presentation slides for the talk given at the 98th Deutscher Bibliothekartag in Erfurt: Ein neuer Blick auf Bibliotheken; TK10: Information erschließen und recherchieren - Inhalte erschließen - mit neuen Tools
    Date
    22. 8.2009 12:54:24
  16. Alex, H.; Heiner-Freiling, M.: Melvil (2005) 0.01
    
    Abstract
    Starting in January 2006, Die Deutsche Bibliothek will launch a new web offering called Melvil, a result of its commitment to the DDC and the project DDC Deutsch. The web service is based on the translation of the 22nd edition of the DDC, which appears as a print edition from K. G. Saur Verlag in October 2005. Beyond this, it offers features that support classifiers in their work and, for the first time, enable a verbal search over DDC-indexed titles for end users. The Melvil web service comprises three applications: MelvilClass, MelvilSearch, and MelvilSoap.
  17. Qualität in der Inhaltserschließung (2021) 0.01
    
    Abstract
    The 70th volume of the BIPRA series deals with quality in subject indexing in the context of established procedures and technological innovations. When heterogeneous products of different methods and systems meet, minimum requirements for the quality of subject indexing must be defined. The question of quality is currently being discussed intensively in various contexts and is taken up in this volume. Authors active in this field describe, each from their own perspective, different aspects of metadata, authority data, formats, indexing procedures, and indexing policy. The volume is intended as a guide and a stimulus for the discussion of quality in subject indexing.
    Content
    Inhalt: Editorial - Michael Franke-Maier, Anna Kasprzik, Andreas Ledl und Hans Schürmann Qualität in der Inhaltserschließung - Ein Überblick aus 50 Jahren (1970-2020) - Andreas Ledl Fit for Purpose - Standardisierung von inhaltserschließenden Informationen durch Richtlinien für Metadaten - Joachim Laczny Neue Wege und Qualitäten - Die Inhaltserschließungspolitik der Deutschen Nationalbibliothek - Ulrike Junger und Frank Scholze Wissensbasen für die automatische Erschließung und ihre Qualität am Beispiel von Wikidata - Lydia Pintscher, Peter Bourgonje, Julián Moreno Schneider, Malte Ostendorff und Georg Rehm Qualitätssicherung in der GND - Esther Scheven Qualitätskriterien und Qualitätssicherung in der inhaltlichen Erschließung - Thesenpapier des Expertenteams RDA-Anwendungsprofil für die verbale Inhaltserschließung (ET RAVI) Coli-conc - Eine Infrastruktur zur Nutzung und Erstellung von Konkordanzen - Uma Balakrishnan, Stefan Peters und Jakob Voß Methoden und Metriken zur Messung von OCR-Qualität für die Kuratierung von Daten und Metadaten - Clemens Neudecker, Karolina Zaczynska, Konstantin Baierer, Georg Rehm, Mike Gerber und Julián Moreno Schneider Datenqualität als Grundlage qualitativer Inhaltserschließung - Jakob Voß Bemerkungen zu der Qualitätsbewertung von MARC-21-Datensätzen - Rudolf Ungváry und Péter Király Named Entity Linking mit Wikidata und GND - Das Potenzial handkuratierter und strukturierter Datenquellen für die semantische Anreicherung von Volltexten - Sina Menzel, Hannes Schnaitter, Josefine Zinck, Vivien Petras, Clemens Neudecker, Kai Labusch, Elena Leitner und Georg Rehm Ein Protokoll für den Datenabgleich im Web am Beispiel von OpenRefine und der Gemeinsamen Normdatei (GND) - Fabian Steeg und Adrian Pohl Verbale Erschließung in Katalogen und Discovery-Systemen - Überlegungen zur Qualität - Heidrun Wiesenmüller Inhaltserschließung für Discovery-Systeme gestalten - Jan Frederik Maas Evaluierung von Verschlagwortung im Kontext des Information Retrievals - Christian Wartena und Koraljka Golub Die Qualität der Fremddatenanreicherung FRED - Cyrus Beck Quantität als Qualität - Was die Verbünde zur Verbesserung der Inhaltserschließung beitragen können - Rita Albrecht, Barbara Block, Mathias Kratzer und Peter Thiessen Hybride Künstliche Intelligenz in der automatisierten Inhaltserschließung - Harald Sack
    Footnote
    Cf.: https://www.degruyter.com/document/doi/10.1515/9783110691597/html. DOI: https://doi.org/10.1515/9783110691597. Review in: Information - Wissenschaft und Praxis 73(2022) H.2-3, S.131-132 (B. Lorenz u. V. Steyer). Further review in: o-bib 9(2022) Nr.3 (Martin Völkl) [https://www.o-bib.de/bib/article/view/5843/8714].
    Series
    Bibliotheks- und Informationspraxis; 70
  18. Classification theory in the computer age : Conversations across the disciplines. Proceedings from the Conference, Nov. 18-19, 1988, Albany, New York (1989)
    Abstract
    Proceedings of a conference addressing issues in classification theory and practice, especially oriented towards online environments.
    Content
    Enthält die Beiträge: D. BATTY: The future of DDC in the perspective of current classification research; I. DAHLBERG: Concept and definition theory; I.L. TRAVIS: Application of artificial intelligence to bibliographic classification; E. SVENONIUS: An ideal classification for an on-line catalog; K. MARKEY u. A.N. DEMEYER: The concept of common subject headings in subject outline searching; N. WILLIAMSON: The Library of Congress Classification in the Computer age; D.S. SCOTT: Subject classification and natural-language processing for retrieval in large databases; F. MIKSA: Shifting directions in LIS classification; C. MANDEL: A computer age classification: implications for library practice; R.S. HALSEY: Implications of classification theory in the computer age for educators of librarians and information science professionals; J. HOLIDAY: Subject access: new technology and philosophical perspectives
    Editor
    School of Information Science and Policy
  19. Gödert, W.: Facettenklassifikation im Online-Retrieval (1992)
    Abstract
    Facettenklassifikationen wurden bislang vorwiegend im Hinblick auf ihre Verwendungsmöglichkeiten in präkombinierten systematischen Katalogen bzw. Bibliographien betrachtet, nicht so sehr unter dem Aspekt eines möglichen Einsatzes in postkoordinierenden Retrievalsystemen. Im vorliegenden Beitrag soll nachgewiesen werden, daß Facettenklassifikationen anderen Techniken des Online Retrievals überlegen sein können. Hierzu sollten Begriffs- und Facettenanalyse mit einem strukturabbildenden Notationssystem kombiniert werden, um mit Hilfe Boolescher Operatoren (zur Verknüpfung von Facetten unabhängig von einer definierten Citation order) und Truncierung hierarchisch differenzierte Dokumentenmengen für komplexe Fragestellungen zu erhalten. Die Methode wird an zwei Beispielen illustriert: das erste nutzt eine kleine, von B. Buchanan entwickelte Klassifikation, das zweite das für Library and Information Science Abstracts (LISA) verwendete Klassifikationssystem. Weiter wird am Beispiel PRECIS diskutiert, welche Möglichkeiten des syntaktischen Retrievals Rollenoperatoren bieten können.
    Source
    Bibliothek: Forschung und Praxis. 16(1992) H.3, S.382-395
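    The retrieval technique the abstract describes can be sketched in a few lines: documents carry structure-mapping facet notations, Boolean AND combines facets independently of any citation order, and truncation (prefix matching) pulls in hierarchically subordinate classes. A minimal sketch; the notations and documents below are invented for illustration and are not taken from Buchanan's or LISA's actual schemes.

    ```python
    # Hypothetical facet notations assigned to documents; "Tc21" is a
    # hierarchical subdivision of "Tc2", mirrored in the notation string.
    documents = {
        "doc1": {"Tc2", "Mf1"},
        "doc2": {"Tc21", "Mf1"},
        "doc3": {"Tc21"},
    }

    def match(notation: str, query: str, truncated: bool) -> bool:
        """A truncated query matches a notation and all its subdivisions."""
        return notation.startswith(query) if truncated else notation == query

    def search(required: list[tuple[str, bool]]) -> set[str]:
        """AND-combine facet queries; each query is (notation, truncate?)."""
        return {
            doc for doc, notations in documents.items()
            if all(any(match(n, q, t) for n in notations) for q, t in required)
        }

    # "Tc2" truncated AND "Mf1" exact: finds doc1 and doc2, excludes doc3
    hits = search([("Tc2", True), ("Mf1", False)])
    ```

    Because each facet is matched independently, the order in which the query names the facets is irrelevant, which is the post-coordinate advantage over a fixed citation order in a precombined notation.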
  20. Neelameghan, A.: S.R. Ranganathan's general theory of knowledge classification in designing, indexing and retrieving from specialised databases (1997)
    Abstract
    Summarizes some experiences of the application of the principles and postulates of S.R. Ranganathan's General Theory of Knowledge Classification, incorporating the freely faceted approach and analytico-synthetic methods, to the design and development of specialized databases, including indexing, user interfaces and retrieval. Enumerates some of the earlier instances of the facet method in machine based systems, beginning with Hollerith's punched card system for the data processing of the US Census. Elaborates on Ranganathan's holistic approach to information systems and services provided by his normative principles. Notes similarities between the design of databases and faceted classification systems. Examples from working systems are given to demonstrate the usefulness of selected canons and principles of classification and the analytico-synthetic methodology to database design. The examples are mostly operational database systems developed using UNESCO's Micro CDS/ISIS software
    Source
    Library science with a slant to documentation and information studies. 34(1997) no.1, S.3-53
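    As a rough illustration of the analytico-synthetic method the abstract refers to, a compound class number can be synthesized by citing facet values in a fixed citation order (Ranganathan's PMEST). A minimal sketch under stated assumptions: the connector symbols follow common Colon Classification usage, but the facet values themselves are invented.

    ```python
    # Facets cited in PMEST order: Personality, Matter, Energy, Space, Time.
    CITATION_ORDER = ["personality", "matter", "energy", "space", "time"]

    # Connector symbol prefixed to each facet's notation (illustrative).
    CONNECTOR = {"personality": "", "matter": ";", "energy": ":",
                 "space": ".", "time": "'"}

    def synthesize(facets: dict[str, str]) -> str:
        """Join the facet notations present, in citation order."""
        parts = []
        for facet in CITATION_ORDER:
            if facet in facets:
                parts.append(CONNECTOR[facet] + facets[facet])
        return "".join(parts)

    # A record indexed with three facets synthesizes to one class number.
    number = synthesize({"personality": "L", "energy": "4", "time": "N97"})
    # number == "L:4'N97"
    ```

    The analysis step (decomposing a subject into facets) and the synthesis step (citing them in order) are exactly the pairing the abstract notes as shared between faceted classification and database record design.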

Types

  • a 227
  • el 33
  • m 11
  • s 9
  • h 2
  • p 2
  • r 2
  • x 2