Search (120 results, page 1 of 6)

  • × theme_ss:"Klassifikationssysteme im Online-Retrieval"
  • × type_ss:"a"
  • × year_i:[1990 TO 2000}
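The three active facet filters above map naturally onto Solr `fq` (filter query) parameters; note that `[1990 TO 2000}` is standard Lucene range syntax for an inclusive lower and exclusive upper bound (1990 ≤ year < 2000). A minimal sketch of the underlying request, assuming a local Solr endpoint and core name that are purely illustrative:

```python
from urllib.parse import urlencode

# Hypothetical endpoint and core; the real service URL is an assumption.
base_url = "http://localhost:8983/solr/documents/select"

params = [
    ("q", "*:*"),
    # The three active facet filters shown above:
    ("fq", 'theme_ss:"Klassifikationssysteme im Online-Retrieval"'),
    ("fq", 'type_ss:"a"'),
    # [inclusive TO exclusive} -- i.e. 1990 <= year_i < 2000
    ("fq", "year_i:[1990 TO 2000}"),
    ("rows", "20"),   # 120 results over 6 pages = 20 per page
    ("start", "0"),   # offset 0 = page 1
]

url = base_url + "?" + urlencode(params)
print(url)
```

Repeating the `fq` key is how Solr combines multiple filters conjunctively, which matches the "×" (removable) filter chips in the header.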
  1. Gödert, W.: Systematisches Suchen und Orientierung in Datenbanken (1995) 0.03
    0.033442385 = product of:
      0.06688477 = sum of:
        0.06406432 = weight(_text_:von in 1465) [ClassicSimilarity], result of:
          0.06406432 = score(doc=1465,freq=4.0), product of:
            0.12806706 = queryWeight, product of:
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.04800207 = queryNorm
            0.5002404 = fieldWeight in 1465, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.09375 = fieldNorm(doc=1465)
        0.002820454 = product of:
          0.008461362 = sum of:
            0.008461362 = weight(_text_:a in 1465) [ClassicSimilarity], result of:
              0.008461362 = score(doc=1465,freq=2.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.15287387 = fieldWeight in 1465, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1465)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Source
    Zwischen Schreiben und Lesen: Perspektiven für Bibliotheken, Wissenschaft und Kultur. Festschrift zum 60. Geburtstag von Hermann Havekost. Hrsg. von H.-J. Wätjen
    Type
    a
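The score tree above follows Lucene's ClassicSimilarity (TF-IDF) formula: for each matching term, fieldWeight = √freq × idf × fieldNorm, queryWeight = idf × queryNorm, and the term score is their product; term scores are then summed and multiplied by a coord factor (fraction of query clauses matched). A small sketch that reproduces the arithmetic of entry 1 from the numbers shown:

```python
import math

def field_weight(freq, idf, field_norm):
    # ClassicSimilarity term frequency component: tf(freq) = sqrt(freq)
    return math.sqrt(freq) * idf * field_norm

def term_score(freq, idf, query_norm, field_norm):
    query_weight = idf * query_norm          # e.g. 0.12806706 for "von"
    return query_weight * field_weight(freq, idf, field_norm)

query_norm = 0.04800207

# Term "von" in doc 1465: freq=4, idf=2.6679487, fieldNorm=0.09375
von = term_score(4.0, 2.6679487, query_norm, 0.09375)   # ≈ 0.06406432

# Term "a": freq=2, idf=1.153047, same fieldNorm, nested coord(1/3)
a = term_score(2.0, 1.153047, query_norm, 0.09375) * (1 / 3)

# Top-level coord(2/4): two of four query clauses matched this document
total = (von + a) * (2 / 4)
print(total)   # ≈ 0.033442385, matching the explain output above
```

The small discrepancies in the last decimal places against the explain tree come from Lucene computing in 32-bit floats while Python uses doubles.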
  2. Kluck, M.: Weiterentwicklung der Instrumente für die inhaltliche Erschließung und für Recherchen (1996) 0.03
    0.03114036 = product of:
      0.06228072 = sum of:
        0.060400415 = weight(_text_:von in 4849) [ClassicSimilarity], result of:
          0.060400415 = score(doc=4849,freq=8.0), product of:
            0.12806706 = queryWeight, product of:
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.04800207 = queryNorm
            0.47163114 = fieldWeight in 4849, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.0625 = fieldNorm(doc=4849)
        0.0018803024 = product of:
          0.005640907 = sum of:
            0.005640907 = weight(_text_:a in 4849) [ClassicSimilarity], result of:
              0.005640907 = score(doc=4849,freq=2.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.10191591 = fieldWeight in 4849, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4849)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    For searches in the FORIS and SOLIS databases, the IZ offers various entry points that can be used individually or in combination, depending on the question at hand: bibliographic elements, such as names of authors or project leaders, publication years of documents or durations of projects, as well as content-oriented entry points, such as full-text searches, classifications, or subject headings

    Type
    a
  3. Koch, T.: Nutzung von Klassifikationssystemen zur verbesserten Beschreibung, Organisation und Suche von Internetressourcen (1998) 0.03
    0.03114036 = product of:
      0.06228072 = sum of:
        0.060400415 = weight(_text_:von in 1030) [ClassicSimilarity], result of:
          0.060400415 = score(doc=1030,freq=8.0), product of:
            0.12806706 = queryWeight, product of:
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.04800207 = queryNorm
            0.47163114 = fieldWeight in 1030, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.0625 = fieldNorm(doc=1030)
        0.0018803024 = product of:
          0.005640907 = sum of:
            0.005640907 = weight(_text_:a in 1030) [ClassicSimilarity], result of:
              0.005640907 = score(doc=1030,freq=2.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.10191591 = fieldWeight in 1030, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1030)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    In the early years of Internet services, classification was largely dispensed with. Many providers argued that it, like other metadata, had been rendered obsolete by full-text indexes. The tide has since turned: most of the large search services now offer a more or less sophisticated classification, and a number of Internet services use established library classification systems. Their areas of application, their advantages and disadvantages, and application examples are the subject of this article
    Type
    a
  4. Reisser, M.: ¬Die Darstellung begrifflicher Kontexte im Online-Retrieval (1995) 0.03
    0.030366883 = product of:
      0.060733765 = sum of:
        0.059088502 = weight(_text_:von in 934) [ClassicSimilarity], result of:
          0.059088502 = score(doc=934,freq=10.0), product of:
            0.12806706 = queryWeight, product of:
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.04800207 = queryNorm
            0.4613872 = fieldWeight in 934, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.0546875 = fieldNorm(doc=934)
        0.0016452647 = product of:
          0.004935794 = sum of:
            0.004935794 = weight(_text_:a in 934) [ClassicSimilarity], result of:
              0.004935794 = score(doc=934,freq=2.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.089176424 = fieldWeight in 934, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=934)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    The traditional methods of classificatory subject indexing of media content rely on the unambiguous assignment of an object (the media content) to a class. In the practice of online retrieval they therefore differ only insignificantly from the methods of verbal subject indexing (e.g. thesauri, SWD/RSWK). On the basis of the distinction drawn in subject-indexing theory between classificatory and verbal subject indexing, possible and already deployed methods are discussed that make the conceptual contexts of the classes usable in online retrieval for users of a library classification
    Source
    Aufbau und Erschließung begrifflicher Datenbanken: Beiträge zur bibliothekarischen Klassifikation. Eine Auswahl von Vorträgen der Jahrestagungen 1993 (Kaiserslautern) und 1994 (Oldenburg) der Gesellschaft für Klassifikation. Hrsg.: H. Havekost u. H.-J. Wätjen
    Type
    a
  5. Gödert, W.: Strukturierung von Klassifikationssystemen und Online-Retrieval (1995) 0.03
    0.027868655 = product of:
      0.05573731 = sum of:
        0.05338693 = weight(_text_:von in 922) [ClassicSimilarity], result of:
          0.05338693 = score(doc=922,freq=4.0), product of:
            0.12806706 = queryWeight, product of:
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.04800207 = queryNorm
            0.416867 = fieldWeight in 922, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.078125 = fieldNorm(doc=922)
        0.002350378 = product of:
          0.007051134 = sum of:
            0.007051134 = weight(_text_:a in 922) [ClassicSimilarity], result of:
              0.007051134 = score(doc=922,freq=2.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.12739488 = fieldWeight in 922, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.078125 = fieldNorm(doc=922)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Source
    Aufbau und Erschließung begrifflicher Datenbanken: Beiträge zur bibliothekarischen Klassifikation. Eine Auswahl von Vorträgen der Jahrestagungen 1993 (Kaiserslautern) und 1994 (Oldenburg) der Gesellschaft für Klassifikation. Hrsg.: H. Havekost u. H.-J. Wätjen
    Type
    a
  6. Zimmermann, H.H.: Zur Struktur und Nutzung von Klassifikationen im Bibliothekswesen : Beispiel der Klassifikation der Deutschen Bibliothek und der sog. Niederländischen Basisklassifikation (1994) 0.03
    0.027868655 = product of:
      0.05573731 = sum of:
        0.05338693 = weight(_text_:von in 6027) [ClassicSimilarity], result of:
          0.05338693 = score(doc=6027,freq=4.0), product of:
            0.12806706 = queryWeight, product of:
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.04800207 = queryNorm
            0.416867 = fieldWeight in 6027, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.078125 = fieldNorm(doc=6027)
        0.002350378 = product of:
          0.007051134 = sum of:
            0.007051134 = weight(_text_:a in 6027) [ClassicSimilarity], result of:
              0.007051134 = score(doc=6027,freq=2.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.12739488 = fieldWeight in 6027, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.078125 = fieldNorm(doc=6027)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Source
    Mehrwert von Information - Professionalisierung der Informationsarbeit: Proceedings des 4. Internationalen Symposiums für Informationswissenschaft (ISI'94), Graz, 2.-4. November 1994. Hrsg.: W. Rauch u.a
    Type
    a
  7. Buxton, A.B.: Computer searching of UDC numbers (1990) 0.02
    0.024060382 = product of:
      0.048120763 = sum of:
        0.04530031 = weight(_text_:von in 5406) [ClassicSimilarity], result of:
          0.04530031 = score(doc=5406,freq=2.0), product of:
            0.12806706 = queryWeight, product of:
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.04800207 = queryNorm
            0.35372335 = fieldWeight in 5406, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.09375 = fieldNorm(doc=5406)
        0.002820454 = product of:
          0.008461362 = sum of:
            0.008461362 = weight(_text_:a in 5406) [ClassicSimilarity], result of:
              0.008461362 = score(doc=5406,freq=2.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.15287387 = fieldWeight in 5406, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5406)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Footnote
    See also the contributions by Hermes / Bischoff and by Gödert, as well as the DORS project in connection with the DDC
    Type
    a
  8. Geißelmann, F.: Online-Version einer Aufstellungssystematik (1995) 0.02
    0.024060382 = product of:
      0.048120763 = sum of:
        0.04530031 = weight(_text_:von in 929) [ClassicSimilarity], result of:
          0.04530031 = score(doc=929,freq=2.0), product of:
            0.12806706 = queryWeight, product of:
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.04800207 = queryNorm
            0.35372335 = fieldWeight in 929, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.09375 = fieldNorm(doc=929)
        0.002820454 = product of:
          0.008461362 = sum of:
            0.008461362 = weight(_text_:a in 929) [ClassicSimilarity], result of:
              0.008461362 = score(doc=929,freq=2.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.15287387 = fieldWeight in 929, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=929)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Source
    Aufbau und Erschließung begrifflicher Datenbanken: Beiträge zur bibliothekarischen Klassifikation. Eine Auswahl von Vorträgen der Jahrestagungen 1993 (Kaiserslautern) und 1994 (Oldenburg) der Gesellschaft für Klassifikation. Hrsg.: H. Havekost u. H.-J. Wätjen
    Type
    a
  9. Gödert, W.: Facettenklassifikation im Online-Retrieval (1992) 0.02
    0.02370751 = product of:
      0.04741502 = sum of:
        0.04576976 = weight(_text_:von in 4574) [ClassicSimilarity], result of:
          0.04576976 = score(doc=4574,freq=6.0), product of:
            0.12806706 = queryWeight, product of:
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.04800207 = queryNorm
            0.357389 = fieldWeight in 4574, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4574)
        0.0016452647 = product of:
          0.004935794 = sum of:
            0.004935794 = weight(_text_:a in 4574) [ClassicSimilarity], result of:
              0.004935794 = score(doc=4574,freq=2.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.089176424 = fieldWeight in 4574, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4574)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    Faceted classifications have so far been considered mainly with regard to their possible use in precombined systematic catalogues or bibliographies, and less under the aspect of a possible deployment in postcoordinating retrieval systems. This article aims to show that faceted classifications can be superior to other online-retrieval techniques. To this end, concept and facet analysis should be combined with a structure-mapping notation system, so that Boolean operators (for combining facets independently of a defined citation order) and truncation yield hierarchically differentiated document sets for complex queries. The method is illustrated with two examples: the first uses a small classification developed by B. Buchanan, the second the classification system used for Library and Information Science Abstracts (LISA). Using PRECIS as a further example, the possibilities that role operators can offer for syntactic retrieval are discussed.
    Type
    a
  10. Wätjen, H.-J.: GERHARD : Automatisches Sammeln, Klassifizieren und Indexieren von wissenschaftlich relevanten Informationsressourcen im deutschen World Wide Web (1998) 0.02
    0.019508056 = product of:
      0.039016113 = sum of:
        0.03737085 = weight(_text_:von in 3064) [ClassicSimilarity], result of:
          0.03737085 = score(doc=3064,freq=4.0), product of:
            0.12806706 = queryWeight, product of:
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.04800207 = queryNorm
            0.29180688 = fieldWeight in 3064, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3064)
        0.0016452647 = product of:
          0.004935794 = sum of:
            0.004935794 = weight(_text_:a in 3064) [ClassicSimilarity], result of:
              0.004935794 = score(doc=3064,freq=2.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.089176424 = fieldWeight in 3064, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3064)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    The intellectual indexing of the Internet is in crisis. Yahoo and other services cannot keep pace with the growth of the Web. GERHARD is currently the only search and navigation service worldwide that, using computational-linguistic and statistical methods, also fully and automatically classifies the Internet resources collected by a robot. Well over a million HTML documents from academically relevant servers in Germany can be searched in the database, as with other search engines, but can also be explored by navigating the trilingual Universal Decimal Classification (ETH Library, Zurich)
    Type
    a
  11. Reisser, M.: Anforderungen an bibliothekarische Klassifikationen bei der Verwendung der EDV (1993) 0.01
    0.014035223 = product of:
      0.028070446 = sum of:
        0.026425181 = weight(_text_:von in 5017) [ClassicSimilarity], result of:
          0.026425181 = score(doc=5017,freq=2.0), product of:
            0.12806706 = queryWeight, product of:
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.04800207 = queryNorm
            0.20633863 = fieldWeight in 5017, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5017)
        0.0016452647 = product of:
          0.004935794 = sum of:
            0.004935794 = weight(_text_:a in 5017) [ClassicSimilarity], result of:
              0.004935794 = score(doc=5017,freq=2.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.089176424 = fieldWeight in 5017, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5017)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    The methods for the formal and subject indexing of media holdings in libraries were, without exception, developed and optimized against the background of conventional cataloguing. The increasing use of computer technology in libraries makes a critical review of these traditional indexing methods necessary. The subject of this study is library classification and its use in online public access catalogues (OPACs) and other information retrieval systems (IRS). On the basis of library classification theory, the various types of classification are examined with regard to their suitability for the common search functions in these systems. In addition, a catalogue of requirements for subject-heading registers and indexes is developed, intended to ensure verbal access to the individual classes of a library classification in the online dialogue
    Type
    a
  12. Comaromi, C.L.: Summation of classification as an enhancement of intellectual access to information in an online environment (1990) 0.01
    0.012501333 = product of:
      0.05000533 = sum of:
        0.05000533 = product of:
          0.075008 = sum of:
            0.0099718105 = weight(_text_:a in 3576) [ClassicSimilarity], result of:
              0.0099718105 = score(doc=3576,freq=4.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.18016359 = fieldWeight in 3576, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3576)
            0.065036185 = weight(_text_:22 in 3576) [ClassicSimilarity], result of:
              0.065036185 = score(doc=3576,freq=2.0), product of:
                0.16809508 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04800207 = queryNorm
                0.38690117 = fieldWeight in 3576, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3576)
          0.6666667 = coord(2/3)
      0.25 = coord(1/4)
    
    Abstract
    Classification structure and indexes to classifications need to be better understood before classification can be a major access point in online catalogs.
    Date
    8. 1.2007 12:22:40
    Type
    a
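Entries 12 onward have a different explain shape: only the nested sub-query (the terms "a" and "22") matches, so the tree shows an inner coord(2/3) and a top-level coord(1/4). Assuming Lucene's ClassicSimilarity coordination factor coord(overlap, maxOverlap) = overlap / maxOverlap, the arithmetic of entry 12 can be checked directly:

```python
def coord(overlap, max_overlap):
    # ClassicSimilarity coordination factor: fraction of query clauses matched
    return overlap / max_overlap

# Entry 12 (doc 3576): term scores for "a" and "22" from the explain tree
inner = (0.0099718105 + 0.065036185) * coord(2, 3)  # two of three inner clauses match
total = inner * coord(1, 4)                         # one of four top-level clauses matches
print(total)   # ≈ 0.012501333, matching the entry's displayed score
```

This is why the English-language entries score lower here: they miss the heavily weighted `_text_:von` clause and are boosted only by the low-idf term "a" and the date fragment "22".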
  13. Jenkins, C.: Automatic classification of Web resources using Java and Dewey Decimal Classification (1998) 0.01
    0.009427017 = product of:
      0.037708066 = sum of:
        0.037708066 = product of:
          0.0565621 = sum of:
            0.011036771 = weight(_text_:a in 1673) [ClassicSimilarity], result of:
              0.011036771 = score(doc=1673,freq=10.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.19940455 = fieldWeight in 1673, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1673)
            0.045525327 = weight(_text_:22 in 1673) [ClassicSimilarity], result of:
              0.045525327 = score(doc=1673,freq=2.0), product of:
                0.16809508 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04800207 = queryNorm
                0.2708308 = fieldWeight in 1673, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1673)
          0.6666667 = coord(2/3)
      0.25 = coord(1/4)
    
    Abstract
    The Wolverhampton Web Library (WWLib) is a WWW search engine that provides access to UK-based information. The experimental version, developed in 1995, was a success but highlighted the need for a much higher degree of automation. An interesting feature of the experimental WWLib was that it organised information according to DDC. Discusses the advantages of classification and describes the automatic classifier that is being developed in Java as part of the new, fully automated WWLib
    Date
    1. 8.1996 22:08:06
    Footnote
    Contribution to a special issue devoted to the Proceedings of the 7th International World Wide Web Conference, held 14-18 April 1998, Brisbane, Australia; vgl. auch: http://www7.scu.edu.au/programme/posters/1846/com1846.htm.
    Type
    a
  14. Kent, R.E.: Organizing conceptual knowledge online : metadata interoperability and faceted classification (1998) 0.01
    0.009232819 = product of:
      0.036931276 = sum of:
        0.036931276 = product of:
          0.055396914 = sum of:
            0.009871588 = weight(_text_:a in 57) [ClassicSimilarity], result of:
              0.009871588 = score(doc=57,freq=8.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.17835285 = fieldWeight in 57, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=57)
            0.045525327 = weight(_text_:22 in 57) [ClassicSimilarity], result of:
              0.045525327 = score(doc=57,freq=2.0), product of:
                0.16809508 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04800207 = queryNorm
                0.2708308 = fieldWeight in 57, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=57)
          0.6666667 = coord(2/3)
      0.25 = coord(1/4)
    
    Abstract
    Conceptual Knowledge Markup Language (CKML), an application of XML, is a new standard being promoted for the specification of online conceptual knowledge (Kent and Shrivastava, 1998). CKML follows the philosophy of Conceptual Knowledge Processing (Wille, 1982), a principled approach to knowledge representation and data analysis, which advocates the development of methodologies and techniques to support people in their rational thinking, judgement and actions. CKML was developed and is being used in the WAVE networked information discovery and retrieval system (Kent and Neuss, 1994) as a standard for the specification of conceptual knowledge
    Date
    30.12.2001 16:22:41
    Type
    a
  15. Vizine-Goetz, D.: OCLC investigates using classification tools to organize Internet data (1998) 0.01
    0.008750932 = product of:
      0.03500373 = sum of:
        0.03500373 = product of:
          0.052505594 = sum of:
            0.0069802674 = weight(_text_:a in 2342) [ClassicSimilarity], result of:
              0.0069802674 = score(doc=2342,freq=4.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.12611452 = fieldWeight in 2342, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2342)
            0.045525327 = weight(_text_:22 in 2342) [ClassicSimilarity], result of:
              0.045525327 = score(doc=2342,freq=2.0), product of:
                0.16809508 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04800207 = queryNorm
                0.2708308 = fieldWeight in 2342, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2342)
          0.6666667 = coord(2/3)
      0.25 = coord(1/4)
    
    Abstract
    The knowledge structures that form traditional library classification schemes hold great potential for improving resource description and discovery on the Internet and for organizing electronic document collections. The advantages of assigning subject tokens (classes) to documents from a scheme like the DDC system are well documented
    Date
    22. 9.1997 19:16:05
    Type
    a
  16. Ardo, A.; Lundberg, S.: ¬A regional distributed WWW search and indexing service : the DESIRE way (1998) 0.01
    0.00849798 = product of:
      0.03399192 = sum of:
        0.03399192 = product of:
          0.05098788 = sum of:
            0.011966172 = weight(_text_:a in 4190) [ClassicSimilarity], result of:
              0.011966172 = score(doc=4190,freq=16.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.2161963 = fieldWeight in 4190, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4190)
            0.039021708 = weight(_text_:22 in 4190) [ClassicSimilarity], result of:
              0.039021708 = score(doc=4190,freq=2.0), product of:
                0.16809508 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04800207 = queryNorm
                0.23214069 = fieldWeight in 4190, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4190)
          0.6666667 = coord(2/3)
      0.25 = coord(1/4)
    
    Abstract
    Creates an open, metadata-aware system for distributed, collaborative WWW indexing. The system has 3 main components: a harvester (for collecting information), a database (for making the collection searchable), and a user interface (for making the information available). All components can be distributed across networked computers, thus supporting scalability. The system is metadata-aware and thus allows searches on several fields including title, document author and URL. Nordic Web Index (NWI) is an application using this system to create a regional Nordic Web-indexing service. NWI is built using 5 collaborating service points within the Nordic countries. The NWI databases can be used to build additional services
    Date
    1. 8.1996 22:08:06
    Footnote
    Contribution to a special issue devoted to the Proceedings of the 7th International World Wide Web Conference, held 14-18 April 1998, Brisbane, Australia
    Type
    a
  17. Kwasnik, B.H.: ¬The role of classification in knowledge representation (1999) 0.01
    0.00849798 = product of:
      0.03399192 = sum of:
        0.03399192 = product of:
          0.05098788 = sum of:
            0.011966172 = weight(_text_:a in 2464) [ClassicSimilarity], result of:
              0.011966172 = score(doc=2464,freq=16.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.2161963 = fieldWeight in 2464, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2464)
            0.039021708 = weight(_text_:22 in 2464) [ClassicSimilarity], result of:
              0.039021708 = score(doc=2464,freq=2.0), product of:
                0.16809508 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04800207 = queryNorm
                0.23214069 = fieldWeight in 2464, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2464)
          0.6666667 = coord(2/3)
      0.25 = coord(1/4)
    
    Abstract
    A fascinating, broad-ranging article about classification, knowledge, and how they relate. Hierarchies, trees, paradigms (a two-dimensional classification that can look something like a spreadsheet), and facets are covered, with descriptions of how they work and how they can be used for knowledge discovery and creation. Kwasnik outlines how to make a faceted classification: choose facets, develop facets, analyze entities using the facets, and make a citation order. Facets are useful for many reasons: they do not require complete knowledge of the entire body of material; they are hospitable, flexible, and expressive; they do not require a rigid background theory; they can mix theoretical structures and models; and they allow users to view things from many perspectives. Facets do have faults: it can be hard to pick the right ones; it is hard to show relations between them; and it is difficult to visualize them. The coverage of the other methods is equally thorough and there is much to consider for anyone putting a classification on the web.
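The four steps Kwasnik outlines (choose facets, develop them, analyze entities using them, make a citation order) can be sketched as a small data structure; the facets, values, and example entity below are hypothetical illustrations, not taken from the article:

```python
# Steps 1-2: choose facets and develop each one as an independent vocabulary.
facets = {
    "subject": ["classification", "retrieval"],
    "form":    ["article", "monograph"],
    "period":  ["1990s", "2000s"],
}

# Step 4: the citation order fixes how facet values combine into one notation.
citation_order = ["subject", "form", "period"]

def classify(entity):
    """Step 3: analyze an entity facet by facet, then emit its notation."""
    for facet in citation_order:
        if entity[facet] not in facets[facet]:
            raise ValueError(f"{entity[facet]!r} is not in facet {facet!r}")
    return "/".join(entity[f] for f in citation_order)

notation = classify({"subject": "classification",
                     "form": "article",
                     "period": "1990s"})
```

Note how this reflects two of the strengths the abstract lists: each facet vocabulary can be extended independently (hospitality), and no facet requires complete knowledge of the whole collection up front.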
    Source
    Library trends. 48(1999) no.1, S.22-47
    Type
    a
  18. Riesthuis, G.J.A.; Bliedung, S.: Thesaurification of UDC: preliminary report (1990) 0.00
    0.0014248411 = product of:
      0.0056993645 = sum of:
        0.0056993645 = product of:
          0.017098093 = sum of:
            0.017098093 = weight(_text_:a in 258) [ClassicSimilarity], result of:
              0.017098093 = score(doc=258,freq=6.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.3089162 = fieldWeight in 258, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.109375 = fieldNorm(doc=258)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Source
    The UDC: Essays for a new decade. Ed.: A. Gilchrist, D. Strachan
    Type
    a
  19. Welty, C.A.; Jenkins, J.: Formal ontology for subject (1999) 0.00
    0.0012437033 = product of:
      0.004974813 = sum of:
        0.004974813 = product of:
          0.014924439 = sum of:
            0.014924439 = weight(_text_:a in 4962) [ClassicSimilarity], result of:
              0.014924439 = score(doc=4962,freq=14.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.26964417 = fieldWeight in 4962, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4962)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    Subject-based classification is an important part of information retrieval, and has a long history in libraries, where a subject taxonomy was used to determine the location of books on the shelves. We have been studying the notion of subject itself, in order to determine a formal ontology of subject for a large-scale digital library card catalog system. Deep analysis reveals a lot of ambiguity regarding the usage of subjects in existing systems and terminology, and we attempt to formalize these notions into a single framework for representing subjects.
    Type
    a
  20. Loth, K.; Funk, H.: Subject search on ETHICS on the basis of the UDC (1990) 0.00
    0.0012212924 = product of:
      0.0048851697 = sum of:
        0.0048851697 = product of:
          0.014655508 = sum of:
            0.014655508 = weight(_text_:a in 256) [ClassicSimilarity], result of:
              0.014655508 = score(doc=256,freq=6.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.26478532 = fieldWeight in 256, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=256)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Source
    The UDC: Essays for a new decade. Ed.: A. Gilchrist, D. Strachan
    Type
    a
