Search (116 results, page 1 of 6)

  • theme_ss:"Klassifikationssysteme im Online-Retrieval"
  1. Robbio, A. de; Maguolo, D.; Marini, A.: Scientific and general subject classifications in the digital world (2001) 0.02
    0.017945496 = product of:
      0.06580015 = sum of:
        0.037797503 = weight(_text_:informatik in 2) [ClassicSimilarity], result of:
          0.037797503 = score(doc=2,freq=2.0), product of:
            0.16761672 = queryWeight, product of:
              5.1024737 = idf(docFreq=730, maxDocs=44218)
              0.03285009 = queryNorm
            0.2254996 = fieldWeight in 2, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.1024737 = idf(docFreq=730, maxDocs=44218)
              0.03125 = fieldNorm(doc=2)
        0.02284858 = weight(_text_:software in 2) [ClassicSimilarity], result of:
          0.02284858 = score(doc=2,freq=2.0), product of:
            0.1303213 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03285009 = queryNorm
            0.17532499 = fieldWeight in 2, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03125 = fieldNorm(doc=2)
        0.0051540704 = product of:
          0.015462211 = sum of:
            0.015462211 = weight(_text_:web in 2) [ClassicSimilarity], result of:
              0.015462211 = score(doc=2,freq=2.0), product of:
                0.10720661 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03285009 = queryNorm
                0.14422815 = fieldWeight in 2, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2)
          0.33333334 = coord(1/3)
      0.27272728 = coord(3/11)
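The relevance figures above are Lucene "explain" output for the classic TF-IDF similarity. As a minimal sketch (the textbook formula, not the search engine's own code), the "informatik" leg of result 1 can be reproduced from the constants shown in the tree:

```python
import math

# Hedged sketch of Lucene's classic TF-IDF similarity; queryNorm and
# fieldNorm are copied from the explain output above, not computed here.
def tf(freq):
    return math.sqrt(freq)                 # 1.4142135 for freq=2.0

def idf(doc_freq, max_docs):
    return 1.0 + math.log(max_docs / (doc_freq + 1))

query_norm = 0.03285009                    # queryNorm from the output above
field_norm = 0.03125                       # fieldNorm(doc=2)

i = idf(730, 44218)                        # ≈ 5.1024737
query_weight = i * query_norm              # ≈ 0.16761672
field_weight = tf(2.0) * i * field_norm    # tf * idf * fieldNorm ≈ 0.2254996
score = query_weight * field_weight        # ≈ 0.037797503 for _text_:informatik
print(score)
```

Each term's contribution is queryWeight × fieldWeight; the coord() factors then scale the sum by the fraction of query terms that matched.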
    
    Abstract
    In the present work we discuss opportunities, problems, tools and techniques encountered when interconnecting discipline-specific subject classifications, primarily organized as search devices in bibliographic databases, with general classifications originally devised for book shelving in public libraries. We first state the fundamental distinction between topical (or subject) classifications and object classifications. Then we trace the structural limitations that have constrained subject classifications since their library origins, and the devices that were used to overcome the gap with genuine knowledge representation. After recalling some general notions on structure, dynamics and interferences of subject classifications and of the objects they refer to, we sketch a synthetic overview of discipline-specific classifications in Mathematics, Computing and Physics, on the one hand, and of general classifications on the other. In this setting we present The Scientific Classifications Page, which collects groups of Web pages produced by a pool of software tools for developing hypertextual presentations of single or paired subject classifications from sequential source files, as well as facilities for gathering information from KWIC lists of classification descriptions. Further, we propose a concept-oriented methodology for interconnecting subject classifications, with the concrete support of a relational analysis of the whole Mathematics Subject Classification through its evolution since 1959. Finally, we recall a very basic method for interconnection provided by coreference in bibliographic records among index elements from different systems, and point out the advantages of establishing the conditions for a more widespread application of such a method.
A part of these contents was presented under the title Mathematics Subject Classification and related Classifications in the Digital World at the Eighth International Conference Crimea 2001, "Libraries and Associations in the Transient World: New Technologies and New Forms of Cooperation", Sudak, Ukraine, June 9-17, 2001, in a special session on electronic libraries, electronic publishing and electronic information in science chaired by Bernd Wegner, Editor-in-Chief of Zentralblatt MATH.
    Field
    Informatik
  2. Saeed, H.; Chaudhry, A.S.: Using Dewey decimal classification scheme (DDC) for building taxonomies for knowledge organisation (2002) 0.02
    0.015618755 = product of:
      0.085903145 = sum of:
        0.07559501 = weight(_text_:informatik in 4461) [ClassicSimilarity], result of:
          0.07559501 = score(doc=4461,freq=2.0), product of:
            0.16761672 = queryWeight, product of:
              5.1024737 = idf(docFreq=730, maxDocs=44218)
              0.03285009 = queryNorm
            0.4509992 = fieldWeight in 4461, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.1024737 = idf(docFreq=730, maxDocs=44218)
              0.0625 = fieldNorm(doc=4461)
        0.010308141 = product of:
          0.030924423 = sum of:
            0.030924423 = weight(_text_:web in 4461) [ClassicSimilarity], result of:
              0.030924423 = score(doc=4461,freq=2.0), product of:
                0.10720661 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03285009 = queryNorm
                0.2884563 = fieldWeight in 4461, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4461)
          0.33333334 = coord(1/3)
      0.18181819 = coord(2/11)
    
    Abstract
    Terms drawn from the DDC indexes and the IEEE Web Thesaurus were merged with DDC hierarchies to build a taxonomy in the domain of computer science. When displayed as a directory structure using the shareware tool MyInfo, the resultant taxonomy appeared to be a promising categorisation tool that can facilitate browsing of information resources in an electronic environment.
    Field
    Informatik
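The approach described in record 2 (DDC hierarchy captions enriched with thesaurus entry terms, shown as a browsable directory) can be sketched roughly as follows; the class numbers and captions are abbreviated and purely illustrative, and the MyInfo tool itself is not reproduced:

```python
# Hedged sketch: a DDC-based hierarchy enriched with thesaurus entry terms,
# printed as an indented directory tree. Captions are illustrative only.
taxonomy = {
    "004 Data processing; computer science": {
        "004.6 Interfacing and communications": {
            "Network protocols": {},   # entry term merged from a thesaurus
        },
    },
    "005 Computer programming, programs, data": {
        "005.1 Programming techniques": {},
    },
}

def print_tree(node, depth=0):
    """Render the taxonomy the way a directory browser would."""
    for label, children in node.items():
        print("  " * depth + label)
        print_tree(children, depth + 1)

print_tree(taxonomy)
```

The nesting mirrors the DDC hierarchy, while the merged thesaurus terms supply additional, more specific access points below the class captions.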
  3. Sandner, M.; Jahns, Y.: Kurzbericht zum DDC-Übersetzer- und Anwendertreffen bei der IFLA-Konferenz 2005 in Oslo, Norwegen (2005) 0.02
    0.015527355 = product of:
      0.08540045 = sum of:
        0.023348393 = weight(_text_:und in 4406) [ClassicSimilarity], result of:
          0.023348393 = score(doc=4406,freq=28.0), product of:
            0.072807856 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03285009 = queryNorm
            0.3206851 = fieldWeight in 4406, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4406)
        0.062052056 = sum of:
          0.03507092 = weight(_text_:allgemein in 4406) [ClassicSimilarity], result of:
            0.03507092 = score(doc=4406,freq=2.0), product of:
              0.17260577 = queryWeight, product of:
                5.254347 = idf(docFreq=627, maxDocs=44218)
                0.03285009 = queryNorm
              0.20318508 = fieldWeight in 4406, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.254347 = idf(docFreq=627, maxDocs=44218)
                0.02734375 = fieldNorm(doc=4406)
          0.02698114 = weight(_text_:22 in 4406) [ClassicSimilarity], result of:
            0.02698114 = score(doc=4406,freq=6.0), product of:
              0.11503542 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03285009 = queryNorm
              0.23454636 = fieldWeight in 4406, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.02734375 = fieldNorm(doc=4406)
      0.18181819 = coord(2/11)
    
    Content
    "On 16 August 2005, during this year's IFLA conference in Oslo, the annual meeting of the DDC translators and of the worldwide Dewey user institutions (national libraries, producers of national bibliographies) took place. The German translation, completed in the summer of 2005, will appear in print in four volumes at the end of the year, published by K. G. Saur in Munich (ISBN 3-598-11651-9), and will be accompanied in 2006 by the DDC textbook (ISBN 3-598-11748-5), likewise translated into German for the first time. New translations of DDC 22 are planned for the following languages: Arabic (with the growing need to revise class 200, Religion), French (a new abridged edition 14 appeared recently; a four-volume print edition and a French web version are now envisaged), Swedish, and Vietnamese (for which a version of the German translation tool adapted to that language and script will be used).
    The latest news first: the editors of the DDC presented a new information platform, "025.431: The Dewey blog", available since the beginning of July at http://ddc.typepad.com/. Also new is OCLC's five-language "DeweyBrowser" with a colour-coded navigation system; the prototype already opens onto a catalogue of 125,000 e-books and can be tried out at http://ddcresearch.oclc.org/ebooks/fileServer. Since April 2005 OCLC has offered a new current-awareness service for the DDC with different focal points: Dewey Mappings, Dewey News, Dewey Tips, Dewey Updates, and Dewey Journal (the latter picks up topics from all four areas); subscribe at http://www.oclc.org/dewey/syndicated/rss.htm. Important for open-shelf arrangement: the segmentation of Dewey numbers has been reduced. From September 2005 LoC assigns only a single segmentation mark, at the point where the respective number ends in the English abridged edition. The beginning of a notation component from Table 1, Standard Subdivisions, is thus no longer marked. For building shelf marks the Dewey Cutter Program is available for download at www.oclc.org/dewey/support/program.
    General: unlike earlier new editions of the standard edition, DDC 22 is an edition without a complete revision of any single class. It does, however, contain numerous changes and expansions in almost all disciplines and in many auxiliary tables. A separate edition of class 200, Religion, has also appeared. The current abridged edition of DDC 22 (14, from 2004) takes all of these changes into account. The electronic version likewise exists in a full variant (WebDewey) and an abridged variant (Abridged WebDewey) and always reflects the latest state of the classification. A tutorial for using WebDewey is available at www.oclc.org/dewey/resources/tutorial. In this electronic version the index contains far more built numbers and verbal access points (derived from the title data of WorldCat) than the print edition, as well as mappings to the most recent authority records from LCSH and MeSH. Current: the membership of the EPC (Editorial Policy Committee) changed over the last year. This highest body of the DDC has set priorities for the current work plan. It was agreed that larger revision projects will in future be put up for professional discussion via the Dewey website, as in a comment procedure. www.oclc.org/dewey/discussion/."
    Source
    Mitteilungen der Vereinigung Österreichischer Bibliothekarinnen und Bibliothekare. 58(2005) H.3, S.89-91
  4. Lösse, M.; Svensson, L.: "Classification at a Crossroad" : Internationales UDC-Seminar 2009 in Den Haag, Niederlande (2010) 0.01
    0.013093275 = product of:
      0.048008673 = sum of:
        0.0213947 = weight(_text_:und in 4379) [ClassicSimilarity], result of:
          0.0213947 = score(doc=4379,freq=8.0), product of:
            0.072807856 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03285009 = queryNorm
            0.29385152 = fieldWeight in 4379, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=4379)
        0.0077311057 = product of:
          0.023193317 = sum of:
            0.023193317 = weight(_text_:web in 4379) [ClassicSimilarity], result of:
              0.023193317 = score(doc=4379,freq=2.0), product of:
                0.10720661 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03285009 = queryNorm
                0.21634221 = fieldWeight in 4379, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4379)
          0.33333334 = coord(1/3)
        0.018882865 = product of:
          0.03776573 = sum of:
            0.03776573 = weight(_text_:22 in 4379) [ClassicSimilarity], result of:
              0.03776573 = score(doc=4379,freq=4.0), product of:
                0.11503542 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03285009 = queryNorm
                0.32829654 = fieldWeight in 4379, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4379)
          0.5 = coord(1/2)
      0.27272728 = coord(3/11)
    
    Abstract
    On 29 and 30 October 2009 the second international UDC seminar, on the topic "Classification at a Crossroad", took place at the Royal Library in The Hague. As with the first conference of this kind in 2007, the event was organized by the UDC Consortium (UDCC). This year's event focused on indexing the World Wide Web through better use of classifications (in particular, of course, the UDC), including user-friendly representations of information and knowledge. Standards, new technologies and services, semantic search and multilingual access also played a role. 135 participants from 35 countries came to The Hague. With 22 papers from 14 different countries the programme covered a broad range, with the United Kingdom most strongly represented, contributing five papers. On both conference days the thematic focus was set by the opening lectures, which were then explored further in a total of six thematic sessions.
    Date
    22. 1.2010 15:06:54
  5. XFML Core - eXchangeable Faceted Metadata Language (2003) 0.01
    0.012728478 = product of:
      0.070006624 = sum of:
        0.05712145 = weight(_text_:software in 6673) [ClassicSimilarity], result of:
          0.05712145 = score(doc=6673,freq=2.0), product of:
            0.1303213 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03285009 = queryNorm
            0.43831247 = fieldWeight in 6673, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.078125 = fieldNorm(doc=6673)
        0.012885176 = product of:
          0.038655527 = sum of:
            0.038655527 = weight(_text_:web in 6673) [ClassicSimilarity], result of:
              0.038655527 = score(doc=6673,freq=2.0), product of:
                0.10720661 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03285009 = queryNorm
                0.36057037 = fieldWeight in 6673, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.078125 = fieldNorm(doc=6673)
          0.33333334 = coord(1/3)
      0.18181819 = coord(2/11)
    
    Abstract
    The specification for XFML, a markup language designed to handle faceted classifications. Browsing the site (http://www.xfml.org/) will reveal news about XFML and links to related software and web sites. XFML is not an officially recognized Internet standard, but is the de facto standard.
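A faceted map in the spirit of XFML can be sketched as below. This is an illustrative, assumed rendering built around the core elements commonly described for the format (facet, topic, name, page, occurrence); the exact element and attribute names should be checked against the specification itself:

```python
import xml.etree.ElementTree as ET

# Hedged sketch of an XFML-style faceted metadata document; element and
# attribute names are assumptions to be verified against the XFML Core spec.
root = ET.Element("xfml", version="1.0", url="http://example.org/map.xfml")
ET.SubElement(root, "facet", id="subject").text = "Subject"

topic = ET.SubElement(root, "topic", id="classification", facetid="subject")
ET.SubElement(topic, "name").text = "Faceted classification"

# A page is indexed by pointing occurrences at topics.
page = ET.SubElement(root, "page", url="http://example.org/intro.html")
ET.SubElement(page, "occurrence", topicid="classification")

print(ET.tostring(root, encoding="unicode"))
```

The point of the format is exchange: because topics are anchored to facets and pages reference topics by id, two sites can merge their maps without agreeing on a single hierarchy first.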
  6. Satyapal, B.G.; Satyapal, N.S.: SATSAN AUTOMATRIX Version 1 : a computer programme for synthesis of Colon class number according to the postulational approach (2006) 0.01
    0.01267789 = product of:
      0.0697284 = sum of:
        0.05712145 = weight(_text_:software in 1492) [ClassicSimilarity], result of:
          0.05712145 = score(doc=1492,freq=8.0), product of:
            0.1303213 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03285009 = queryNorm
            0.43831247 = fieldWeight in 1492, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1492)
        0.012606948 = weight(_text_:und in 1492) [ClassicSimilarity], result of:
          0.012606948 = score(doc=1492,freq=4.0), product of:
            0.072807856 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03285009 = queryNorm
            0.17315367 = fieldWeight in 1492, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1492)
      0.18181819 = coord(2/11)
    
    Abstract
    Describes the features and capabilities of the software SATSAN AUTOMATRIX version 1 for semi-automatic synthesis of the Colon Class Number (CCN) for a given subject according to the Postulational Approach formulated by S.R. Ranganathan. The present AUTOMATRIX version 1 gives the user more facilities to carry out facet analysis of a subject (simple, compound, or complex) preparatory to synthesizing the corresponding CCN. The software also enables searching for and using previously constructed class numbers automatically, as well as the maintenance and use of databases of the CC Index, facet formulae and CC schedules for subjects going with different Basic Subjects. The paper begins with a brief account of the authors' consultations with, and directions received from, Prof. A. Neelameghan in the course of developing the software. Oracle 8 and VB6 have been used in writing the programmes, but for operating SATSAN it is not necessary for users to be proficient in VB6 or Oracle 8. Any computer-literate person with a basic knowledge of Microsoft Word will be able to use this application software.
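The postulational synthesis that such software automates can be sketched roughly: facets are appended to a basic class number in PMEST order (Personality, Matter, Energy, Space, Time), each introduced by its Colon Classification connecting symbol. The sketch below is illustrative, not SATSAN's actual algorithm, and the example facet numbers are assumed rather than taken from the paper:

```python
# Hedged sketch of postulational synthesis with PMEST facets and the
# Colon Classification connecting symbols. Not SATSAN's actual algorithm.
CONNECTORS = {
    "personality": ",",   # [P]
    "matter": ";",        # [M]
    "energy": ":",        # [E]
    "space": ".",         # [S]
    "time": "'",          # [T]
}
FACET_ORDER = ["personality", "matter", "energy", "space", "time"]

def synthesize(basic_subject, facets):
    """Append each supplied facet to the basic class number in PMEST order."""
    number = basic_subject
    for facet in FACET_ORDER:
        if facet in facets:
            number += CONNECTORS[facet] + facets[facet]
    return number

# e.g. Medicine (L); lungs; tuberculosis; treatment; India; 1950s
print(synthesize("L", {"personality": "45", "matter": "421",
                       "energy": "6", "space": "44", "time": "N5"}))
# → L,45;421:6.44'N5
```

Facet analysis of the subject (deciding which concept fills which PMEST slot) is the hard part the AUTOMATRIX interface assists with; the string assembly itself is mechanical.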
  7. Hanke, M.: Bibliothekarische Klassifikationssysteme im semantischen Web : zu Chancen und Problemen von Linked-data-Repräsentationen ausgewählter Klassifikationssysteme (2014) 0.01
    0.008289056 = product of:
      0.04558981 = sum of:
        0.028302528 = weight(_text_:und in 2463) [ClassicSimilarity], result of:
          0.028302528 = score(doc=2463,freq=14.0), product of:
            0.072807856 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03285009 = queryNorm
            0.38872904 = fieldWeight in 2463, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=2463)
        0.01728728 = product of:
          0.051861838 = sum of:
            0.051861838 = weight(_text_:web in 2463) [ClassicSimilarity], result of:
              0.051861838 = score(doc=2463,freq=10.0), product of:
                0.10720661 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03285009 = queryNorm
                0.48375595 = fieldWeight in 2463, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2463)
          0.33333334 = coord(1/3)
      0.18181819 = coord(2/11)
    
    Abstract
    The maintenance and application of classification systems for information resources are traditionally a core competency of libraries. These systems have often grown historically, and in the past the various systems were typically published as printed schedules or proprietary databases. Semantic web technologies make it possible to represent classification systems in a standardized and machine-readable way, and to make them available for reuse as Linked (Open) Data. Using selected examples of classification systems that have already been published as Linked (Open) Data, this article discusses central semantic and technical questions and presents possible fields of application and opportunities. For example, the strong structuring of data that machine readability in the semantic web requires can contribute to a better understanding of the classification systems and may provide positive impulses for their further development. Representations of classification systems prepared for the semantic web can be used, among other things, for catalogue enrichment or for the application-oriented creation of concordances between different classification or concept systems.
    Theme
    Semantic Web
  8. Broughton, V.: Finding Bliss on the Web : some problems of representing faceted terminologies in digital environments 0.01
    0.008219329 = product of:
      0.04520631 = sum of:
        0.034272872 = weight(_text_:software in 3532) [ClassicSimilarity], result of:
          0.034272872 = score(doc=3532,freq=2.0), product of:
            0.1303213 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03285009 = queryNorm
            0.2629875 = fieldWeight in 3532, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.046875 = fieldNorm(doc=3532)
        0.010933435 = product of:
          0.032800302 = sum of:
            0.032800302 = weight(_text_:web in 3532) [ClassicSimilarity], result of:
              0.032800302 = score(doc=3532,freq=4.0), product of:
                0.10720661 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03285009 = queryNorm
                0.3059541 = fieldWeight in 3532, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3532)
          0.33333334 = coord(1/3)
      0.18181819 = coord(2/11)
    
    Abstract
    The Bliss Bibliographic Classification is the only example of a fully faceted general classification scheme in the Western world. Although it is the object of much interest as a model for other tools it suffers from the lack of a web presence, and remedying this is an immediate objective for its editors. Understanding how this might be done presents some challenges, as the scheme is semantically very rich and complex in the range and nature of the relationships it contains. The automatic management of these is already in place using local software, but exporting this to a common data format needs careful thought and planning. Various encoding schemes, both for traditional classifications, and for digital materials, represent variously: the concepts; their functional roles; and the relationships between them. Integrating these aspects in a coherent and interchangeable manner appears to be achievable, but the most appropriate format is as yet unclear.
  9. Devadason, F.J.; Intaraksa, N.; Patamawongjariya, P.; Desai, K.: Faceted indexing application for organizing and accessing internet resources (2003) 0.01
    0.007970477 = product of:
      0.043837626 = sum of:
        0.032312773 = weight(_text_:software in 3966) [ClassicSimilarity], result of:
          0.032312773 = score(doc=3966,freq=4.0), product of:
            0.1303213 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03285009 = queryNorm
            0.24794699 = fieldWeight in 3966, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03125 = fieldNorm(doc=3966)
        0.011524852 = product of:
          0.034574557 = sum of:
            0.034574557 = weight(_text_:web in 3966) [ClassicSimilarity], result of:
              0.034574557 = score(doc=3966,freq=10.0), product of:
                0.10720661 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03285009 = queryNorm
                0.32250395 = fieldWeight in 3966, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3966)
          0.33333334 = coord(1/3)
      0.18181819 = coord(2/11)
    
    Abstract
    Organizing and providing access to the resources on the Internet has been a problem area in spite of the availability of sophisticated search engines and other software tools. There have been several attempts to organize the resources on the WWW. Some of them have tried to use traditional library classification schemes such as the Library of Congress Classification, the Dewey Decimal Classification and others. However, there is a need to assign proper subject headings to them and present them in a logical or hierarchical sequence to cater to the need for browsing. This paper describes an experimental system designed to organize and provide access to web documents using a faceted pre-coordinate indexing system based on the Deep Structure Indexing System (DSIS) derived from POPSI (Postulate based Permuted Subject Indexing) of Bhattacharyya, and the facet analysis and chain indexing system of Ranganathan. A prototype software system has been designed to create a database of records specifying web documents according to the Dublin Core, and to input a faceted subject heading according to DSIS. Synonymous terms are added to the standard terms in the heading using appropriate symbols. Once the data are entered along with a description and URL of the web document, the record is stored in the system. More than one faceted subject heading can be assigned to a record depending on the content of the original document. The system stores the surrogates and keeps the faceted subject headings separately after establishing a link. Search is carried out on index entries derived from the faceted subject heading using the chain indexing technique. If a single term is input, the system searches for its presence in the faceted subject headings and displays the subject headings in a sorted sequence reflecting an organizing sequence.
    If the number of retrieved headings is too large (running to more than a page), the user has the option of entering another search term to be searched in combination. The system then searches the subject headings already retrieved and looks for those containing the second term. The retrieved faceted subject headings can be displayed and browsed. When the relevant subject heading is selected, the system displays the records with their URLs, through which the original document on the web can be accessed. The prototype system, developed under the Windows NT environment using ASP and a web server, is under rigorous testing. The database and index management routines need further development.
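The chain-indexing step described above (index entries derived from each link of a faceted heading, qualified by its superordinate links) can be sketched as follows; the heading is illustrative, not taken from the prototype:

```python
# Hedged sketch of chain indexing: each link of the heading becomes an
# entry qualified by its superordinates, most specific link first.
def chain_index_entries(chain):
    """Emit one index entry per link, working upwards from the last link."""
    entries = []
    for i in range(len(chain) - 1, -1, -1):
        qualifiers = chain[i - 1::-1] if i > 0 else []
        entries.append(": ".join([chain[i]] + qualifiers))
    return entries

heading = ["Internet", "Resources", "Indexing", "Faceted indexing"]
for entry in chain_index_entries(heading):
    print(entry)
```

A single-term search can then match any link of the chain, and the qualified entries sort into the organizing sequence the abstract describes.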
  10. Qualität in der Inhaltserschließung (2021) 0.01
    0.0076746866 = product of:
      0.042210776 = sum of:
        0.037056707 = weight(_text_:und in 753) [ClassicSimilarity], result of:
          0.037056707 = score(doc=753,freq=54.0), product of:
            0.072807856 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03285009 = queryNorm
            0.5089658 = fieldWeight in 753, product of:
              7.3484693 = tf(freq=54.0), with freq of:
                54.0 = termFreq=54.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03125 = fieldNorm(doc=753)
        0.0051540704 = product of:
          0.015462211 = sum of:
            0.015462211 = weight(_text_:web in 753) [ClassicSimilarity], result of:
              0.015462211 = score(doc=753,freq=2.0), product of:
                0.10720661 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03285009 = queryNorm
                0.14422815 = fieldWeight in 753, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03125 = fieldNorm(doc=753)
          0.33333334 = coord(1/3)
      0.18181819 = coord(2/11)
    
    Abstract
    The 70th volume of the BIPRA series deals with quality in subject indexing in the context of established procedures and technological innovations. When heterogeneous products of different methods and systems meet, minimum requirements for the quality of subject indexing must be defined. The question of quality is currently being discussed intensively in various contexts and is taken up in this volume. Authors active in this field describe, each from their own perspective, different aspects of metadata, authority data, formats, indexing procedures and indexing policy. The volume is intended as a guide to, and a stimulus for, the discussion of quality in subject indexing.
    Content
    Contents:
    Editorial - Michael Franke-Maier, Anna Kasprzik, Andreas Ledl und Hans Schürmann
    Qualität in der Inhaltserschließung - Ein Überblick aus 50 Jahren (1970-2020) - Andreas Ledl
    Fit for Purpose - Standardisierung von inhaltserschließenden Informationen durch Richtlinien für Metadaten - Joachim Laczny
    Neue Wege und Qualitäten - Die Inhaltserschließungspolitik der Deutschen Nationalbibliothek - Ulrike Junger und Frank Scholze
    Wissensbasen für die automatische Erschließung und ihre Qualität am Beispiel von Wikidata - Lydia Pintscher, Peter Bourgonje, Julián Moreno Schneider, Malte Ostendorff und Georg Rehm
    Qualitätssicherung in der GND - Esther Scheven
    Qualitätskriterien und Qualitätssicherung in der inhaltlichen Erschließung - Thesenpapier des Expertenteams RDA-Anwendungsprofil für die verbale Inhaltserschließung (ET RAVI)
    Coli-conc - Eine Infrastruktur zur Nutzung und Erstellung von Konkordanzen - Uma Balakrishnan, Stefan Peters und Jakob Voß
    Methoden und Metriken zur Messung von OCR-Qualität für die Kuratierung von Daten und Metadaten - Clemens Neudecker, Karolina Zaczynska, Konstantin Baierer, Georg Rehm, Mike Gerber und Julián Moreno Schneider
    Datenqualität als Grundlage qualitativer Inhaltserschließung - Jakob Voß
    Bemerkungen zu der Qualitätsbewertung von MARC-21-Datensätzen - Rudolf Ungváry und Péter Király
    Named Entity Linking mit Wikidata und GND - Das Potenzial handkuratierter und strukturierter Datenquellen für die semantische Anreicherung von Volltexten - Sina Menzel, Hannes Schnaitter, Josefine Zinck, Vivien Petras, Clemens Neudecker, Kai Labusch, Elena Leitner und Georg Rehm
    Ein Protokoll für den Datenabgleich im Web am Beispiel von OpenRefine und der Gemeinsamen Normdatei (GND) - Fabian Steeg und Adrian Pohl
    Verbale Erschließung in Katalogen und Discovery-Systemen - Überlegungen zur Qualität - Heidrun Wiesenmüller
    Inhaltserschließung für Discovery-Systeme gestalten - Jan Frederik Maas
    Evaluierung von Verschlagwortung im Kontext des Information Retrievals - Christian Wartena und Koraljka Golub
    Die Qualität der Fremddatenanreicherung FRED - Cyrus Beck
    Quantität als Qualität - Was die Verbünde zur Verbesserung der Inhaltserschließung beitragen können - Rita Albrecht, Barbara Block, Mathias Kratzer und Peter Thiessen
    Hybride Künstliche Intelligenz in der automatisierten Inhaltserschließung - Harald Sack
    Footnote
    Vgl.: https://www.degruyter.com/document/doi/10.1515/9783110691597/html. DOI: https://doi.org/10.1515/9783110691597. Rez. in: Information - Wissenschaft und Praxis 73(2022) H.2-3, S.131-132 (B. Lorenz u. V. Steyer). Weitere Rezension in: o-bib 9(2022) Nr.3 (Martin Völkl) [https://www.o-bib.de/bib/article/view/5843/8714].
    Series
    Bibliotheks- und Informationspraxis; 70
  11. Devadason, F.J.; Intaraksa, N.; Patamawongjariya, P.; Desai, K.: Faceted indexing based system for organizing and accessing Internet resources (2002) 0.01
    Abstract
    Organizing and providing access to the resources on the Internet has been a problem area in spite of the availability of sophisticated search engines and other software tools. There have been several attempts to organize the resources on the World Wide Web. Some of them have tried to use traditional library classification schemes such as the Library of Congress Classification, the Dewey Decimal Classification and others. However, there is a need to assign proper subject headings to them and present them in a logical or hierarchical sequence to cater to the need for browsing. This paper attempts to describe an experimental system designed to organize and provide access to web documents using a faceted pre-coordinate indexing system based on the Deep Structure Indexing System (DSIS) derived from POPSI (Postulate based Permuted Subject Indexing) of Bhattacharyya, and the facet analysis and chain indexing system of Ranganathan. A prototype software system has been designed to create a database of records specifying web documents according to the Dublin Core and to input a faceted subject heading according to DSIS. Synonymous terms are added to the standard terms in the heading using appropriate symbols. Once the data are entered along with a description and the URL of the web document, the record is stored in the system. More than one faceted subject heading can be assigned to a record depending on the content of the original document. The system stores the surrogates and keeps the faceted subject headings separately after establishing a link. The search is carried out on index entries derived from the faceted subject heading using the chain indexing technique. If a single term is input, the system searches for its presence in the faceted subject headings and displays the subject headings in a sorted sequence reflecting an organizing sequence.
    If the number of retrieved headings is too large (running into more than a page) the user has the option of entering another search term to be searched in combination. The system searches the subject headings already retrieved and looks for those containing the second term. The retrieved faceted subject headings can be displayed and browsed. When the relevant subject heading is selected, the system displays the records with their URLs. Using the URL, the original document on the web can be accessed. The prototype system, developed in a Windows NT environment using ASP and a web server, is under rigorous testing. The database and index management routines need further development.
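    The chain indexing technique mentioned in the abstract can be sketched in a few lines. This is a hypothetical illustration of Ranganathan's chain procedure, not the authors' actual DSIS implementation: each link of a faceted subject-heading chain becomes a lead term, qualified by its broader links in reverse order.

    ```python
    # Hypothetical sketch of chain-index entry generation from a faceted
    # subject heading; the real DSIS-based system is not reproduced here.
    def chain_index_entries(heading):
        """Derive one index entry per link in the chain, each lead term
        qualified by the broader links that precede it."""
        links = [part.strip() for part in heading.split(".")]
        entries = []
        # Work from the most specific link back to the broadest one.
        for i in range(len(links) - 1, -1, -1):
            lead = links[i]
            qualifiers = ", ".join(reversed(links[:i]))
            entries.append(f"{lead}. {qualifiers}" if qualifiers else lead)
        return entries
    ```

    For example, `chain_index_entries("Internet. Resources. Organization")` yields an entry under each link, so a search on any single term finds the full heading, as the abstract describes.
    
    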
    An interesting but somewhat confusing article telling how the writers described web pages with Dublin Core metadata, including a faceted classification, and built a system that lets users browse the collection through the facets. They seem to want to cover too much in a short article, and unnecessary space is given over to screen shots showing how Dublin Core metadata was entered. The screen shots of the resulting browsable system are, unfortunately, not as enlightening as one would hope, and there is no discussion of how the system was actually written or the technology behind it. Still, it could be worth reading as an example of such a system and how it is treated in journals.
    Footnote
    Vgl. auch: Devadason, F.J.: Facet analysis and Semantic Web: musings of a student of Ranganathan. Unter: http://www.geocities.com/devadason.geo/FASEMWEB.html#FacetedIndex.
  12. Reiner, U.: Automatische DDC-Klassifizierung von bibliografischen Titeldatensätzen (2009) 0.01
    Content
    Presentation slides for the talk given at the 98th Deutscher Bibliothekartag in Erfurt ("Ein neuer Blick auf Bibliotheken"), session TK10: "Information erschließen und recherchieren - Inhalte erschließen - mit neuen Tools"
    Date
    22. 8.2009 12:54:24
  13. Wätjen, H.-J.: GERHARD : Automatisches Sammeln, Klassifizieren und Indexieren von wissenschaftlich relevanten Informationsressourcen im deutschen World Wide Web (1998) 0.01
    Abstract
    The intellectual indexing of the Internet is in crisis. Yahoo and other services cannot keep pace with the growth of the Web. GERHARD is currently the only search and navigation service worldwide that also fully and automatically classifies robot-collected Internet resources using computational-linguistic and statistical methods. Well over one million HTML documents from academically relevant servers in Germany can be searched in the database as with other search engines, but can also be retrieved by navigating the trilingual Universal Decimal Classification (ETH-Bibliothek Zürich)
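    The general idea of statistically matching documents against a classification scheme can be sketched as follows. This is an illustrative toy only, assuming a simple term-overlap score against UDC caption texts; GERHARD's actual computational-linguistic pipeline is not specified here.

    ```python
    # Toy sketch: rank classification notations by term overlap between
    # a document and each notation's caption text. Not GERHARD's real method.
    from collections import Counter
    import re

    def tokenize(text):
        # Lowercase word tokens, keeping German umlauts and ß.
        return re.findall(r"[a-zäöüß]+", text.lower())

    def classify(document, udc_captions):
        """udc_captions maps notation -> caption text; returns notations
        sorted by descending overlap score, zero-score entries dropped."""
        doc_terms = Counter(tokenize(document))
        scores = {}
        for notation, caption in udc_captions.items():
            score = sum(doc_terms[t] for t in set(tokenize(caption)))
            if score:
                scores[notation] = score
        return sorted(scores.items(), key=lambda kv: -kv[1])
    ```

    The caption texts and notations in any real deployment would come from the UDC itself; here they are stand-ins.
    
    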
  14. Alex, H.; Heiner-Freiling, M.: Melvil (2005) 0.01
    Abstract
    In January 2006 Die Deutsche Bibliothek will launch a new web offering named Melvil, a result of its commitment to the DDC and to the project DDC Deutsch. The web service is based on the translation of the 22nd edition of the DDC, which appears as a print edition from K. G. Saur in October 2005. Beyond that, it offers features that support classifiers in their work and, for the first time, enable verbal searching of DDC-indexed titles for end users. The Melvil web service comprises three applications: MelvilClass, MelvilSearch and MelvilSoap.
  15. Ferris, A.M.: If you buy it, will they use it? : a case study on the use of Classification web (2006) 0.01
    Abstract
    This paper presents a study conducted at the University of Colorado at Boulder (CU-Boulder) to assess the extent to which its catalogers were using Classification Web (Class Web), the subscription-based, online cataloging documentation resource provided by the Library of Congress. In addition, this paper will explore assumptions made by management regarding CU-Boulder catalogers' use of the product, possible reasons for the lower-than-expected use, and recommendations for promoting a more efficient and cost-effective use of Class Web at other institutions similar to CU-Boulder.
    Date
    10. 9.2000 17:38:22
  16. Doyle, B.: ¬The classification and evaluation of Content Management Systems (2003) 0.01
    Abstract
    This is a report on how Doyle and others made a faceted classification scheme for content management systems and made it browsable on the web (see CMS Review in Example Web Sites, below). They discuss why they did it, how, their use of OPML and XFML, how they did research to find terms and categories, and they also include their taxonomy. It is interesting to see facets used in a business environment.
    Date
    30. 7.2004 12:22:52
  17. Denton, W.: Putting facets on the Web : an annotated bibliography (2003) 0.01
    Abstract
    This is a classified, annotated bibliography about how to design faceted classification systems and make them usable on the World Wide Web. It is the first of three works I will be doing. The second, based on the material here and elsewhere, will discuss how to actually make the faceted system and put it online. The third will be a report of how I did just that, what worked, what didn't, and what I learned. Almost every article or book listed here begins with an explanation of what a faceted classification system is, so I won't (but see Steckel in Background below if you don't already know). They all agree that faceted systems are very appropriate for the web. Even pre-web articles (such as Duncan's in Background, below) assert that hypertext and facets will go together well. Combined, it is possible to take a set of documents and classify them or apply subject headings to describe what they are about, then build a navigational structure so that any user, no matter how he or she approaches the material, no matter what his or her goals, can move and search in a way that makes sense to them, but still get to the same useful results as someone else following a different path to the same goal. There is no one way that everyone will always use when looking for information. The more flexible the organization of the information, the more accommodating it is. Facets are more flexible for hypertext browsing than any enumerative or hierarchical system.
    Consider movie listings in newspapers. Most Canadian newspapers list movie showtimes in two large blocks, for the two major theatre chains. The listings are ordered by region (in large cities), then theatre, then movie, and finally by showtime. Anyone wondering where and when a particular movie is playing must scan the complete listings. Determining what movies are playing in the next half hour is very difficult. When movie listings went onto the web, most sites used a simple faceted organization, always with movie name and theatre, and perhaps with region or neighbourhood (thankfully, theatre chains were left out). They make it easy to pick a theatre and see what movies are playing there, or to pick a movie and see what theatres are showing it. To complete the system, the sites should allow users to browse by neighbourhood and showtime, and to order the results in any way they desire. People could thus easily find answers to such questions as, "Where is the new James Bond movie playing?" "What's showing at the Roxy tonight?" "I'm going to be out in Little Finland this afternoon with three hours to kill starting at 2 ... is anything interesting playing?" A hypertext, faceted classification system makes more useful information more easily available to the user. Reading the books and articles below in chronological order will show a certain progression: suggestions that faceting and hypertext might work well, confidence that facets would work well if only someone would make such a system, and finally the beginning of serious work on actually designing, building, and testing faceted web sites. There is a solid basis of how to make faceted classifications (see Vickery in Recommended), but their application online is just starting. Work on XFML (see Van Dijck's work in Recommended), the Exchangeable Faceted Metadata Language, will make this easier.
If it follows previous patterns, parts of the Internet community will embrace the idea and make open source software available for others to reuse. It will be particularly beneficial if professionals in both information studies and computer science can work together to build working systems, standards, and code. Each can benefit from the other's expertise in what can be a very complicated and technical area. One particularly nice thing about this area of research is that people interested in combining facets and the web often have web sites where they post their writings.
    This bibliography is not meant to be exhaustive, but unfortunately it is not as complete as I wanted. Some books and articles are not included, but they may be used in my future work. (These include two books and one article by B.C. Vickery: Faceted Classification Schemes (New Brunswick, NJ: Rutgers, 1966); Classification and Indexing in Science, 3rd ed. (London: Butterworths, 1975); and "Knowledge Representation: A Brief Review" (Journal of Documentation 42 no. 3 (September 1986): 145-159); and A.C. Foskett's "The Future of Faceted Classification" in The Future of Classification, edited by Rita Marcella and Arthur Maltby (Aldershot, England: Gower, 2000): 69-80.) Nevertheless, I hope this bibliography will be useful both for those new to and those familiar with faceted hypertext systems. Some very basic resources are listed, as well as some very advanced ones. Some example web sites are mentioned, but there is no detailed technical discussion of any software. The user interface to any web site is extremely important, and this is briefly mentioned in two or three places (for example in the discussion of lawforwa.org (see Example Web Sites)). The larger question of how to display information graphically and with hypertext is outside the scope of this bibliography. There are five sections: Recommended, Background, Not Relevant, Example Web Sites, and Mailing Lists. Background material is either introductory, advanced, or of peripheral interest, and can be read after the Recommended resources if the reader wants to know more. The Not Relevant category contains articles that may appear in bibliographies but are not relevant for my purposes.
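    The movie-listings example above is, at bottom, filtering records by any combination of facet values in any order. A minimal sketch, with made-up listing data standing in for a real theatre feed:

    ```python
    # Minimal faceted filtering: each record carries facet values, and any
    # subset of facets, in any order, narrows the result set. The sample
    # listings below are invented for illustration.
    def facet_filter(records, **facets):
        """Return the records matching every given facet value."""
        return [r for r in records
                if all(r.get(k) == v for k, v in facets.items())]

    listings = [
        {"movie": "Goldfinger", "theatre": "Roxy",
         "neighbourhood": "Downtown", "time": "19:00"},
        {"movie": "Goldfinger", "theatre": "Odeon",
         "neighbourhood": "Little Finland", "time": "14:00"},
        {"movie": "Metropolis", "theatre": "Roxy",
         "neighbourhood": "Downtown", "time": "21:00"},
    ]
    ```

    `facet_filter(listings, movie="Goldfinger")` answers "where is this movie playing?", while `facet_filter(listings, theatre="Roxy")` answers "what's showing at the Roxy?", each path reaching the same records.
    
    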
  18. Jenkins, C.: Automatic classification of Web resources using Java and Dewey Decimal Classification (1998) 0.01
    Abstract
    The Wolverhampton Web Library (WWLib) is a WWW search engine that provides access to UK-based information. The experimental version, developed in 1995, was a success but highlighted the need for a much higher degree of automation. An interesting feature of the experimental WWLib was that it organised information according to the DDC. Discusses the advantages of classification and describes the automatic classifier that is being developed in Java as part of the new, fully automated WWLib
    Date
    1. 8.1996 22:08:06
    Footnote
    Contribution to a special issue devoted to the Proceedings of the 7th International World Wide Web Conference, held 14-18 April 1998, Brisbane, Australia; vgl. auch: http://www7.scu.edu.au/programme/posters/1846/com1846.htm.
  19. Wätjen, H.-J.: Automatisches Sammeln, Klassifizieren und Indexieren von wissenschaftlich relevanten Informationsressourcen im deutschen World Wide Web : das DFG-Projekt GERHARD (1998) 0.01
  20. Ardo, A.; Lundberg, S.: ¬A regional distributed WWW search and indexing service : the DESIRE way (1998) 0.01
    Abstract
    Creates an open, metadata-aware system for distributed, collaborative WWW indexing. The system has three main components: a harvester (for collecting information), a database (for making the collection searchable), and a user interface (for making the information available). All components can be distributed across networked computers, thus supporting scalability. The system is metadata-aware and thus allows searches on several fields, including title, document author and URL. Nordic Web Index (NWI) is an application using this system to create a regional Nordic web-indexing service. NWI is built using five collaborating service points within the Nordic countries. The NWI databases can be used to build additional services
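    The field-restricted searching the abstract describes can be sketched with a toy index. This is an assumption-laden stand-in, not the DESIRE/NWI software: a real deployment separates the harvester, database and user interface across machines, which this single class only hints at.

    ```python
    # Toy metadata-aware index, illustrating the harvester/database split
    # described above; the class and method names are invented here.
    class MetadataIndex:
        def __init__(self):
            self.records = []

        def harvest(self, record):
            """Add a harvested record, e.g. {"title": ..., "author": ..., "url": ...}."""
            self.records.append(record)

        def search(self, field, term):
            """Case-insensitive substring search restricted to one metadata
            field, as the system supports for title, author and URL."""
            term = term.lower()
            return [r for r in self.records
                    if term in r.get(field, "").lower()]
    ```

    Restricting the match to one field is what distinguishes a metadata-aware index from a plain full-text one: a search for "Ardo" as author does not also match pages that merely mention the name.
    
    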
    Date
    1. 8.1996 22:08:06
    Footnote
    Contribution to a special issue devoted to the Proceedings of the 7th International World Wide Web Conference, held 14-18 April 1998, Brisbane, Australia
    Object
    Nordic Web Index

Languages

  • e 70
  • d 45
  • nl 1

Types

  • a 97
  • el 11
  • m 4
  • s 4
  • h 2
  • x 2
  • p 1
  • More… Less…