Search (109 results, page 1 of 6)

  • × language_ss:"e"
  • × type_ss:"s"
  • × year_i:[2000 TO 2010}
  1. Innovations and advanced techniques in systems, computing sciences and software engineering (2008) 0.08
    0.07983807 = product of:
      0.31935227 = sum of:
        0.07119748 = weight(_text_:230 in 4319) [ClassicSimilarity], result of:
          0.07119748 = score(doc=4319,freq=4.0), product of:
            0.13547163 = queryWeight, product of:
              6.727074 = idf(docFreq=143, maxDocs=44218)
              0.02013827 = queryNorm
            0.5255527 = fieldWeight in 4319, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.727074 = idf(docFreq=143, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4319)
        0.060652044 = weight(_text_:software in 4319) [ClassicSimilarity], result of:
          0.060652044 = score(doc=4319,freq=24.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.75917953 = fieldWeight in 4319, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4319)
        0.060652044 = weight(_text_:software in 4319) [ClassicSimilarity], result of:
          0.060652044 = score(doc=4319,freq=24.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.75917953 = fieldWeight in 4319, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4319)
        0.060652044 = weight(_text_:software in 4319) [ClassicSimilarity], result of:
          0.060652044 = score(doc=4319,freq=24.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.75917953 = fieldWeight in 4319, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4319)
        0.06619864 = product of:
          0.13239728 = sum of:
            0.13239728 = weight(_text_:engineering in 4319) [ClassicSimilarity], result of:
              0.13239728 = score(doc=4319,freq=34.0), product of:
                0.10819342 = queryWeight, product of:
                  5.372528 = idf(docFreq=557, maxDocs=44218)
                  0.02013827 = queryNorm
                1.2237091 = fieldWeight in 4319, product of:
                  5.8309517 = tf(freq=34.0), with freq of:
                    34.0 = termFreq=34.0
                  5.372528 = idf(docFreq=557, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4319)
          0.5 = coord(1/2)
      0.25 = coord(5/20)
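The explain tree above follows Lucene's ClassicSimilarity (TF-IDF) formula: each clause contributes queryWeight × fieldWeight, where tf = √freq, idf = 1 + ln(maxDocs / (docFreq + 1)), queryWeight = idf × queryNorm, and fieldWeight = tf × idf × fieldNorm. A minimal sketch reproducing the `_text_:230` clause of result 1 (the function names are illustrative, only the formulas follow Lucene):

```python
import math

def tf(freq):
    # ClassicSimilarity term frequency: square root of the raw count
    return math.sqrt(freq)

def idf(doc_freq, max_docs):
    # ClassicSimilarity inverse document frequency: 1 + ln(N / (df + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def clause_score(freq, doc_freq, max_docs, query_norm, field_norm):
    qw = idf(doc_freq, max_docs) * query_norm               # queryWeight
    fw = tf(freq) * idf(doc_freq, max_docs) * field_norm    # fieldWeight
    return qw * fw

# Numbers taken from the _text_:230 clause for doc 4319 above
score = clause_score(freq=4.0, doc_freq=143, max_docs=44218,
                     query_norm=0.02013827, field_norm=0.0390625)
print(score)  # ≈ 0.07119748, matching the weight shown in the tree
```

The same sketch reproduces the `software` and `engineering` clauses when fed their freq and docFreq values.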
    
    Abstract
    Innovations and Advanced Techniques in Systems, Computing Sciences and Software Engineering includes a set of rigorously reviewed world-class manuscripts addressing and detailing state-of-the-art research projects in the areas of Computer Science, Software Engineering, Computer Engineering, and Systems Engineering and Sciences. Innovations and Advanced Techniques in Systems, Computing Sciences and Software Engineering includes selected papers from the conference proceedings of the International Conference on Systems, Computing Sciences and Software Engineering (SCSS 2007), which was part of the International Joint Conferences on Computer, Information and Systems Sciences and Engineering (CISSE 2007).
    Classification
    ST 230
    Content
    Contents: Image and Pattern Recognition: Compression, Image Processing, Signal Processing Architectures, Signal Processing for Communication, Signal Processing Implementation, Speech Compression, and Video Coding Architectures. Languages and Systems: Algorithms, Databases, Embedded Systems and Applications, File Systems and I/O, Geographical Information Systems, Kernel and OS Structures, Knowledge Based Systems, Modeling and Simulation, Object Based Software Engineering, Programming Languages, and Programming Models and Tools. Parallel Processing: Distributed Scheduling, Multiprocessing, Real-time Systems, Simulation Modeling and Development, and Web Applications. New Trends in Computing: Computers for People of Special Needs, Fuzzy Inference, Human Computer Interaction, Incremental Learning, Internet-based Computing Models, Machine Intelligence, Natural Language Processing, Neural Networks, and Online Decision Support System
    LCSH
    Communications Engineering, Networks
    Software Engineering/Programming and Operating Systems
    Software engineering
    RSWK
    Computerarchitektur / Software Engineering / Telekommunikation / Online-Publikation
    RVK
    ST 230
    Subject
    Computerarchitektur / Software Engineering / Telekommunikation / Online-Publikation
    Communications Engineering, Networks
    Software Engineering/Programming and Operating Systems
    Software engineering
  2. Handbuch Internet-Suchmaschinen [1] : Nutzerorientierung in Wissenschaft und Praxis (2009) 0.06
    0.059998613 = product of:
      0.14999653 = sum of:
        0.010003355 = weight(_text_:23 in 329) [ClassicSimilarity], result of:
          0.010003355 = score(doc=329,freq=2.0), product of:
            0.07217676 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02013827 = queryNorm
            0.13859524 = fieldWeight in 329, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02734375 = fieldNorm(doc=329)
        0.048943985 = weight(_text_:monographien in 329) [ClassicSimilarity], result of:
          0.048943985 = score(doc=329,freq=4.0), product of:
            0.13425075 = queryWeight, product of:
              6.666449 = idf(docFreq=152, maxDocs=44218)
              0.02013827 = queryNorm
            0.36457142 = fieldWeight in 329, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.666449 = idf(docFreq=152, maxDocs=44218)
              0.02734375 = fieldNorm(doc=329)
        0.010003355 = weight(_text_:23 in 329) [ClassicSimilarity], result of:
          0.010003355 = score(doc=329,freq=2.0), product of:
            0.07217676 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02013827 = queryNorm
            0.13859524 = fieldWeight in 329, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02734375 = fieldNorm(doc=329)
        0.019877441 = weight(_text_:und in 329) [ClassicSimilarity], result of:
          0.019877441 = score(doc=329,freq=54.0), product of:
            0.044633795 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02013827 = queryNorm
            0.44534507 = fieldWeight in 329, product of:
              7.3484693 = tf(freq=54.0), with freq of:
                54.0 = termFreq=54.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02734375 = fieldNorm(doc=329)
        0.010003355 = weight(_text_:23 in 329) [ClassicSimilarity], result of:
          0.010003355 = score(doc=329,freq=2.0), product of:
            0.07217676 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02013827 = queryNorm
            0.13859524 = fieldWeight in 329, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02734375 = fieldNorm(doc=329)
        0.015202593 = product of:
          0.030405186 = sum of:
            0.030405186 = weight(_text_:allgemein in 329) [ClassicSimilarity], result of:
              0.030405186 = score(doc=329,freq=4.0), product of:
                0.10581345 = queryWeight, product of:
                  5.254347 = idf(docFreq=627, maxDocs=44218)
                  0.02013827 = queryNorm
                0.28734708 = fieldWeight in 329, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.254347 = idf(docFreq=627, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=329)
          0.5 = coord(1/2)
        0.020913143 = weight(_text_:methoden in 329) [ClassicSimilarity], result of:
          0.020913143 = score(doc=329,freq=2.0), product of:
            0.10436003 = queryWeight, product of:
              5.1821747 = idf(docFreq=674, maxDocs=44218)
              0.02013827 = queryNorm
            0.20039418 = fieldWeight in 329, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.1821747 = idf(docFreq=674, maxDocs=44218)
              0.02734375 = fieldNorm(doc=329)
        0.015049307 = weight(_text_:der in 329) [ClassicSimilarity], result of:
          0.015049307 = score(doc=329,freq=30.0), product of:
            0.044984195 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.02013827 = queryNorm
            0.33454654 = fieldWeight in 329, product of:
              5.477226 = tf(freq=30.0), with freq of:
                30.0 = termFreq=30.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.02734375 = fieldNorm(doc=329)
      0.4 = coord(8/20)
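At the top level, the explain output combines the clauses: the document score is the sum of the matching clause scores multiplied by the coordination factor coord(m, n) = m/n, which rewards documents matching more of the n query clauses. A hedged sketch using the clause scores listed for doc 329 above:

```python
# Clause scores exactly as listed in the explain tree for doc 329 (result 2)
clauses = [0.010003355, 0.048943985, 0.010003355, 0.019877441,
           0.010003355, 0.015202593, 0.020913143, 0.015049307]

def coord(matching, total):
    # ClassicSimilarity coordination factor: fraction of query clauses matched
    return matching / total

score = sum(clauses) * coord(8, 20)   # 0.4 = coord(8/20) in the tree
print(score)  # ≈ 0.059998613, the score shown for result 2
```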
    
    Abstract
    In this handbook, user orientation takes center stage. In 16 chapters, renowned authors from research and practice examine web search engines, which are the pioneers of changing user behavior. The behavior learned with Google and co. is carried over to other search systems: website search, intranet search, and searching in specialized search engines and subject databases. For every provider of information systems it is becoming increasingly important to understand, on the one hand, how search engines work and, on the other, to be familiar with the behavior of their users. On the academic side, the book addresses computer scientists, information scientists, media scholars, and library scientists. For developers, the handbook offers an overview of the options available for search systems, provides suggestions for implementation, and uses existing solutions to show what an implementation can look like. For decision-makers, professional searchers, and information brokers, the book offers readable survey articles on the relevant subject areas, on whose basis a strategy for individual search solutions can be worked out. A third group of practitioners includes consultants, teachers, journalists, and politicians who want to inform themselves about the most important topics surrounding search.
    Classification
    ST 205 Informatik / Monographien / Vernetzung, verteilte Systeme / Internet allgemein
    Content
    I. Suchmaschinenlandschaft: Der Markt für Internet-Suchmaschinen - Christian Maaß, Andre Skusa, Andreas Heß und Gotthard Pietsch; Typologie der Suchdienste im Internet - Joachim Griesbaum, Bernard Bekavac und Marc Rittberger; Spezialsuchmaschinen - Dirk Lewandowski; Suchmaschinenmarketing - Carsten D. Schultz
    II. Suchmaschinentechnologie: Ranking-Verfahren für Web-Suchmaschinen - Philipp Dopichaj; Programmierschnittstellen der kommerziellen Suchmaschinen - Fabio Tosques und Philipp Mayr; Personalisierung der Internetsuche - Lösungstechniken und Marktüberblick - Kai Riemer und Fabian Brüggemann
    III. Nutzeraspekte: Methoden der Erhebung von Nutzerdaten und ihre Anwendung in der Suchmaschinenforschung - Nadine Höchstötter; Standards der Ergebnispräsentation - Dirk Lewandowski und Nadine Höchstötter; Universal Search - Kontextuelle Einbindung von Ergebnissen unterschiedlicher Quellen und Auswirkungen auf das User Interface - Sonja Quirmbach; Visualisierungen bei Internetsuchdiensten - Thomas Weinhold, Bernard Bekavac, Sonja Hierl, Sonja Öttl und Josef Herget
    IV. Recht und Ethik: Datenschutz bei Suchmaschinen - Thilo Weichert; Moral und Suchmaschinen - Karsten Weber
    V. Vertikale Suche: Enterprise Search - Suchmaschinen für Inhalte im Unternehmen - Julian Bahrs; Wissenschaftliche Dokumente in Suchmaschinen - Dirk Pieper und Sebastian Wolf; Suchmaschinen für Kinder - Maria Zens, Friederike Silier und Otto Vollmers
    Date
    17. 9.2018 18:23:58
    Footnote
    Cf. also: http://www.bui.haw-hamburg.de/164.html (electronic resource). Rev. in: IWP 60(2009) H.3, S.177-178 (L. Weisel): "With the present handbook the editor, Prof. Dr. Dirk Lewandowski of the Hamburg University of Applied Sciences, wants, in his own words, to fill a gap. He has called upon renowned authors from different research communities to bring together their different perspectives on the topic of 'search engines on the Internet' in the form of survey articles. In this way he aims, with this volume, to foster exchange between the communities as well as between researchers and practitioners. . . . Recommendation: The handbook "Internet-Suchmaschinen" deserves a broad readership from the science and practice of searching and finding on the web; it should be part of the repertoire of all institutions training the next generation of professionals, in order to introduce them critically to the subject. The printed work will have to pay tribute to the topicality and pace of change in this very dynamic field. Instead of a prompt second edition, the editor and publisher are advised here to take the path of continuous supplementation: with the missing contributions mentioned above, but also with newly developing content - in the form of a living textbook - on a suitable electronic platform."
    RVK
    ST 205 Informatik / Monographien / Vernetzung, verteilte Systeme / Internet allgemein
  3. Spinning the Semantic Web : bringing the World Wide Web to its full potential (2003) 0.03
    0.031719923 = product of:
      0.10573307 = sum of:
        0.048943985 = weight(_text_:monographien in 1981) [ClassicSimilarity], result of:
          0.048943985 = score(doc=1981,freq=4.0), product of:
            0.13425075 = queryWeight, product of:
              6.666449 = idf(docFreq=152, maxDocs=44218)
              0.02013827 = queryNorm
            0.36457142 = fieldWeight in 1981, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.666449 = idf(docFreq=152, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1981)
        0.012256115 = weight(_text_:software in 1981) [ClassicSimilarity], result of:
          0.012256115 = score(doc=1981,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.15340936 = fieldWeight in 1981, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1981)
        0.012256115 = weight(_text_:software in 1981) [ClassicSimilarity], result of:
          0.012256115 = score(doc=1981,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.15340936 = fieldWeight in 1981, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1981)
        0.015202593 = product of:
          0.030405186 = sum of:
            0.030405186 = weight(_text_:allgemein in 1981) [ClassicSimilarity], result of:
              0.030405186 = score(doc=1981,freq=4.0), product of:
                0.10581345 = queryWeight, product of:
                  5.254347 = idf(docFreq=627, maxDocs=44218)
                  0.02013827 = queryNorm
                0.28734708 = fieldWeight in 1981, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.254347 = idf(docFreq=627, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1981)
          0.5 = coord(1/2)
        0.004818143 = product of:
          0.009636286 = sum of:
            0.009636286 = weight(_text_:29 in 1981) [ClassicSimilarity], result of:
              0.009636286 = score(doc=1981,freq=2.0), product of:
                0.070840135 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.02013827 = queryNorm
                0.13602862 = fieldWeight in 1981, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1981)
          0.5 = coord(1/2)
        0.012256115 = weight(_text_:software in 1981) [ClassicSimilarity], result of:
          0.012256115 = score(doc=1981,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.15340936 = fieldWeight in 1981, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1981)
      0.3 = coord(6/20)
    
    Abstract
    As the World Wide Web continues to expand, it becomes increasingly difficult for users to obtain information efficiently. Because most search engines read format languages such as HTML or SGML, search results reflect formatting tags more than actual page content, which is expressed in natural language. Spinning the Semantic Web describes an exciting new type of hierarchy and standardization that will replace the current "Web of links" with a "Web of meaning." Using a flexible set of languages and tools, the Semantic Web will make all available information - display elements, metadata, services, images, and especially content - accessible. The result will be an immense repository of information accessible for a wide range of new applications. This first handbook for the Semantic Web covers, among other topics, software agents that can negotiate and collect information, markup languages that can tag many more types of information in a document, and knowledge systems that enable machines to read Web pages and determine their reliability. The truly interdisciplinary Semantic Web combines aspects of artificial intelligence, markup languages, natural language processing, information retrieval, knowledge representation, intelligent agents, and databases.
    Classification
    ST 205 Informatik / Monographien / Vernetzung, verteilte Systeme / Internet allgemein
    Date
    29. 3.1996 18:16:49
    RVK
    ST 205 Informatik / Monographien / Vernetzung, verteilte Systeme / Internet allgemein
  4. ¬The Semantic Web : research and applications ; second European Semantic Web Conference, ESWC 2005, Heraklion, Crete, Greece, May 29 - June 1, 2005 ; proceedings (2005) 0.03
    0.031161685 = product of:
      0.12464674 = sum of:
        0.029713312 = weight(_text_:software in 439) [ClassicSimilarity], result of:
          0.029713312 = score(doc=439,freq=4.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.3719205 = fieldWeight in 439, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.046875 = fieldNorm(doc=439)
        0.029713312 = weight(_text_:software in 439) [ClassicSimilarity], result of:
          0.029713312 = score(doc=439,freq=4.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.3719205 = fieldWeight in 439, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.046875 = fieldNorm(doc=439)
        0.008259674 = product of:
          0.016519347 = sum of:
            0.016519347 = weight(_text_:29 in 439) [ClassicSimilarity], result of:
              0.016519347 = score(doc=439,freq=2.0), product of:
                0.070840135 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.02013827 = queryNorm
                0.23319192 = fieldWeight in 439, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=439)
          0.5 = coord(1/2)
        0.029713312 = weight(_text_:software in 439) [ClassicSimilarity], result of:
          0.029713312 = score(doc=439,freq=4.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.3719205 = fieldWeight in 439, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.046875 = fieldNorm(doc=439)
        0.027247135 = product of:
          0.05449427 = sum of:
            0.05449427 = weight(_text_:engineering in 439) [ClassicSimilarity], result of:
              0.05449427 = score(doc=439,freq=4.0), product of:
                0.10819342 = queryWeight, product of:
                  5.372528 = idf(docFreq=557, maxDocs=44218)
                  0.02013827 = queryNorm
                0.5036745 = fieldWeight in 439, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.372528 = idf(docFreq=557, maxDocs=44218)
                  0.046875 = fieldNorm(doc=439)
          0.5 = coord(1/2)
      0.25 = coord(5/20)
    
    LCSH
    Software engineering
    Subject
    Software engineering
  5. International yearbook of library and information management : 2001/2002 information services in an electronic environment (2001) 0.03
    0.026554614 = product of:
      0.13277307 = sum of:
        0.04001342 = weight(_text_:23 in 1381) [ClassicSimilarity], result of:
          0.04001342 = score(doc=1381,freq=2.0), product of:
            0.07217676 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02013827 = queryNorm
            0.55438095 = fieldWeight in 1381, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.109375 = fieldNorm(doc=1381)
        0.04001342 = weight(_text_:23 in 1381) [ClassicSimilarity], result of:
          0.04001342 = score(doc=1381,freq=2.0), product of:
            0.07217676 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02013827 = queryNorm
            0.55438095 = fieldWeight in 1381, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.109375 = fieldNorm(doc=1381)
        0.04001342 = weight(_text_:23 in 1381) [ClassicSimilarity], result of:
          0.04001342 = score(doc=1381,freq=2.0), product of:
            0.07217676 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02013827 = queryNorm
            0.55438095 = fieldWeight in 1381, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.109375 = fieldNorm(doc=1381)
        0.012732802 = product of:
          0.038198404 = sum of:
            0.038198404 = weight(_text_:22 in 1381) [ClassicSimilarity], result of:
              0.038198404 = score(doc=1381,freq=2.0), product of:
                0.07052079 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02013827 = queryNorm
                0.5416616 = fieldWeight in 1381, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1381)
          0.33333334 = coord(1/3)
      0.2 = coord(4/20)
    
    Date
    25. 3.2003 13:22:23
  6. Advances in librarianship (2000) 0.03
    0.025464386 = product of:
      0.16976257 = sum of:
        0.05658752 = weight(_text_:23 in 4697) [ClassicSimilarity], result of:
          0.05658752 = score(doc=4697,freq=4.0), product of:
            0.07217676 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02013827 = queryNorm
            0.78401303 = fieldWeight in 4697, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.109375 = fieldNorm(doc=4697)
        0.05658752 = weight(_text_:23 in 4697) [ClassicSimilarity], result of:
          0.05658752 = score(doc=4697,freq=4.0), product of:
            0.07217676 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02013827 = queryNorm
            0.78401303 = fieldWeight in 4697, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.109375 = fieldNorm(doc=4697)
        0.05658752 = weight(_text_:23 in 4697) [ClassicSimilarity], result of:
          0.05658752 = score(doc=4697,freq=4.0), product of:
            0.07217676 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02013827 = queryNorm
            0.78401303 = fieldWeight in 4697, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.109375 = fieldNorm(doc=4697)
      0.15 = coord(3/20)
    
    Issue
    Vol.23.
    Signature
    78 BAHH 1089-23
  7. Semantic Web services challenge : results from the first year (2009) 0.02
    0.022639439 = product of:
      0.090557754 = sum of:
        0.021010485 = weight(_text_:software in 2479) [ClassicSimilarity], result of:
          0.021010485 = score(doc=2479,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.2629875 = fieldWeight in 2479, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.046875 = fieldNorm(doc=2479)
        0.021010485 = weight(_text_:software in 2479) [ClassicSimilarity], result of:
          0.021010485 = score(doc=2479,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.2629875 = fieldWeight in 2479, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.046875 = fieldNorm(doc=2479)
        0.008259674 = product of:
          0.016519347 = sum of:
            0.016519347 = weight(_text_:29 in 2479) [ClassicSimilarity], result of:
              0.016519347 = score(doc=2479,freq=2.0), product of:
                0.070840135 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.02013827 = queryNorm
                0.23319192 = fieldWeight in 2479, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2479)
          0.5 = coord(1/2)
        0.021010485 = weight(_text_:software in 2479) [ClassicSimilarity], result of:
          0.021010485 = score(doc=2479,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.2629875 = fieldWeight in 2479, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.046875 = fieldNorm(doc=2479)
        0.019266631 = product of:
          0.038533263 = sum of:
            0.038533263 = weight(_text_:engineering in 2479) [ClassicSimilarity], result of:
              0.038533263 = score(doc=2479,freq=2.0), product of:
                0.10819342 = queryWeight, product of:
                  5.372528 = idf(docFreq=557, maxDocs=44218)
                  0.02013827 = queryNorm
                0.35615164 = fieldWeight in 2479, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.372528 = idf(docFreq=557, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2479)
          0.5 = coord(1/2)
      0.25 = coord(5/20)
    
    Abstract
    Service-Oriented Computing is one of the most promising software engineering trends for future distributed systems. Currently there are many different approaches to semantic web service descriptions and many frameworks built around them. Yet a common understanding, evaluation scheme, and test bed to compare and classify these frameworks in terms of their abilities and shortcomings is still missing. "Semantic Web Services Challenge" is an edited volume that develops this common understanding of the various technologies intended to facilitate the automation of mediation, choreography and discovery for Web Services using semantic annotations. "Semantic Web Services Challenge" is designed for a professional audience composed of practitioners and researchers in industry. Professionals can use this book to evaluate SWS technology for their potential practical use. The book is also suitable for advanced-level students in computer science.
    Date
    13.12.2008 11:34:29
  8. Software for Indexing (2003) 0.02
    Footnote
    Rez. in: Knowledge organization 30(2003) no.2, S.115-116 (C. Jacobs): "This collection of articles by indexing practitioners, software designers and vendors is divided into five sections: Dedicated Software, Embedded Software, Online and Web Indexing Software, Database and Image Software, and Voice-activated, Automatic, and Machine-aided Software. This diversity is its strength. Part 1 is introduced by two chapters on choosing dedicated software, highlighting the issues involved and providing tips on evaluating requirements. The second chapter includes a fourteen-page chart that analyzes the attributes of Authex Plus, three versions of CINDEX 1.5, MACREX 7, two versions of SKY Index (5.1 and 6.0) and wINDEX. The lasting value in this chart is its utility in making the prospective user aware of the various attributes/capabilities that are possible and that should be considered. The following chapters consist of 16 testimonials for these software packages, completed by a final chapter on specialized/customized software. The point is made that if a particular software function could increase your efficiency, it can probably be created. The chapters in Part 2, Embedded Software, go into a great deal more detail about how the programs work, and are less reviews than illustrations of functionality. Perhaps this is because they are not really stand-alones, but are functions within, or add-ons used with, larger word processing or publishing programs. The software considered are Microsoft Word, FrameMaker, PageMaker, IndexTension 3.1.5 that is used with QuarkXPress, and Index Tools Professional and IXgen that are used with FrameMaker. The advantages and disadvantages of embedded indexing are made very clear, but the actual illustrations are difficult to follow if one has not worked at all with embedded software. Nonetheless, the section is valuable as it highlights issues and provides pointers and solutions to embedded indexing problems.
    Part 3, Online and Web Indexing Software, opens with a chapter in which the functionalities of HTML/Prep, HTML Indexer, and RoboHELP HTML Edition are compared. The following three chapters look at them individually. This section helps clarify the basic types of non-database web indexing - that used for back-of-the-book style indexes, and that used for online help indexes. The first chapter of Part 4, Database and Image Software, begins with a good discussion of what database indexing is, but fails to carry through with any listing of general characteristics, problems and attributes that should be considered when choosing database indexing software. It does include the results of an informal survey on the Yahoogroups database indexing site, as well as three short case studies on database indexing projects. The survey provides interesting information about freelancing, but it is not very useful if you are trying to gather information about different software. For example, the most common type of software used by those surveyed turns out to be word-processing software. This seems an odd/awkward choice, and it would have been helpful to know how and why the non-specialized software is being used. The survey serves as a snapshot of a particular segment of database indexing practice, but is not helpful if you are thinking about purchasing, adapting, or commissioning software. The three case studies give an idea of the complexity of database indexing and there is a helpful bibliography.
    A chapter on image indexing starts with a useful discussion of the elements of bibliographic description needed for visual materials and of the variations in the functioning and naming of functions in different software packages. Sample features are discussed in light of four different software systems: MAVIS, Convera Screening Room, CONTENTdm, and Virage speech and pattern recognition programs. The chapter concludes with an overview of what one has to consider when choosing a system. The last chapter in this section is an oddball one on creating a back-of-the-book index using Microsoft Excel. The author warns: "It is not pretty, and it is not recommended" (p.209). A curiosity, but it should have been included as a counterpoint in the first part, not as part of the database indexing section. The final section begins with an excellent article on voice recognition software (Dragon Naturally Speaking Preferred), followed by a look at "automatic indexing" through a critique of Sonar Bookends Automatic Indexing Generator. The final two chapters deal with Data Harmony's Machine Aided Indexer; one of them refers specifically to a news content indexing system. In terms of scope, this reviewer would have liked to see thesaurus management software included, since thesaurus management and the integration of thesauri with database indexing software are common and time-consuming concerns. There are also a few editorial glitches, such as the placement of the oddball article and inconsistent uses of fonts and caps (e.g. VIRAGE and Virage), but achieving consistency with this many authors is, indeed, a difficult task. More serious is the fact that the index is inconsistent. It reads as if authors submitted their own keywords which were then harmonized, so that the level of indexing varies by chapter. For example, there is an entry for "controlled vocabulary" (p.265) (singular) with one locator, no cross-references.
There is an entry for "thesaurus software" (p.274) with two locators, plus a separate one for "Thesaurus Master" (p.274) with three locators. There are also references to thesauri/controlled vocabularies/taxonomies that are not mentioned in the index (e.g., the section Thesaurus management on p.204). This is sad. All too often indexing texts have poor indexes, I suppose because we are as prone to having to work under time pressures as the rest of the authors and editors in the world. But a good index that meets basic criteria should be a highlight in any book related to indexing. Overall this is a useful, if uneven, collection of articles written over the past few years. Because of the great variation between articles both in subject and in approach, there is something for everyone. The collection will be interesting to anyone who wants to be aware of how indexing software works and what it can do. I also definitely recommend it for information science teaching collections, since the explanations of the software carry implicit in them descriptions of how the indexing process itself is approached. However, the book's utility as a guide to purchasing choices is limited because of the unevenness; the vendor-written articles and testimonials are interesting and can certainly be helpful, but there are not nearly enough objective reviews. This is not a straight listing and comparison of software packages, but it deserves wide circulation since it presents an overall picture of the state of indexing software used by freelancers."
  9. Information science in transition (2009) 0.02
    Abstract
    Are we at a turning point in digital information? The expansion of the internet was unprecedented; search engines dealt with it in the only way possible - scan as much as they could and throw it all into an inverted index. But now search engines are beginning to experiment with deep web searching and attention to taxonomies, and the semantic web is demonstrating how much more can be done with a computer if you give it knowledge. What does this mean for the skills and focus of the information science (or sciences) community? Should information designers and information managers work more closely to create computer based information systems for more effective retrieval? Will information science become part of computer science and does the rise of the term informatics demonstrate the convergence of information science and information technology - a convergence that must surely develop in the years to come? Issues and questions such as these are reflected in this monograph, a collection of essays written by some of the most pre-eminent contributors to the discipline. These peer reviewed perspectives capture insights into advances in, and facets of, information science, a profession in transition. 
With an introduction from Jack Meadows, the key papers are: Meeting the challenge, by Brian Vickery; The developing foundations of information science, by David Bawden; The last 50 years of knowledge organization, by Stella G Dextre Clarke; On the history of evaluation in IR, by Stephen Robertson; The information user, by Tom Wilson; The sociological turn in information science, by Blaise Cronin; From chemical documentation to chemoinformatics, by Peter Willett; Health informatics, by Peter A Bath; Social informatics and sociotechnical research, by Elisabeth Davenport; The evolution of visual information retrieval, by Peter Enser; Information policies, by Elizabeth Orna; Disparity in professional qualifications and progress in information handling, by Barry Mahon; Electronic scholarly publishing and open access, by Charles Oppenheim; Social software: fun and games, or business tools? by Wendy A Warr; and Bibliometrics to webometrics, by Mike Thelwall. This monograph previously appeared as a special issue of the "Journal of Information Science", published by Sage. Reproduced here as a monograph, this important collection of perspectives on a skill set in transition from a prestigious line-up of authors will now be available to information studies students worldwide and to all those working in the information science field.
    Content
    Contents: Fifty years of UK research in information science - Jack Meadows / Smoother pebbles and the shoulders of giants: the developing foundations of information science - David Bawden / The last 50 years of knowledge organization: a journey through my personal archives - Stella G. Dextre Clarke / On the history of evaluation in IR - Stephen Robertson / The information user: past, present and future - Tom Wilson / The sociological turn in information science - Blaise Cronin / From chemical documentation to chemoinformatics: 50 years of chemical information science - Peter Willett / Health informatics: current issues and challenges - Peter A. Bath / Social informatics and sociotechnical research - a view from the UK - Elisabeth Davenport / The evolution of visual information retrieval - Peter Enser / Information policies: yesterday, today, tomorrow - Elizabeth Orna / The disparity in professional qualifications and progress in information handling: a European perspective - Barry Mahon / Electronic scholarly publishing and Open Access - Charles Oppenheim / Social software: fun and games, or business tools? - Wendy A. Warr / Bibliometrics to webometrics - Mike Thelwall / How I learned to love the Brits - Eugene Garfield
    Date
    22. 2.2013 11:35:35
    Footnote
    Rez. in: Mitt VÖB 62(2009) H.3, S.95-99 (O. Oberhauser): "This handsome volume brings together 16 contributions and two editorials that first appeared in 2008 as a special issue of the Journal of Information Science - at the time marking the 50th anniversary of the founding of the Institute of Information Scientists (IIS), which has not existed as an independent body since 2002. Generally speaking, the essays reflect the state of information science then, now, and over the course of these 50 years, with an emphasis on developments in the United Kingdom. The contributors are established and renowned representatives of British information science and practice - the sole exception being Eugene Garfield (USA), who closes the volume with personal reminiscences. With the present reissue of this collection as a hardcover publication, the editor and publisher aimed above all to reach a wider readership, but also to give libraries that hold the journal in their collections the option of additionally shelving the work as a monograph. . . . The question remains whether republication as a book is justified. In terms of content the volume is unquestionably compelling. Anyone interested in information science will profit from the texts found here. And of course it is convenient to have a solid book publication in hand, one that in many libraries - unlike the journal volume - can also be borrowed. Everything else is really just a question of budget." Further reviews in: IWP 61(2010) H.2, S.148 (L. Weisel); JASIST 61(2010) no.7, S.1505 (M. Buckland); KO 38(2011) no.2, S.171-173 (P. Matthews): "Armed then with tools and techniques often applied to the structural analysis of other scientific fields, this volume frequently sees researchers turning this lens on themselves and ranges in tone from the playfully reflexive to the (parentally?) overprotective. What is in fact revealed is a rather disparate collection of research areas, all making a valuable contribution to our understanding of the nature of information. As is perhaps the tendency with overzealous lumpers (see http://en.wikipedia.org/wiki/Lumpers_and_splitters), some attempts to bring these areas together seem a little forced. The splitters help draw attention to quite distinct specialisms, IS's debts to other fields, and the ambition of some emerging subfields to take up intellectual mantles established elsewhere. In the end, the multidisciplinary nature of information science shines through. With regard to future directions, the subsumption of IS into computer science is regarded as in many ways inevitable, although there is consensus that the distinct infocentric philosophy and outlook which has evolved within IS is something to be retained." A further review in: KO 39(2012) no.6, S.463-465 (P. Matthews)
    RSWK
    Informations- und Dokumentationswissenschaft / Aufsatzsammlung
    Subject
    Informations- und Dokumentationswissenschaft / Aufsatzsammlung
  10. FRBR: hype, or cure-all? (2004) 0.02
    Content
    Contents: Introduction by Patrick Le Boeuf; The Origins of the IFLA Study on Functional Requirements of Bibliographic Records by Olivia M. A. Madison; Extending FRBR to Authorities by Glenn E. Patton; Modeling Subject Access Extending the FRBR and FRANAR Conceptual Models by Tom Delsey; Towards an implementation model for library catalogs using semantic web technology by Stefan Gradmann; Cataloguing of hand press materials and the concept of expression in FRBR by Gunilla Jonsson; The AustLit Gateway and Scholarly Bibliography: A Specialist Implementation of the FRBR by Kerry Kilner; Musical works in the FRBR model or "Quasi la stessa cosa": variations on a theme by Umberto Eco by Patrick Le Boeuf; PARADIGMA: FRBR and Digital Documents by Ketil Albertsen, Carol van Nuys; "Such stuff as dreams are made on": How does FRBR fit performing arts? by David Miller, Patrick Le Boeuf; Folklore Requirements for Bibliographic Records: Oral Traditions and FRBR by Yann Nicolas; FRBR and Cataloging for the Future by Barbara B. Tillett; Slovenian cataloguing practice and Functional requirements for bibliographic records: a comparative analysis by Zlata Dimec, Maja Zumer, Gerhard J.A. Riesthuis; Implementation of FRBR: European research initiative by Maja Zumer; FRBRizing OCLC's WorldCat by Thomas B. Hickey, Edward T. O'Neill; Implementing the FRBR conceptual approach in the ISIS software environment: IFPA (ISIS FRBR Prototype Application) by Roberto Sturman; FRBR Display Tool by Jackie Radebaugh and Corey Keith; XOBIS: an Experimental Schema for Unifying Bibliographic and Authority Records by Dick R. Miller
    Date
    5. 8.2006 19:29:09
    Footnote
    Cf. the individual contributions under the book edition: Functional Requirements for Bibliographic Records (FRBR): hype or cure-all. Ed. by P. le Boeuf. Binghamton, NY: Haworth 2004.
  11. Individual differences in virtual environments (2000) 0.02
    Date
    5. 4.2000 12:19:23
  12. Web intelligence: research and development : First Asia-Pacific Conference, WI 2001, Maebashi City, Japan, Oct. 23-26, 2001, Proceedings (2003) 0.01
    Footnote
    Rez. in: nfd - Information 54(2003) H.6, S.378-379 (T. Mandl): "Im Oktober 2001 fand erstmals eine Tagung mit dem Titel "Web Intelligence" statt. Ist dies nun eine neue Disziplin oder der Versuch analog zu "Artificial Intelligence" und "Computational Intelligence" ein neues Modewort zu kreieren? Geht es um den Einsatz sogenannter intelligenter Verfahren, um mit dem Internet umgehen zu können oder erscheint das Internet als "emerging global brain" (Goertzel 2002), also als eine unerschöpfliche Quelle von Wissen, die nur geschickt ausgebeutet werden muss? Kommt also die Intelligenz aus dem Web oder dient die Intelligenz als Werkzeug für das Web? Der Tagungsband ist seit Anfang 2003 verfügbar und bietet nun den Anlass, diesen Begriff anhand der darin präsentierten Inhalte zu bewerten. Die Herausgeber führen in ihrem einleitenden Artikel gleich die Abkürzung WI ein und versuchen tatsächlich "Web Intelligence" als neue Sub-Disziplin der Informatik zu etablieren. Zu diesem Zweck greifen sie auch auf die Anzahl der Nachweise für diese Phrase in Suchmaschinen zu. Zwar lieferten die Systeme angeblich Zahlen von über einer Million (S. 4), aber dies überzeugt sicher noch niemanden, das Studium der WI aufzunehmen. Allerdings weist dieses Vorgehen schon auf einen Kern der WI hin: man versucht, aus dem im Web gespeicherten Wissen neues Wissen zu generieren. Damit wäre man sehr nahe am Data oder eben Web-Mining, jedoch geht die Definition der Autoren darüber hinaus. Sie wollen WI verstanden wissen als die Anwendung von Künstlicher Intelligenz sowie Informationstechnologie im Internet (S. 2). Da nun Künstliche Intelligenz bei allen Meinungsverschiedenheiten sicherlich nicht ohne Informationstechnologie denkbar ist, wirkt die Definition nicht ganz überzeugend. Allerdings beschwichtigen die Autoren im gleichen Atemzug und versichern, diese Definition solle ohnehin keine Forschungsrichtung ausschließen. Somit bietet sich eher eine Umfangsdefinition an. 
Diese solle die wichtigsten Stoßrichtungen des Buchs und damit auch der Tagung umfassen. Als Ausgangspunkt dient dazu auch eine Liste der Herausgeber (S. 7f.), die hier aber etwas modifiziert wird: - Grundlagen von Web Informationssystemen (Protokolle, Technologien, Standards) - Web Information Retrieval, WebMining und Farming - Informationsmanagement unter WebBedingungen - Mensch-Maschine Interaktion unter Web-Bedingungen (hier "HumanMedia Engineering" S. XII) Eine grobe Einteilung wie diese ist zwar übersichtlich, führt aber zwangsläufig zu Ouerschnittsthemen. In diesem Fall zählt dazu das Semantic Web, an dem momentan sehr intensiv geforscht wird. Das Semantic Web will das Unbehagen mit der Anarchie im Netz und daraus folgenden Problemen für die Suchmaschinen überwinden, indem das gesamte Wissen im Web auch explizit als solches gekennzeichnet wird. Tauchen auf einer WebSeite zwei Namen auf und einer ist der des Autors und der andere der eines Sponsors, so erlauben neue Technologien, diese auch als solche zu bezeichnen. Noch wichtiger, wie in einer Datenbank sollen nun Abfragen möglich sein, welche andere Seiten aus dem Web liefen, die z.B. den gleichen Sponsor, aber einen anderen Autor haben. Dieser Thematik widmen sich etwa Hendler & Feigenbaum. Das Semantic Web stellt ein Ouerschnittsthema dar, da dafür neue Technologien (Mizoguchi) und ein neuartiges Informationsmanagement erforderlich sind (z.B. Stuckenschmidt & van Harmelen), die Suchverfahren angepasst werden und natürlich auch auf die Benutzer neue Herausforderungen zukommen. Diesem Aspekt, inwieweit Benutzer solche Anfragen überhaupt stellen werden, widmet sich in diesem Band übrigens niemand ernsthaft. Im Folgenden sollen die einzelnen Themengebiete anhand der im Band enthaltenen Inhalte näher bestimmt werden, bevor abschließend der Versuch eines Resümees erfolgt.
    - Foundations of Web information systems: Protocols, technologies and standards now exist in abundance, and new foundations emerge only for specific applications. The present volume includes, for instance, a data model for XML databases (Wuwongse et al.) and a proposal for 3D modelling (Hwang, Lee & Hwang). New algorithms are also developed for proxy servers (Aguilar & Leiss). - Web information retrieval, Web mining and farming: Alongside classical information retrieval topics such as controlled vocabulary (Sim & Wong), ranking (Wang & Maguire), categorization (Loia & Luongo) and term expansion (Huang, Oyang & Chien) stand topics typical of Web information retrieval. Multimedia retrieval plays an important role in the Web, with contributions on audio (Wieczorkowska & Ra-Wan, Liu & Wang) and graphics (Fukumoto & Cho, Hwang, Lee & Hwang). The hype topic of link analysis also bridges over to Web mining, though with five contributions it is rather underrepresented. Link analysis asks what can be inferred from the by now probably more than ten billion links on the Internet. Two contributions, for example, extract the temporal change of social structures in Web communities. Matsumura et al. examine whether outsiders also take an interest in the topics discussed within a community, and treat this as a measure of a topic's spread. Bun & Ishizuka are interested only in the changes within a group of thematically related Web offerings, and analyse in this corpus the key sentences that best represent newly emerging topics. Other mining contributions deal with the construction of language resources (Chau & Yeh). 
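The kind of link analysis mentioned here, inferring importance from raw links, can be sketched with a plain PageRank power iteration. The four-page graph and the damping factor are illustrative assumptions, not from any paper in the volume:

```python
# Toy link analysis: which page attracts the most link weight?
links = {          # page -> pages it links to (invented example graph)
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}
pages = sorted(links)
d = 0.85                                  # standard damping factor
rank = {p: 1.0 / len(pages) for p in pages}
for _ in range(50):                       # power iteration to convergence
    new = {p: (1 - d) / len(pages) for p in pages}
    for p, outs in links.items():
        for q in outs:
            new[q] += d * rank[p] / len(outs)
    rank = new
best = max(rank, key=rank.get)
print(best)  # -> c  (three pages link to it)
```

Since every page here has outgoing links, the ranks remain a probability distribution; real Web graphs need extra handling for dangling pages.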
- Information management under Web conditions: For information management, ontologies describing the available knowledge count as an important instrument, and accordingly "ontology" is a strong candidate for the most frequent word in the proceedings.
    E-learning remains another important aspect; among other things it places new demands on the creation and administration of learning modules (Forcheri et al.) and on the cooperation of teachers and learners (Hazeyama et al., Liu et al.). - Human-machine interaction under Web conditions: User modelling (Estivill-Castro & Yang, Lee, Sung & Cho) has gained a new dimension with the popularity of the Internet and is of particular interest in the commercial sphere. One source of knowledge for this, and for other applications, are log files (Yang et al.). Visualizations take up considerable space; they are often aimed at particular user groups, such as data mining specialists (Han & Cercone) or sociologists studying Web communities (Sumi & Mase). Agents (Lee) and assistants (Molina) appear as new forms of interaction, not least in e-commerce applications. In this context of the human-media relationship the cross-cutting topic of Web communities deserves mention, in which the social aspects of cooperation (Hazeyama et al.) as well as the discovery of group structures (Bun & Ishizuka) are investigated. By contrast, there are hardly any empirical evaluations that could show just how intelligent the systems actually are. Where, then, does the core of Web Intelligence lie? The Web mining aspect is concerned with extracting knowledge from the vast reservoir of the Internet, while the Web information systems aspect deals with the use of so-called intelligent technologies in information systems on the Internet. But since the spectrum of information systems involved is practically arbitrary, and the selection of intelligent technologies likewise reveals no specific focus, Web Intelligence at present amounts to a motley bouquet rather than a clearly delimited field. The Web itself has by now become almost useless as a way of demarcating technologies. 
The contributions are shaped more by their authors' own communities than by a Web Intelligence community, which perhaps does not yet exist. If it does, it is at an early stage, with few commonalities yet discernible between the papers. This lack of coherence, however, by no means makes the individual contributions uninteresting. The 57 members of the conference's programme committee, among them three German researchers, evidently share this view: a further conference is planned for 2003 (http://www.comp.hkbu.edu.hk/WIo3/)."
  13. Research methods for students, academics and professionals : information management and systems (2002)
    Date
    23. 3.2008 19:31:36
    Footnote
    Rez. in: JASIST 54(2003) no.10, S.982-983 (L. Schamber): "This book is the most recent of only about half a dozen research methods textbooks published for information science since 1980. Like the others, it is directed toward students and information professionals at an introductory level. Unlike the others, it describes an unusually wide variety of research methods, especially qualitative methods. This book is Australian, with a concern for human behavior in keeping with that country's reputation for research in the social sciences and development of qualitative data analysis software. The principal author is Kirsty Williamson, who wrote or co-wrote half the chapters. Eleven other authors contributed: Amanda Bow, Frada Burstein, Peta Darke, Ross Harvey, Graeme Johanson, Sue McKemmish, Majola Oosthuizen, Solveiga Saule, Don Schauder, Graeme Shanks, and Kerry Tanner. These writers, most of whom are affiliated with Monash University or Charles Sturt University, represent multidisciplinary and international backgrounds. The field they call information management and systems merges interests of information management or information studies (including librarianship, archives, and records management), and information systems, a subdiscipline of computing that focuses on information and communication technologies. The stated purpose of the book is to help information professionals become informed and critical consumers of research, not necessarily skilled researchers. It is geared toward explaining not only methodology, but also the philosophy, relevance, and process of research as a whole. The Introduction and Section 1 establish these themes. Chapter 1, on research and professional practice, explains the value of research for solving practical problems, maintaining effective services, demonstrating accountability, and generally contributing to useful knowledge in the field. 
Chapter 2, on major research traditions, presents a broad picture of positivist and interpretivist paradigms, along with a middle ground of post-positivism, in such a way as to help the new researcher grasp the assumptions underlying research. Woven into this chapter is an explanation of how quantitative and qualitative methods complement each other, and how methodological triangulation provides confirmatory benefits. Chapter 3 offers instructions for beginning a research project, from development of the research problem, questions, and hypotheses to understanding the role of theory and synthesizing the literature review. Chapter 4, on research ethics, covers unethical use of power positions by researchers, falsifying data, and plagiarism, along with general information on human subjects protections and roles of ethics committees. It includes intriguing examples of ethics cases to stimulate discussion.
    Sections 2 and 3 make a key distinction between research methods, which encompass the theories and purposes underlying research design, and research techniques, which are specific means for collecting data. The rationale is that one research technique, such as interviewing, may be appropriate for more than one research method, such as survey or case study. In Section 2, eight chapters describe survey, case study, experimental, system development, action, ethnography, historical, and Delphi research methods. The methods progress roughly from most to least used in information science, and for the least used, the authors take pains to elucidate the means to achieving methodological rigor. Chapter 8 presents a noteworthy argument for legitimizing system development as a valid methodological approach within the larger context of information systems research. System development is seen as belonging to the cycle of theory to practice required to create effective information systems, a cycle that emphasizes human and social aspects as a necessary counterpoint to the obvious technological aspects. The four chapters in Section 3 discuss specific techniques that may be used with different methods. Chapter 13, on sampling, summarizes probability and nonprobability sampling techniques and when they are appropriate. Chapter 14 describes the two most common data-collection techniques, questionnaires and interviews, and looks at their respective uses. Chapter 15 covers focus groups and Chapter 16 ethnographic techniques, including participant observation. Throughout Sections 2 and 3, attention is paid to the subtleties of collecting data from people, such as ways to obtain access and avoid major types of biases. In Section 4, on data analysis, only Chapter 17 deals directly with analyzing quantitative and qualitative data. It does so in limited space by describing the general process for handling each type of data. 
This is followed by evaluating research publications in Chapter 18, which offers valuable advice for critically assessing studies that employ different methods. The last part of the book is a postscript with seven questions that invite readers to reflect on issues of focus and ethics, to become aware of their responsibility for approaching research conscientiously. Although these three parts together do not constitute a unified conclusion, each does provide thematic closure for preceding chapters. Writing a book of this sort presents certain challenges that the authors have conspired to tackle through organization as well as content. One of these challenges is presenting vital and pervasive research issues. These are nicely bounded by the structure of the book, with philosophical, social, and ethical considerations introduced in Section 1, revisited in middle chapters, and reinforced in the postscript. A second challenge is untangling the complexities of interrelated research methods. Here the strategy of distinguishing between research methods and research techniques is carefully explained, but admittedly strained. In separate chapters, for instance, survey is presented as method, and questionnaire (commonly called survey) as technique; ethnography as method and as multiple techniques; Delphi as method when it is also technique; and focus group as technique when it is also method. A third challenge is deciding where to stop in a book of medium length. The introduction states that bibliometrics and content analysis are omitted, although Chapter 17, on data analysis, does cover some content analytic techniques under the heading of qualitative analysis. And while software packages for analyzing quantitative and qualitative data are mentioned, computer-based techniques for data collection, such as transaction logs, are not. 
Generally, the authors favor discussion of more obtrusive approaches to data collection (excepting historical) and their concomitant issues of human interaction.
  14. Developments in applied artificial intelligence : proceedings / 16th International Conference on Industrial and Engineering Applications of Artificial Intelligence and Expert Systems, IEA/AIE 2003, Loughborough, UK, June 23 - 26, 2003 (2003)
    Abstract
    This book constitutes the refereed proceedings of the 16th International Conference on Industrial and Engineering Applications of Artificial Intelligence and Expert Systems, IEA/AIE 2003, held in Loughborough, UK in June 2003. The 81 revised full papers presented were carefully reviewed and selected from more than 140 submissions. Among the topics addressed are soft computing, fuzzy logic, diagnosis, knowledge representation, knowledge management, automated reasoning, machine learning, planning and scheduling, evolutionary computation, computer vision, agent systems, algorithmic learning, tutoring systems, financial analysis, etc.
  15. Working with conceptual structures : contributions to ICCS 2000. 8th International Conference on Conceptual Structures: Logical, Linguistic, and Computational Issues. Darmstadt, August 14-18, 2000 (2000)
    Content
    Concepts & Language: Knowledge organization by procedures of natural language processing. A case study using the method GABEK (J. Zelger, J. Gadner) - Computer aided narrative analysis using conceptual graphs (H. Schärfe, P. Øhrstrøm) - Pragmatic representation of argumentative text: a challenge for the conceptual graph approach (H. Irandoust, B. Moulin) - Conceptual graphs as a knowledge representation core in a complex language learning environment (G. Angelova, A. Nenkova, S. Boycheva, T. Nikolov) - Conceptual Modeling and Ontologies: Relationships and actions in conceptual categories (Ch. Landauer, K.L. Bellman) - Concept approximations for formal concept analysis (J. Saquer, J.S. Deogun) - Faceted information representation (U. Priß) - Simple concept graphs with universal quantifiers (J. Tappe) - A framework for comparing methods for using or reusing multiple ontologies in an application (J. van Zyl, D. Corbett) - Designing task/method knowledge-based systems with conceptual graphs (M. Leclère, F. Trichet, Ch. Choquet) - A logical ontology (J. Farkas, J. Sarbo) - Algorithms and Tools: Fast concept analysis (Ch. Lindig) - A framework for conceptual graph unification (D. Corbett) - Visual CP representation of knowledge (H.D. Pfeiffer, R.T. Hartley) - Maximal isojoin for representing software textual specifications and detecting semantic anomalies (Th. Charnois) - Troika: using grids, lattices and graphs in knowledge acquisition (H.S. Delugach, B.E. Lampkin) - Open world theorem prover for conceptual graphs (J.E. Heaton, P. Kocura) - NetCare: a practical conceptual graphs software tool (S. Polovina, D. Strang) - CGWorld - a web based workbench for conceptual graphs management and applications (P. Dobrev, K. Toutanova) - Position papers: The edition project: Peirce's existential graphs (R. Müller) - Mining association rules using formal concept analysis (N. Pasquier) - Contextual logic summary (R. Wille) - Information channels and conceptual scaling (K.E. Wolff) - Spatial concepts - a rule exploration (S. Rudolph) - The TEXT-TO-ONTO learning environment (A. Mädche, St. Staab) - Controlling the semantics of metadata on audio-visual documents using ontologies (Th. Dechilly, B. Bachimont) - Building the ontological foundations of a terminology from natural language to conceptual graphs with Ribosome, a knowledge extraction system (Ch. Jacquelinet, A. Burgun) - CharGer: some lessons learned and new directions (H.S. Delugach) - Knowledge management using conceptual graphs (W.K. Pun)
    Series
    Berichte aus der Informatik
  16. Towards the Semantic Web : ontology-driven knowledge management (2004)
    Abstract
    With the current changes driven by the expansion of the World Wide Web, this book uses a different approach from other books on the market: it applies ontologies to electronically available information to improve the quality of knowledge management in large and distributed organizations. Ontologies are formal theories supporting knowledge sharing and reuse. They can be used to explicitly represent the semantics of semi-structured information, which enables sophisticated automatic support for acquiring, maintaining and accessing information. Methodology and tools are developed for intelligent access to large volumes of semi-structured and textual information sources in intranet-, extranet- and internet-based environments, employing the full power of ontologies to support knowledge management from the perspectives of both the information client and the information provider. The aim of the book is to support efficient and effective knowledge management, and it focuses on weakly-structured online information sources. It is aimed primarily at researchers in the area of knowledge management and information retrieval and will also be a useful reference for students in computer science at the postgraduate level and for business managers who are aiming to improve their corporation's information infrastructure. The Semantic Web is a very important initiative affecting the future of the WWW that is currently generating huge interest. The book covers several highly significant contributions to the Semantic Web research effort, including a new language for defining ontologies, several novel software tools and a coherent methodology for the application of the tools for business advantage. It also provides three case studies which give examples of the real benefits to be derived from the adoption of Semantic Web based ontologies in "real world" situations. As such, the book is an excellent mixture of theory, tools and applications in an important area of WWW research. 
* Provides guidelines for introducing knowledge management concepts and tools into enterprises, to help knowledge providers present their knowledge efficiently and effectively. * Introduces an intelligent search tool that supports users in accessing information and a tool environment for maintenance, conversion and acquisition of information sources. * Discusses three large case studies which will help to develop the technology according to the actual needs of large and/or virtual organisations and will provide a testbed for evaluating tools and methods. The book is aimed at people with at least a good understanding of existing WWW technology and some level of technical understanding of the underpinning technologies (XML/RDF). It will be of interest to graduate students, academic and industrial researchers in the field, and the many industrial personnel who are tracking WWW technology developments in order to understand the business implications. It could also be used to support undergraduate courses in the area but is not itself an introductory text.
    Content
    Inhalt: OIL and DAML + OIL: Ontology Languages for the Semantic Web (pages 11-31) / Dieter Fensel, Frank van Harmelen and Ian Horrocks A Methodology for Ontology-Based Knowledge Management (pages 33-46) / York Sure and Rudi Studer Ontology Management: Storing, Aligning and Maintaining Ontologies (pages 47-69) / Michel Klein, Ying Ding, Dieter Fensel and Borys Omelayenko Sesame: A Generic Architecture for Storing and Querying RDF and RDF Schema (pages 71-89) / Jeen Broekstra, Arjohn Kampman and Frank van Harmelen Generating Ontologies for the Semantic Web: OntoBuilder (pages 91-115) / R. H. P. Engels and T. Ch. Lech OntoEdit: Collaborative Engineering of Ontologies (pages 117-132) / York Sure, Michael Erdmann and Rudi Studer QuizRDF: Search Technology for the Semantic Web (pages 133-144) / John Davies, Richard Weeks and Uwe Krohn Spectacle (pages 145-159) / Christiaan Fluit, Herko ter Horst, Jos van der Meer, Marta Sabou and Peter Mika OntoShare: Evolving Ontologies in a Knowledge Sharing System (pages 161-177) / John Davies, Alistair Duke and Audrius Stonkus Ontology Middleware and Reasoning (pages 179-196) / Atanas Kiryakov, Kiril Simov and Damyan Ognyanov Ontology-Based Knowledge Management at Work: The Swiss Life Case Studies (pages 197-218) / Ulrich Reimer, Peter Brockhausen, Thorsten Lau and Jacqueline R. Reich Field Experimenting with Semantic Web Tools in a Virtual Organization (pages 219-244) / Victor Iosif, Peter Mika, Rikard Larsson and Hans Akkermans A Future Perspective: Exploiting Peer-To-Peer and the Semantic Web for Knowledge Management (pages 245-264) / Dieter Fensel, Steffen Staab, Rudi Studer, Frank van Harmelen and John Davies Conclusions: Ontology-driven Knowledge Management - Towards the Semantic Web? (pages 265-266) / John Davies, Dieter Fensel and Frank van Harmelen
  17. Computational information retrieval (2001)
    Abstract
    This volume contains selected papers on the use of linear algebra, computational statistics, and computer science in the development of algorithms and software systems for text retrieval. Experts in information modeling and retrieval share their perspectives on the design of scalable yet precise text retrieval systems, revealing many of the challenges and obstacles that mathematical and statistical models must overcome to be viable for automated text processing. The proceedings are an excellent companion for courses in information retrieval, applied linear algebra, and applied statistics. Computational Information Retrieval provides background material on vector space models for text retrieval that applied mathematicians, statisticians, and computer scientists may not be familiar with. For graduate students in these areas, several open research questions in information modeling are raised. In addition, several case studies examine the efficacy of the popular Latent Semantic Analysis (or Indexing) approach.
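    The vector space model and Latent Semantic Analysis surveyed in this volume can be sketched in a few lines of NumPy: represent the collection as a term-document matrix, truncate its SVD to a low rank, fold a query into the latent space, and rank documents by cosine similarity. The term-document counts and the query below are invented purely for illustration.

    ```python
    import numpy as np

    # Toy term-document matrix (rows = terms, columns = documents).
    # All counts are hypothetical, for illustration only.
    terms = ["matrix", "algebra", "retrieval", "query", "text"]
    A = np.array([
        [2, 0, 1, 0],   # matrix
        [1, 0, 0, 0],   # algebra
        [0, 2, 1, 1],   # retrieval
        [0, 1, 0, 2],   # query
        [0, 1, 2, 1],   # text
    ], dtype=float)

    # Latent Semantic Analysis: rank-k truncated SVD of A.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = 2
    Uk, sk, Vtk = U[:, :k], s[:k], Vt[:k, :]

    # Fold the query vector into the latent space: q_hat = S_k^-1 U_k^T q.
    q = np.array([0.0, 0.0, 1.0, 1.0, 0.0])   # query terms: "retrieval", "query"
    q_hat = np.diag(1.0 / sk) @ Uk.T @ q

    # Document coordinates in the latent space are the columns of Vtk;
    # rank documents by cosine similarity to the folded-in query.
    cos = (Vtk.T @ q_hat) / (
        np.linalg.norm(Vtk, axis=0) * np.linalg.norm(q_hat) + 1e-12)
    ranking = np.argsort(-cos)
    ```

    Because the rank-k space merges co-occurring terms, a document can match a query even when it shares no literal term with it; that conflation is exactly what the case studies on LSA's efficacy evaluate.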
  18. Dynamism and stability in knowledge organization : Proceedings of the 6th International ISKO-Conference, 10-13 July 2000, Toronto, Canada (2000)
    
    Content
    BEAN, C.A.: Mapping down: semantic and structural relationships in user-designated broader-narrow term pairs. DE MOYA-ANEGÓN, F., M.J. LÓPEZ-HUERTAS: An automatic model for updating the conceptual structure of a scientific discipline. BARTOLO, L.M., A.M. TRIMBLE: Heterogeneous structures project database: vocabulary mapping within a multidisciplinary, multiinstitutional research group. FRÂNCU, V.: Harmonizing a universal classification system with an interdisciplinary multilingual thesaurus: advantages and limitations. PRISS, U.: Comparing classification systems using facets. WILLIAMSON, N.J.: Thesauri in the digital age: stability and dynamism in their development and use. SIGEL, A.: How can user-oriented depth analysis be constructively guided?. SAGGION, H., G. LAPALME: Selective analysis for the automatic generation of summaries. POLLITT, A.S., A.J. TINKER: Enhanced view-based searching through the decomposition of Dewey Decimal Classification codes. RADEMAKER, C.A.: The classification of ornamental designs in the United States Patent Classification System. HUBER, J.T., M.L. GILLASPY: An examination of the discourse of homosexuality as reflected in medical vocabularies, classificatory structures, and information resources. HE, Q.: A study of the strength indexes in co-word analysis. GREEN, R.: Automated identification of frame semantic relational structures. MCILWAINE, I.C.: Interdisciplinarity: a new retrieval problem?. DAVENPORT, E., H. ROSENBAUM: A system for organizing situational knowledge in the workplace that is based on the shape of documents. HOWARTH, L.C.: Designing a "Human Understandable" metalevel ontology for enhancing resource discovery in knowledge bases. IHADJADENE, M., R. BOUCHÉ and R. ZÂAFRANI: The dynamic nature of searching and browsing on Web-OPACs: the CATHIE experience. DING, Y., G. CHOWDHURY and S. FOO: Organising keywords in a Web search environment: a methodology based on co-word analysis.
HUDON, M.: Innovation and tradition in knowledge organization schemes on the Internet, or, Finding one's way in the virtual library. CLARKE, S.G.D.: Thesauri, topics and other structures in knowledge management software. DEVADASON, F.J., P. PATAMAWONGJARIYA: FAHOO: faceted alphabetico-hierarchically organized objects systems. KWASNIK, B.H., X. LIU: Classification structures in the changing environment of active commercial websites: the case of eBay.com.
    ARDÖ, A., J. GODBY, A. HOUGHTON et al.: Browsing engineering resources on the Web: a general knowledge organization scheme (Dewey) vs. a special scheme (EI). DRON, J., C. BOYNE, R. MITCHELL et al.: Darwin among the indices: a report on COFIND, a self-organising resource base. VAN DER WALT, M.: South African search engines, directories and portals: a survey and evaluation. GARCIA, L.S., S.M.M. OLIVEIRA and G.M.S. LUZ: Knowledge organization for query elaboration and support for technical response by the Internet. OHLY, H.P.: Information and organizational knowledge faced with contemporary knowledge theories: unveiling the strength of the myth. POLANCO, X., C. FRANCOIS: Data clustering and cluster mapping or visualization in text processing and mining. BOWKER, L.: A corpus-based investigation of variation in the organization of medical terms. CRAIG, B.L.: Rethinking official knowing and its practices: the British Treasury's Registry between the Two World Wars. BUCKLAND, M.K., A. CHEN, M. GEBBIE et al.: Variation by subdomain in indexes to knowledge organization systems. HUDON, M., J.M. TURNER and Y. DEVIN: How many terms are enough?: stability and dynamism in vocabulary management for moving image collections. ARSENAULT, C.: Testing the impact of syllable aggregation in romanized fields of Chinese language bibliographic records. HE, S.: Conceptual equivalence and representational difference in terminology translation of English computer terms in simplified Chinese and traditional Chinese. SMIRAGLIA, R.P.: Works as signs and canons: towards an epistemology of the work. CARLYLE, A., J. SUMMERLIN: Transforming catalog displays: records clustering for works of fiction. HILDRETH, C.R.: Are Web-based OPACs more effective retrieval systems than their conventional predecessors?: an experimental study. RIESTHUIS, G.J.A.: Multilingual subject access and the Guidelines for the establishment and development of multilingual thesauri: an experimental study.
  19. Saving the time of the library user through subject access innovation : Papers in honor of Pauline Atherton Cochrane (2000)
    
    Date
    22. 9.1997 19:16:05
    Footnote
    Rez. in: KO 28(2001) no.2, S.97-100 (S. Bertrand-Gastaldy); Information processing and management 37(2001) no.5, S.766-767 (H. Borko); JASIST 23(2002) no.1, S.58-60 (A.T.D. Petrou); Library and information science research 23(2001) S.200-202 (D.J. Karpuk)
  20. British librarianship and information work : 2001-2005 (2007)
    
    Date
    31.12.2008 17:23:54
