Search (196 results, page 1 of 10)

  • Active filter: type_ss:"r"
  1. Bredemeier, W.; Stock, M.; Stock, W.G.: ¬Die Branche elektronischer Geschäftsinformationen in Deutschland 2000/2001 (2001) 0.03
    0.030994385 = product of:
      0.10848034 = sum of:
        0.032137483 = weight(_text_:wide in 621) [ClassicSimilarity], result of:
          0.032137483 = score(doc=621,freq=2.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.24476713 = fieldWeight in 621, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=621)
        0.017435152 = weight(_text_:web in 621) [ClassicSimilarity], result of:
          0.017435152 = score(doc=621,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.18028519 = fieldWeight in 621, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=621)
        0.05177324 = weight(_text_:elektronische in 621) [ClassicSimilarity], result of:
          0.05177324 = score(doc=621,freq=4.0), product of:
            0.14013545 = queryWeight, product of:
              4.728978 = idf(docFreq=1061, maxDocs=44218)
              0.029633347 = queryNorm
            0.3694514 = fieldWeight in 621, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.728978 = idf(docFreq=1061, maxDocs=44218)
              0.0390625 = fieldNorm(doc=621)
        0.0071344664 = weight(_text_:information in 621) [ClassicSimilarity], result of:
          0.0071344664 = score(doc=621,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.13714671 = fieldWeight in 621, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=621)
      0.2857143 = coord(4/14)
    
    Content
     Der deutsche Markt für Elektronische Informationsdienste im Jahre 2000 - Ergebnisse einer Umsatzerhebung (by Willi Bredemeier): - validated methodology that takes account of the specifics of the EIS market and of current developments - partial comparability of the data back to 1989 - extensive quantitative market transparency, since the reader can fully trace how the market and sub-market aggregates are built up from company-level data - 93 tables, some of them detailed, mostly on individual information providers with particular attention to the financial years 2000 and 1999, divided into the segments overall market for electronic information services, Datev, real-time financial information, news agencies, credit information, company and product information, further business information, legal information, scientific-technical-medical information - intellectual property, consumer services, neighbouring markets - analysis of current market trends. Qualität professioneller Firmeninformationen im World Wide Web (by Mechtild Stock and Wolfgang G. Stock): - continuation of the quality debate and development of a system of quality criteria for information offerings, applied to company information on the Internet - a "quality panel" for the areas of credit-rating information, short company dossiers, product information and address information, covering the providers Bürgel, Creditreform, Dun & Bradstreet Deutschland, ABC online, ALLECO, Hoppenstedt Firmendatenbank, Who is Who in Multimedia, Kompass Deutschland, Sachon Industriedaten, Wer liefert was?, AZ Bertelsmann, Schober.com - highly differentiated tests that help customers choose between offerings and give the providers pointers towards quality improvements - detailed information on the industry and product classification systems in use - rankings of the company information providers overall as well as by database, retrieval system and website, with detailed information on all quality dimensions
    Imprint
    Hattingen : Institute for Information Economics
    Theme
    Information Resources Management
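   The indented blocks under each result are Lucene "explain" trees for the classic TF-IDF similarity: every matching term contributes queryWeight x fieldWeight, the contributions are summed, and the sum is scaled by the coord factor (matched clauses / total clauses). As a rough illustration that is not part of the result page itself, the following Python sketch reproduces the arithmetic of the first explain tree; the helper names are made up, only the formulas mirror Lucene's ClassicSimilarity.

      import math

      def idf(doc_freq, max_docs):
          """ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))."""
          return 1.0 + math.log(max_docs / (doc_freq + 1))

      def term_weight(freq, idf_value, query_norm, field_norm):
          """One weight(_text_:term) node: queryWeight * fieldWeight."""
          query_weight = idf_value * query_norm                     # idf * queryNorm
          field_weight = math.sqrt(freq) * idf_value * field_norm   # tf * idf * fieldNorm
          return query_weight * field_weight

      # Values copied from the explain tree of result 1 (doc 621).
      QUERY_NORM, FIELD_NORM = 0.029633347, 0.0390625
      terms = {
          "wide": (2.0, 4.4307585),
          "web": (2.0, 3.2635105),
          "elektronische": (4.0, 4.728978),
          "information": (4.0, 1.7554779),
      }
      weights = {t: term_weight(f, i, QUERY_NORM, FIELD_NORM) for t, (f, i) in terms.items()}
      score = sum(weights.values()) * (4 / 14)    # coord(4/14): 4 of 14 query clauses matched
      print(round(score, 9))                      # ~0.030994385, the displayed document score
      print(round(idf(1430, 44218), 7))           # ~4.4307585, the idf shown for "wide"
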
  2. Studer, R.; Studer, H.-P.; Studer, A.: Semantisches Knowledge Retrieval (2001) 0.02
    0.01979462 = product of:
      0.09237489 = sum of:
        0.036238287 = weight(_text_:web in 4322) [ClassicSimilarity], result of:
          0.036238287 = score(doc=4322,freq=6.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.37471575 = fieldWeight in 4322, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4322)
        0.012107591 = weight(_text_:information in 4322) [ClassicSimilarity], result of:
          0.012107591 = score(doc=4322,freq=8.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.23274569 = fieldWeight in 4322, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=4322)
        0.044029012 = weight(_text_:retrieval in 4322) [ClassicSimilarity], result of:
          0.044029012 = score(doc=4322,freq=12.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.49118498 = fieldWeight in 4322, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=4322)
      0.21428572 = coord(3/14)
    
    Abstract
     This white paper deals with the integration of semantic technologies into existing information retrieval approaches and with the far-reaching effects this has on the efficiency and effectiveness of search and navigation in documents. After situating the topic in the problem area of knowledge management from an information technology perspective, an overview of information retrieval methods follows. The semantic technologies "modelling knowledge - ontology" and "deriving new knowledge - inference" are then introduced. An integration approach is subsequently discussed and the resulting added value presented. In particular, this yields extensions towards refined search support and context-aware navigation, as well as the possibility of evaluating rule-based relationships and of easily integrating structured information sources. The white paper closes with an outlook on the future development of the WWW towards a Semantic Web and the implications this has for semantic technologies.
    Content
     Contents: 1. Introduction - 2. Knowledge management - 3. Information retrieval - 3.1. Methods and techniques - 3.2. Information retrieval in practice - 4. Semantic approaches - 4.1. Modelling knowledge - ontology - 4.2. Inferring new knowledge - 5. Knowledge retrieval in practice - 6. Future prospects - 7. Conclusion
    Series
    Ontoprise "Semantics for the Web" - Whitepaper series
    Theme
    Semantic Web
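   The integration the white paper describes - modelling knowledge in an ontology and inferring new knowledge from it - can be illustrated with a deliberately small sketch: a toy subclass hierarchy expands a query term with inferred narrower terms before an ordinary keyword match. The class names and documents are invented for illustration; this is not the system described in the white paper.

      # Toy ontology: "is-a" relations between concept labels (invented).
      SUBCLASS_OF = {
          "ontologie": "wissensmodell",
          "thesaurus": "wissensmodell",
          "wissensmodell": "wissensorganisation",
      }

      def narrower_terms(term):
          """Infer all labels that are (transitively) subclasses of `term`."""
          found = set()
          for child, parent in SUBCLASS_OF.items():
              if parent == term:
                  found |= {child} | narrower_terms(child)
          return found

      def expanded_search(query_term, documents):
          """Plain keyword match, but over the query term plus its inferred narrower terms."""
          terms = {query_term} | narrower_terms(query_term)
          return [doc for doc in documents if any(t in doc.lower() for t in terms)]

      docs = ["Ein Thesaurus für die Suche", "Eine Ontologie zur Navigation", "Statistik ohne Semantik"]
      print(expanded_search("wissensmodell", docs))   # finds the first two documents
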
  3. Loth, K.; Grunewald, F.: Ideen zu einer gemeinsamen Sacherschliessung (1996) 0.02
    0.01946134 = product of:
      0.13622937 = sum of:
        0.08829665 = weight(_text_:bibliothek in 3647) [ClassicSimilarity], result of:
          0.08829665 = score(doc=3647,freq=2.0), product of:
            0.121660605 = queryWeight, product of:
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.029633347 = queryNorm
            0.72576207 = fieldWeight in 3647, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.125 = fieldNorm(doc=3647)
        0.047932718 = weight(_text_:retrieval in 3647) [ClassicSimilarity], result of:
          0.047932718 = score(doc=3647,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.5347345 = fieldWeight in 3647, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.125 = fieldNorm(doc=3647)
      0.14285715 = coord(2/14)
    
    Imprint
    Zürich : ETH-Bibliothek
    Theme
    Klassifikationssysteme im Online-Retrieval
  4. Montasser-Kohsari, G.; Kirstein, P.; Goudal, P.: Online access to multimedia documents : second phase (1995) 0.02
    0.017616795 = product of:
      0.08221171 = sum of:
        0.051252894 = weight(_text_:elektronische in 2428) [ClassicSimilarity], result of:
          0.051252894 = score(doc=2428,freq=2.0), product of:
            0.14013545 = queryWeight, product of:
              4.728978 = idf(docFreq=1061, maxDocs=44218)
              0.029633347 = queryNorm
            0.36573824 = fieldWeight in 2428, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.728978 = idf(docFreq=1061, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2428)
        0.009988253 = weight(_text_:information in 2428) [ClassicSimilarity], result of:
          0.009988253 = score(doc=2428,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.1920054 = fieldWeight in 2428, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2428)
        0.020970564 = weight(_text_:retrieval in 2428) [ClassicSimilarity], result of:
          0.020970564 = score(doc=2428,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.23394634 = fieldWeight in 2428, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2428)
      0.21428572 = coord(3/14)
    
    Abstract
     Final report of a British Library supported project conducted at the computer science department of University College London, the aim of which was to build a demonstration and test bed facility for online access to a large electronic library of multimedia documents. The project was a pilot experiment in the use of a database of compound documents (text and images) in the Open Document Architecture format. The database used comprises part of the information content of the Journal of the American Chemical Society. Gives an overall view of the project, with particular reference to the WAIS information retrieval server that was developed and used.
    Form
    Elektronische Dokumente
  5. McCormick, A.; Sutton, A.: Open learning and the Internet in public libraries (1998) 0.02
    0.017613377 = product of:
      0.06164682 = sum of:
        0.029588435 = weight(_text_:web in 3685) [ClassicSimilarity], result of:
          0.029588435 = score(doc=3685,freq=4.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.3059541 = fieldWeight in 3685, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3685)
        0.0060537956 = weight(_text_:information in 3685) [ClassicSimilarity], result of:
          0.0060537956 = score(doc=3685,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.116372846 = fieldWeight in 3685, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=3685)
        0.01797477 = weight(_text_:retrieval in 3685) [ClassicSimilarity], result of:
          0.01797477 = score(doc=3685,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.20052543 = fieldWeight in 3685, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=3685)
        0.008029819 = product of:
          0.024089456 = sum of:
            0.024089456 = weight(_text_:22 in 3685) [ClassicSimilarity], result of:
              0.024089456 = score(doc=3685,freq=2.0), product of:
                0.103770934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029633347 = queryNorm
                0.23214069 = fieldWeight in 3685, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3685)
          0.33333334 = coord(1/3)
      0.2857143 = coord(4/14)
    
    Abstract
     Presents the findings of the South Ayrshire Libraries Open Learning and the Internet project, Sep 1997 to Oct 1998. The objective was to demonstrate how open learning materials available on the Internet could be integrated with the provision of local open learning resources to provide an enhanced learning environment in public libraries. The main areas of concentration within the project were information skills support for public library users and the provision of WWW-based independent learning materials to learners. The organisation and retrieval of Web-based resources for local use was a major issue throughout the project. Recommends the adoption of Dublin Core metadata standards, the connection of databases of resources with searchable web pages, and the development of thesauri of terms used to index the Web-based resources locally. Staff training and the new skills which will need to be developed were identified as issues, as was the related issue of cost, extending to access to open learning material and the Internet.
    Date
    22. 5.1999 18:55:19
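   The report's recommendation to describe locally indexed Web resources with Dublin Core can be made concrete with a small sketch that emits DC meta tags for one record. The record values and the URL are invented; the element names (title, creator, subject, date, identifier) are standard Dublin Core elements as commonly embedded in HTML meta tags.

      from html import escape

      record = {
          "title": "Introduction to Open Learning",          # invented example record
          "creator": "South Ayrshire Libraries",
          "subject": ["open learning", "public libraries"],
          "date": "1998-10-01",
          "identifier": "http://example.org/openlearning/intro",  # hypothetical URL
      }

      def dublin_core_meta(rec):
          """Render one <meta name="DC.element" content="..."> tag per element value."""
          tags = []
          for element, value in rec.items():
              for v in (value if isinstance(value, list) else [value]):
                  tags.append(f'<meta name="DC.{element}" content="{escape(v, quote=True)}">')
          return "\n".join(tags)

      print(dublin_core_meta(record))
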
  6. Horch, A.; Kett, H.; Weisbecker, A.: Semantische Suchsysteme für das Internet : Architekturen und Komponenten semantischer Suchmaschinen (2013) 0.02
    0.016811565 = product of:
      0.078453965 = sum of:
        0.034870304 = weight(_text_:web in 4063) [ClassicSimilarity], result of:
          0.034870304 = score(doc=4063,freq=8.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.36057037 = fieldWeight in 4063, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4063)
        0.010089659 = weight(_text_:information in 4063) [ClassicSimilarity], result of:
          0.010089659 = score(doc=4063,freq=8.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.19395474 = fieldWeight in 4063, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4063)
        0.033494003 = weight(_text_:retrieval in 4063) [ClassicSimilarity], result of:
          0.033494003 = score(doc=4063,freq=10.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.37365708 = fieldWeight in 4063, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4063)
      0.21428572 = coord(3/14)
    
    Abstract
     The flood of information keeps growing exponentially. In this "information explosion", an unmanageable amount of new information is created on the web every day: for example, 430 German-language Wikipedia articles, 2.4 million tweets on Twitter and 12.2 million comments on Facebook. While a few years ago Google was used in Germany as virtually the only search engine for accessing information on the web, opinions published in social media and elsewhere, and thus the pre-selection and assessment of information by individual experts and opinion leaders, are now gaining in importance. But how can topic-specific information be identified efficiently for concrete questions and be prepared and visualised according to need? This study gives an overview of semantic standards and formats, the processes of semantic search, methods and techniques of semantic search systems, components for building semantic search engines, and the architecture of existing applications. The study explains the basic architecture of semantic search systems and presents methods of semantic search. It also introduces software tools with which individual functions of semantic search engines can be implemented. Finally, existing semantic search engines are examined to illustrate how the systems differ in architecture and functionality.
    RSWK
    Suchmaschine / Semantic Web / Information Retrieval
    Suchmaschine / Information Retrieval / Ranking / Datenstruktur / Kontextbezogenes System
    Subject
    Suchmaschine / Semantic Web / Information Retrieval
    Suchmaschine / Information Retrieval / Ranking / Datenstruktur / Kontextbezogenes System
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
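   The component chain the study describes - annotation, semantic indexing and query processing - can be illustrated with a minimal pipeline. The concept vocabulary and documents are invented; real systems would use ontologies, entity recognition and RDF stores instead of a hand-made dictionary.

      # Annotation component: map surface tokens to concepts from a tiny invented vocabulary.
      CONCEPTS = {"berlin": "City", "hamburg": "City", "siemens": "Company"}

      def annotate(text):
          return {CONCEPTS[tok] for tok in text.lower().split() if tok in CONCEPTS}

      def build_index(documents):
          """Indexing component: concept -> set of document ids."""
          index = {}
          for doc_id, text in documents.items():
              for concept in annotate(text):
                  index.setdefault(concept, set()).add(doc_id)
          return index

      def query(index, concept):
          """Query component: retrieve by concept rather than by literal string."""
          return sorted(index.get(concept, set()))

      docs = {1: "Neues aus Berlin", 2: "Siemens eröffnet ein Werk", 3: "Hamburg im Regen"}
      index = build_index(docs)
      print(query(index, "City"), query(index, "Company"))   # [1, 3] [2]
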
  7. Nicholson, D.: Cataloguing the Internet : CATRIONA feasibility study (1995) 0.02
    0.016695552 = product of:
      0.07791257 = sum of:
        0.043931052 = weight(_text_:elektronische in 6296) [ClassicSimilarity], result of:
          0.043931052 = score(doc=6296,freq=2.0), product of:
            0.14013545 = queryWeight, product of:
              4.728978 = idf(docFreq=1061, maxDocs=44218)
              0.029633347 = queryNorm
            0.3134899 = fieldWeight in 6296, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.728978 = idf(docFreq=1061, maxDocs=44218)
              0.046875 = fieldNorm(doc=6296)
        0.00856136 = weight(_text_:information in 6296) [ClassicSimilarity], result of:
          0.00856136 = score(doc=6296,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.16457605 = fieldWeight in 6296, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=6296)
        0.025420163 = weight(_text_:retrieval in 6296) [ClassicSimilarity], result of:
          0.025420163 = score(doc=6296,freq=4.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.2835858 = fieldWeight in 6296, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=6296)
      0.21428572 = coord(3/14)
    
    Abstract
     The aim of the CATRIONA (Cataloguing and Retrieval of Information over Networks Applications) feasibility study was to investigate the technical, organizational and financial requirements for the development of applications software and procedures to enable the cataloguing, classification and retrieval of documents and other resources over networks such as the Internet. The CATRIONA feasibility study demonstrated that the idea of a distributed catalogue of Internet resources integrated with standard Z39.50 library system OPAC interfaces is already a practical proposition at its most basic level. Proposes that the next step should be a distributed CATRIONA demonstrator project, based on the Scottish University and Research Libraries (SCURL) group of libraries cooperating to catalogue local electronic resources and selected areas of BUBL Subject Trees, but also sufficiently 'open' to encompass other sites, projects and approaches.
    Form
    Elektronische Dokumente
    Series
    British Library library and information research report; 105
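   The distributed-catalogue idea can be sketched in a few lines. Z39.50 itself is a binary protocol, so the sketch below uses SRU, its later web-based successor, instead; the endpoint URLs are hypothetical and the parameter names are the standard SRU 1.1 ones.

      from urllib.parse import urlencode
      from urllib.request import urlopen

      ENDPOINTS = [
          "http://opac.example.ac.uk/sru",    # hypothetical catalogue 1
          "http://library.example.org/sru",   # hypothetical catalogue 2
      ]

      def sru_url(base, cql_query, max_records=10):
          """Build a standard SRU searchRetrieve URL for a CQL query."""
          params = {
              "version": "1.1",
              "operation": "searchRetrieve",
              "query": cql_query,
              "maximumRecords": max_records,
          }
          return f"{base}?{urlencode(params)}"

      def distributed_search(cql_query):
          """Send the same CQL query to every catalogue and collect the raw XML responses."""
          return {base: urlopen(sru_url(base, cql_query), timeout=10).read() for base in ENDPOINTS}

      print(sru_url(ENDPOINTS[0], 'dc.title = "electronic resources"'))   # no network needed
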
  8. Huemer, H.: Semantische Technologien : Analyse zum Status quo, Potentiale und Ziele im Bibliotheks-, Informations- und Dokumentationswesen (2006) 0.02
    0.016380724 = product of:
      0.076443374 = sum of:
        0.03376303 = weight(_text_:web in 641) [ClassicSimilarity], result of:
          0.03376303 = score(doc=641,freq=30.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.34912077 = fieldWeight in 641, product of:
              5.477226 = tf(freq=30.0), with freq of:
                30.0 = termFreq=30.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.01953125 = fieldNorm(doc=641)
        0.036501717 = weight(_text_:bibliothek in 641) [ClassicSimilarity], result of:
          0.036501717 = score(doc=641,freq=14.0), product of:
            0.121660605 = queryWeight, product of:
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.029633347 = queryNorm
            0.30002907 = fieldWeight in 641, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.01953125 = fieldNorm(doc=641)
        0.006178629 = weight(_text_:information in 641) [ClassicSimilarity], result of:
          0.006178629 = score(doc=641,freq=12.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.11877254 = fieldWeight in 641, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.01953125 = fieldNorm(doc=641)
      0.21428572 = coord(3/14)
    
    Abstract
     This volume is the first in the "Branchenreports" series of the Semantic Web School. The series, developed in cooperation with industry experts, aims to analyse at regular intervals the significance of semantic technologies in selected industries and communities. Its first purpose is to offer interested readers an overview and entry points: the reports help readers orient themselves in an emergent environment and show the development paths along which industries are moving as they increasingly adopt semantic information technologies. This report deals with the library, information and documentation (BID) sector, and it is no coincidence that this sector is examined first, for this is where the roots of professional knowledge organization lie. Now, in the age of digitisation and the Internet, this community faces major new challenges and opportunities. Particularly in the context of the Semantic Web it becomes clear that the experience of the BID sector can make an important contribution if the development of the next generation of the Internet is not to be shaped by technology alone. This volume seeks to arouse the curiosity of all those who do not close their minds to new technologies, and to draw attention to the fact that the ways of organizing information and knowledge in the 21st century will be entirely new ones.
    Content
     Table of contents: 1. Introduction 2. Library policy 3. Definitions of terms 3.1. Library - 3.2. Archive - 3.3. Museum - 3.4. Information and documentation - 3.5. Information - 3.6. Semantics and semantic technologies - 3.7. Ontology - 3.8. Recall and precision 4. Libraries from a statistical perspective - key figures 5. Bibliographic tools 5.1. Exchange formats 5.1.1. MAB / MAB2 - 5.1.2. Allegro-C - 5.1.3. MARC 2 - 5.1.4. Z39.50 - 5.1.5. Further formats 5.2. Catalogues / OPACs 5.2.1. Aleph 500 - 5.2.2. Allegro-C - 5.2.3. WorldCat beta 5.3. Documentation systems 5.4. Search engines 5.4.1. Convera and ProTerm - 5.4.2. APA Online Manager - 5.4.3. Google Scholar - 5.4.4. Scirus - 5.4.5. OAIster - 5.4.6. GRACE 5.5. Information portals 5.5.1. iPort - 5.5.2. MetaLib - 5.5.3. Vascoda - 5.5.4. Dandelon - 5.5.5. BAM-Portal - 5.5.6. Prometheus 6. Semantic enrichment 6.1. Indexing - 6.2. Classification - 6.3. Thesauri - 6.4. Social tagging 7. Projects 7.1. Bibster - 7.2. Open Archives Initiative OAI - 7.3. Renardus - 7.4. Perseus Digital Library - 7.5. JeromeDL - eLibrary with Semantics 8. Semantic technologies in Austrian BAM institutions 8.1. Union catalogue of the Austrian Library Network - 8.2. Bibliotheken Online - WebOPAC of the public libraries - 8.3. Survey design - 8.4. Evaluation 9. Conclusion and outlook 10. Bibliography 11. Web links 12. Appendix. Cf.: http://www.semantic-web.at/file_upload/1_tmpphp154oO0.pdf.
    Footnote
     Rez. in: Mitt VÖB 60(2007)H.3, S.80-81 (J. Bertram): "As the title of the publication indicates, the author sets out to take stock of the use of semantic technologies in the BID sector (library - information - documentation) and the BAM sector (library - archive - museum). It is somewhat disconcerting that one of the three forewords advertises a relevant software product and was written by an employee of that company. A sketch of the current state of national and European library policy is followed by brief definitions of the sectors involved, of semantic technologies, and of precision and recall. The remarks on semantic technologies could well have been placed right at the beginning, since they are after all supposed to be the main topic of the publication; they could also have been more concrete, more sharply delineated and more detailed. The author rightly criticises the lack of a uniform understanding of what exactly is meant by semantic technologies, but his own definition leaves questions open. For example, it is not clarified what distinguishes these technologies from the semantic tools that are also mentioned repeatedly. The following chapter on bibliographic tools combines a list of concrete examples of exchange formats, documentation systems, search engines, information portals and OPACs. The author then presents methods for the semantic enrichment of (bibliographic) data and introduces projects in the library field. The enumerative character of this and the preceding chapter may serve a quick overview of the subjects in question, but these passages are less suited to systematic reading. Nor is the connection to semantic technologies consistently established.
     Only then - unfortunately on just eight pages - does the work reach its thematic core. The question of whether, to what extent and which semantic technologies are used in the BID/BAM sector was to be pursued with a written survey of relevant institutions. Owing to the low response rate, however, this goal could be achieved only to a very limited degree: in the first attempt, six people out of a total of 65 institutions contacted replied; a second attempt with a considerably slimmed-down questionnaire added another five responses. A mixture of methodological and substantive factors was probably decisive for the weak resonance: conducting a written survey with predominantly open questions is a risky undertaking in itself, and if those questions are as complex as they are demanding, an unsatisfactory return is no surprise. Not least, the author's conjecture from his foreword may have proved true here and at the same time become his undoing, namely that "the term 'semantics' is not yet familiar to many librarians and documentalists" - but how, then, are they supposed to answer questions about it? The following question may serve as an example: "What expectations, perspectives, prognoses, potentials and paradigms do you personally associate with the topic of 'semantic technologies'?" In the end, the value of the study certainly lies above all in confirming a basic assumption about the status quo in the sector in question: that semantic technologies still play a minor role there today and will play a much larger one in the future. Overall, one gets the impression that what should only have been the frame has become the main subject; in places the publication (including the appendix) seems somewhat mosaic-like, and what is actually of interest gets short shrift. Nevertheless, its merit is to offer an approach to a topic that is not yet well known in the institutions concerned, and in this way it may help to anchor semantic technologies more firmly in the awareness of the actors involved. The work reviewed here is the first volume of a publication series by the Semantic Web School on the use of semantic technologies in different industries; it is to be hoped that the subsequent volumes can draw on empirical studies with a greater response."
    Imprint
    Wien : Eigenverlag der Semantic Web School
    RSWK
    Information und Dokumentation / Semantic Web (GBV)
    Bibliothek / Semantic Web (GBV)
    Bibliothek / Automation / Semantic Web (GBV)
    Semantic Web (SWB)
    Series
    Reihe Branchenreport der Semantic Web School
    Subject
    Information und Dokumentation / Semantic Web (GBV)
    Bibliothek / Semantic Web (GBV)
    Bibliothek / Automation / Semantic Web (GBV)
    Semantic Web (SWB)
    Theme
    Semantic Web
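   Chapter 3.8 of the report introduces recall and precision. As a reminder of the two measures (a generic worked example, not taken from the book), the following sketch computes both for an invented result set.

      def precision_recall(retrieved, relevant):
          """Precision = hits / retrieved documents; recall = hits / relevant documents."""
          retrieved, relevant = set(retrieved), set(relevant)
          hits = len(retrieved & relevant)
          return hits / len(retrieved), hits / len(relevant)

      # Invented example: a search returns 4 documents, 3 of which are among the 6 relevant ones.
      print(precision_recall({"d1", "d2", "d3", "d9"}, {"d1", "d2", "d3", "d4", "d5", "d6"}))
      # (0.75, 0.5)
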
  9. Buchbinder, R.; Weidemüller, H.U.; Tiedemann, E.: Biblio-Data, die nationalbibliographische Datenbank der Deutschen Bibliothek (1979) 0.02
    0.015861485 = product of:
      0.07402027 = sum of:
        0.047791965 = weight(_text_:bibliothek in 4) [ClassicSimilarity], result of:
          0.047791965 = score(doc=4,freq=6.0), product of:
            0.121660605 = queryWeight, product of:
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.029633347 = queryNorm
            0.39283025 = fieldWeight in 4, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4)
        0.0050448296 = weight(_text_:information in 4) [ClassicSimilarity], result of:
          0.0050448296 = score(doc=4,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.09697737 = fieldWeight in 4, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4)
        0.021183468 = weight(_text_:retrieval in 4) [ClassicSimilarity], result of:
          0.021183468 = score(doc=4,freq=4.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.23632148 = fieldWeight in 4, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4)
      0.21428572 = coord(3/14)
    
    Abstract
     Part A introduces the German national bibliographic database Biblio-Data and its foundations. Biblio-Data is based on IBM's information retrieval system STAIRS, enhanced by additional programs from the ZMD. The main emphasis of this contribution is on the Biblio-Data concept, according to which the data of the Deutsche Bibliographie are prepared and entered in a retrieval-ready form. The peculiarities, problems and shortcomings that arise from the fact that Biblio-Data builds on national bibliographic data are discussed in detail. Two further contributions use a number of examples to show the varied search options Biblio-Data offers, making clear that it allows not only much faster but also better searches. Part B demonstrates that Biblio-Data can automate neither content analysis nor the assignment of subject headings. At its current stage of deployment, Biblio-Data's task is to support subject indexing, which continues to be carried out conventionally and intellectually, through retrospective searches, in particular through much easier access to earlier indexing results. Part C describes practical work with Biblio-Data in bibliographic inquiries and the compilation of literature lists and concludes that effective bibliographic work consists in a sensible combination of database retrieval and conventional searching.
    Content
     Contains the contributions: Buchbinder, R.: Grundlagen von Biblio-Data (pp.11-68); Weidemüller, H.U.: Biblio-Data in der Sacherschließung der Deutschen Bibliothek (pp.69-105); Tiedemann, E.: Biblio-Data in der bibliographischen Auskunft der Deutschen Bibliothek (pp.107-123)
  10. Matthews, J.R.; Parker, M.R.: Local Area Networks and Wide Area Networks for libraries (1995) 0.02
    0.0155316 = product of:
      0.1087212 = sum of:
        0.08998495 = weight(_text_:wide in 2656) [ClassicSimilarity], result of:
          0.08998495 = score(doc=2656,freq=2.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.685348 = fieldWeight in 2656, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.109375 = fieldNorm(doc=2656)
        0.018736245 = product of:
          0.056208733 = sum of:
            0.056208733 = weight(_text_:22 in 2656) [ClassicSimilarity], result of:
              0.056208733 = score(doc=2656,freq=2.0), product of:
                0.103770934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029633347 = queryNorm
                0.5416616 = fieldWeight in 2656, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=2656)
          0.33333334 = coord(1/3)
      0.14285715 = coord(2/14)
    
    Date
    30.11.1995 20:53:22
  11. Hodge, G.: Systems of knowledge organization for digital libraries : beyond traditional authority files (2000) 0.01
    0.0145818265 = product of:
      0.06804852 = sum of:
        0.03856498 = weight(_text_:wide in 4723) [ClassicSimilarity], result of:
          0.03856498 = score(doc=4723,freq=2.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.29372054 = fieldWeight in 4723, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=4723)
        0.020922182 = weight(_text_:web in 4723) [ClassicSimilarity], result of:
          0.020922182 = score(doc=4723,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.21634221 = fieldWeight in 4723, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4723)
        0.00856136 = weight(_text_:information in 4723) [ClassicSimilarity], result of:
          0.00856136 = score(doc=4723,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.16457605 = fieldWeight in 4723, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=4723)
      0.21428572 = coord(3/14)
    
    Abstract
     Access to digital materials continues to be an issue of great significance in the development of digital libraries. The proliferation of information in the networked digital environment poses challenges as well as opportunities. The author reports on a wide array of activities in the field. While this publication is not intended to be exhaustive, the reader will find, in a single work, an overview of systems of knowledge organization and pertinent examples of their application to digital materials.
    Content
    (1) Knowledge organization systems: an overview; (2) Linking digital library resources to related resources; (3) Making resources accessible to other communities; (4) Planning and implementing knowledge organization systems in digital libraries; (5) The future of knowledge organization systems on the Web
    Imprint
    Washington, DC : The Digital Library Federation; Council on Library and Information resources
  12. Report on the future of bibliographic control : draft for public comment (2007) 0.01
    0.012628233 = product of:
      0.058931753 = sum of:
        0.033398256 = weight(_text_:wide in 1271) [ClassicSimilarity], result of:
          0.033398256 = score(doc=1271,freq=6.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.2543695 = fieldWeight in 1271, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1271)
        0.018119143 = weight(_text_:web in 1271) [ClassicSimilarity], result of:
          0.018119143 = score(doc=1271,freq=6.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.18735787 = fieldWeight in 1271, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1271)
        0.007414355 = weight(_text_:information in 1271) [ClassicSimilarity], result of:
          0.007414355 = score(doc=1271,freq=12.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.14252704 = fieldWeight in 1271, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1271)
      0.21428572 = coord(3/14)
    
    Abstract
     The future of bibliographic control will be collaborative, decentralized, international in scope, and Web-based. Its realization will occur in cooperation with the private sector, and with the active collaboration of library users. Data will be gathered from multiple sources; change will happen quickly; and bibliographic control will be dynamic, not static. The underlying technology that makes this future possible and necessary - the World Wide Web - is now almost two decades old. Libraries must continue the transition to this future without delay in order to retain their relevance as information providers. The Working Group on the Future of Bibliographic Control encourages the library community to take a thoughtful and coordinated approach to effecting significant changes in bibliographic control. Such an approach will call for leadership that is neither unitary nor centralized. Nor will the responsibility to provide such leadership fall solely to the Library of Congress (LC). That said, the Working Group recognizes that LC plays a unique role in the library community of the United States, and the directions that LC takes have great impact on all libraries. We also recognize that there are many other institutions and organizations that have the expertise and the capacity to play significant roles in the bibliographic future. Wherever possible, those institutions must step forward and take responsibility for assisting with navigating the transition and for playing appropriate ongoing roles after that transition is complete. To achieve the goals set out in this document, we must look beyond individual libraries to a system-wide deployment of resources. We must realize efficiencies in order to be able to reallocate resources from certain lower-value components of the bibliographic control ecosystem into other higher-value components of that same ecosystem. The recommendations in this report are directed at a number of parties, indicated either by their common initialism (e.g., "LC" for Library of Congress, "PCC" for Program for Cooperative Cataloging) or by their general category (e.g., "Publishers," "National Libraries"). When the recommendation is addressed to "All," it is intended for the library community as a whole and its close collaborators.
    The Library of Congress must begin by prioritizing the recommendations that are directed in whole or in part at LC. Some define tasks that can be achieved immediately and with moderate effort; others will require analysis and planning that will have to be coordinated broadly and carefully. The Working Group has consciously not associated time frames with any of its recommendations. The recommendations fall into five general areas: 1. Increase the efficiency of bibliographic production for all libraries through increased cooperation and increased sharing of bibliographic records, and by maximizing the use of data produced throughout the entire "supply chain" for information resources. 2. Transfer effort into higher-value activity. In particular, expand the possibilities for knowledge creation by "exposing" rare and unique materials held by libraries that are currently hidden from view and, thus, underused. 3. Position our technology for the future by recognizing that the World Wide Web is both our technology platform and the appropriate platform for the delivery of our standards. Recognize that people are not the only users of the data we produce in the name of bibliographic control, but so too are machine applications that interact with those data in a variety of ways. 4. Position our community for the future by facilitating the incorporation of evaluative and other user-supplied information into our resource descriptions. Work to realize the potential of the FRBR framework for revealing and capitalizing on the various relationships that exist among information resources. 5. Strengthen the library profession through education and the development of metrics that will inform decision-making now and in the future. The Working Group intends what follows to serve as a broad blueprint for the Library of Congress and its colleagues in the library and information technology communities for extending and promoting access to information resources.
  13. Multilingual information management : current levels and future abilities. A report Commissioned by the US National Science Foundation and also delivered to the European Commission's Language Engineering Office and the US Defense Advanced Research Projects Agency, April 1999 (1999) 0.01
    0.011325196 = product of:
      0.052850917 = sum of:
        0.013948122 = weight(_text_:web in 6068) [ClassicSimilarity], result of:
          0.013948122 = score(doc=6068,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.14422815 = fieldWeight in 6068, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=6068)
        0.01210759 = weight(_text_:information in 6068) [ClassicSimilarity], result of:
          0.01210759 = score(doc=6068,freq=18.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.23274568 = fieldWeight in 6068, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=6068)
        0.026795205 = weight(_text_:retrieval in 6068) [ClassicSimilarity], result of:
          0.026795205 = score(doc=6068,freq=10.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.29892567 = fieldWeight in 6068, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=6068)
      0.21428572 = coord(3/14)
    
    Abstract
    Over the past 50 years, a variety of language-related capabilities has been developed in machine translation, information retrieval, speech recognition, text summarization, and so on. These applications rest upon a set of core techniques such as language modeling, information extraction, parsing, generation, and multimedia planning and integration; and they involve methods using statistics, rules, grammars, lexicons, ontologies, training techniques, and so on. It is a puzzling fact that although all of this work deals with language in some form or other, the major applications have each developed a separate research field. For example, there is no reason why speech recognition techniques involving n-grams and hidden Markov models could not have been used in machine translation 15 years earlier than they were, or why some of the lexical and semantic insights from the subarea called Computational Linguistics are still not used in information retrieval.
    This picture will rapidly change. The twin challenges of massive information overload via the web and ubiquitous computers present us with an unavoidable task: developing techniques to handle multilingual and multi-modal information robustly and efficiently, with as high quality performance as possible. The most effective way for us to address such a mammoth task, and to ensure that our various techniques and applications fit together, is to start talking across the artificial research boundaries. Extending the current technologies will require integrating the various capabilities into multi-functional and multi-lingual natural language systems. However, at this time there is no clear vision of how these technologies could or should be assembled into a coherent framework. What would be involved in connecting a speech recognition system to an information retrieval engine, and then using machine translation and summarization software to process the retrieved text? How can traditional parsing and generation be enhanced with statistical techniques? What would be the effect of carefully crafted lexicons on traditional information retrieval? At which points should machine translation be interleaved within information retrieval systems to enable multilingual processing?
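   As a toy illustration of the n-gram language modelling the report keeps returning to, the sketch below estimates smoothed bigram probabilities from a tiny invented corpus; real systems train on millions of words and combine such models with HMM decoders.

      from collections import Counter

      corpus = "the report covers machine translation and information retrieval".split()
      unigrams = Counter(corpus)
      bigrams = Counter(zip(corpus, corpus[1:]))
      vocab_size = len(unigrams)

      def bigram_prob(prev, word):
          """P(word | prev) with add-one (Laplace) smoothing."""
          return (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab_size)

      print(bigram_prob("machine", "translation"))   # seen bigram -> 2/9
      print(bigram_prob("machine", "retrieval"))     # unseen bigram -> 1/9
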
  14. Internetzugang in Öffentlichen Bibliotheken : Strukturierungsbedarf und -möglichkeiten beim Online-Zugang zu Information und Wissen: BINE (Bibliothek + Internet = Navigation + Erschließung) (1999) 0.01
    0.011190011 = product of:
      0.07833008 = sum of:
        0.06622249 = weight(_text_:bibliothek in 4032) [ClassicSimilarity], result of:
          0.06622249 = score(doc=4032,freq=2.0), product of:
            0.121660605 = queryWeight, product of:
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.029633347 = queryNorm
            0.54432154 = fieldWeight in 4032, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.09375 = fieldNorm(doc=4032)
        0.012107591 = weight(_text_:information in 4032) [ClassicSimilarity], result of:
          0.012107591 = score(doc=4032,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.23274569 = fieldWeight in 4032, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=4032)
      0.14285715 = coord(2/14)
    
  15. Pfaff, S.: ¬Die Entwicklung eines Hypertextdokumentes als Informationsdienstleistung der Bibliothek und Dokumentation des Deutschen Elektronen-Synchrotron DESY im Internet (1994) 0.01
    0.010550044 = product of:
      0.073850304 = sum of:
        0.06243516 = weight(_text_:bibliothek in 5628) [ClassicSimilarity], result of:
          0.06243516 = score(doc=5628,freq=4.0), product of:
            0.121660605 = queryWeight, product of:
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.029633347 = queryNorm
            0.5131913 = fieldWeight in 5628, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.0625 = fieldNorm(doc=5628)
        0.011415146 = weight(_text_:information in 5628) [ClassicSimilarity], result of:
          0.011415146 = score(doc=5628,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.21943474 = fieldWeight in 5628, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=5628)
      0.14285715 = coord(2/14)
    
    Abstract
     The Internet, and in particular the Internet tool WWW, has a growing influence in the scientific domain. Libraries, too, have begun to extend their services into this computer network. For the library and documentation department of DESY, a hypertext information document was created and made available via the WWW.
    Footnote
     [Final thesis in the Information Management teaching area of the Institut für Information und Dokumentation at the FH Potsdam]
  16. Sykes, J.: ¬The value of indexing : a white paper prepared for Factiva, a Dow Jones and Reuters Company (2001) 0.01
    0.009790596 = product of:
      0.045689445 = sum of:
        0.019725623 = weight(_text_:web in 720) [ClassicSimilarity], result of:
          0.019725623 = score(doc=720,freq=4.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.2039694 = fieldWeight in 720, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=720)
        0.013980643 = weight(_text_:information in 720) [ClassicSimilarity], result of:
          0.013980643 = score(doc=720,freq=24.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.2687516 = fieldWeight in 720, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=720)
        0.0119831795 = weight(_text_:retrieval in 720) [ClassicSimilarity], result of:
          0.0119831795 = score(doc=720,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.13368362 = fieldWeight in 720, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=720)
      0.21428572 = coord(3/14)
    
    Abstract
    Finding particular documents after they have been reviewed and stored has been a challenge since the advent of the printed word. "Findability" is emphatically more important as we deal with information overload in general and with the specific need to quickly find relevant background information to support business decisions in a networked environment. Because time is arguably the most valuable asset in today's economy, information users value tools that help them (1) quickly find the information they are seeking and (2) manage the quantity and quality of information they manipulate and work with on a regular basis. Although the term "indexing" may lack the cachet of some other terms we use to describe current information organization and management concepts, indexing is fundamental to precise information organization and retrieval, especially when dealing with large sets of documents. Power users find great value in using a known, granular indexing language that can surface the most relevant items and filter out items of peripheral or no interest. Web architects and interface designers can likewise take advantage of indexing labels to present only the information meeting certain requirements for users who do not wish to learn the indexing structure or taxonomy. The user finds what is needed while the indexing language is used behind the scenes and is transparent to the user.
     The importance of indexing in developing a content navigation strategy for corporate intranets or portals and the value of high-quality indexing when retrieving information from external resources are reviewed in this white paper. Some general background information on indexing and the use of controlled vocabularies (or taxonomies) are included for a historical perspective. Factiva Intelligent Indexing - which incorporates the best indexing expertise from both Dow Jones Interactive and Reuters Business Briefing - is described, along with some novel customer applications that take advantage of Factiva's indexing to create or improve information products delivered to users. Examples from the Excite and Google web search engines and from Dow Jones Interactive and Reuters Business Briefing are included in an Appendix section to illustrate how indexing influences the amount and quality of information retrieved in a specific search.
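   The filtering role the white paper assigns to a granular indexing language can be shown with a small sketch: each document carries controlled codes from a hierarchical taxonomy, and a full-text result list is narrowed to one branch by code prefix. The codes and documents are invented and are not Factiva's actual indexing.

      documents = {
          "doc-1": ["C/CAT/MERGERS", "R/EUROPE/DE"],   # invented controlled codes
          "doc-2": ["C/CAT/EARNINGS"],
          "doc-3": ["I/TECH/SOFTWARE"],
      }

      def filter_by_branch(result_ids, branch):
          """Keep only documents indexed somewhere under the given taxonomy branch."""
          return [
              doc_id for doc_id in result_ids
              if any(code == branch or code.startswith(branch + "/")
                     for code in documents[doc_id])
          ]

      full_text_hits = ["doc-1", "doc-2", "doc-3"]        # e.g. everything matching a keyword
      print(filter_by_branch(full_text_hits, "C/CAT"))    # ['doc-1', 'doc-2']
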
  17. Feldsien-Sudhaus, I.; Horst, D.; Katzner, E.; Rajski, B.; Weier, H.; Zeumer, T. (Red.): Koha-Evaluation durch die Universitätsbibliothek der TUHH (2015) 0.01
    0.009171702 = product of:
      0.064201914 = sum of:
        0.03660921 = weight(_text_:elektronische in 2811) [ClassicSimilarity], result of:
          0.03660921 = score(doc=2811,freq=2.0), product of:
            0.14013545 = queryWeight, product of:
              4.728978 = idf(docFreq=1061, maxDocs=44218)
              0.029633347 = queryNorm
            0.2612416 = fieldWeight in 2811, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.728978 = idf(docFreq=1061, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2811)
        0.027592704 = weight(_text_:bibliothek in 2811) [ClassicSimilarity], result of:
          0.027592704 = score(doc=2811,freq=2.0), product of:
            0.121660605 = queryWeight, product of:
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.029633347 = queryNorm
            0.22680065 = fieldWeight in 2811, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2811)
      0.14285715 = coord(2/14)
    
    Abstract
    The university library of the TUHH has relied on software support for circulation and its catalogue since 1996. Initially the integrated library system SISIS was used; in 2000 the library switched to the local library system LBS4 from PICA, today OCLC. Acquisitions were integrated shortly afterwards. Self-checkout has been offered in the library since the late 1990s, and since 2011 with RFID. In 2012 the LBS4 user catalogue was replaced by TUBfind, a discovery system based on vufind. So far there is no software solution for managing electronic resources, apart from the link resolver SFX from Exlibris, which has been in use since 2006. The TUB is a member of the Gemeinsamer Bibliotheksverbund (GBV) and uses its central cataloguing database CBS. The LBS4 has seen hardly any functional enhancements over the last 15 years. Functions for managing electronic resources are missing, as are standard interfaces, e.g. to LDAP directories. In the medium term, the LBS4 will have to be replaced by another library management system (BMS) within the GBV, and hence also at the TUB. The GBV network head office (VZG) is therefore testing Kuali OLE, a cloud- and community-based BMS from the USA that is still under development, together with the FAG Lokale Geschäftsgänge and the HBZ. This report presents our project results, draws comparisons with the LBS4, and records open questions.
  18. Fuhr, N.: Hypertext und Information-Retrieval (1990) 0.01
    0.009153739 = product of:
      0.06407617 = sum of:
        0.016143454 = weight(_text_:information in 4473) [ClassicSimilarity], result of:
          0.016143454 = score(doc=4473,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.3103276 = fieldWeight in 4473, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.125 = fieldNorm(doc=4473)
        0.047932718 = weight(_text_:retrieval in 4473) [ClassicSimilarity], result of:
          0.047932718 = score(doc=4473,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.5347345 = fieldWeight in 4473, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.125 = fieldNorm(doc=4473)
      0.14285715 = coord(2/14)
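    The explain trees in this list all follow Lucene's ClassicSimilarity (tf-idf) scheme: each matching term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = sqrt(termFreq) × idf × fieldNorm, and the document score is the sum of those contributions scaled by the coordination factor. As a check, the short sketch below uses only the values printed in the tree for entry 18 and reproduces its score of 0.009153739 within rounding.

      # Recompute the ClassicSimilarity score for entry 18 (doc=4473) from the
      # values printed in the explain tree above.
      import math

      query_norm = 0.029633347          # queryNorm
      field_norm = 0.125                # fieldNorm(doc=4473)
      terms = {                         # term: (freq in doc, idf)
          "information": (2.0, 1.7554779),
          "retrieval":   (2.0, 3.024915),
      }

      total = 0.0
      for term, (freq, idf) in terms.items():
          query_weight = idf * query_norm                    # e.g. 0.052020688 for "information"
          field_weight = math.sqrt(freq) * idf * field_norm  # e.g. 0.3103276
          total += query_weight * field_weight               # 0.016143454 + 0.047932718

      score = total * (2 / 14)      # coord(2/14): 2 of 14 query clauses match
      print(f"{score:.9f}")         # -> approximately 0.009153739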
    
  19. Belkin, N.J.; Vickery, A.: Interaction in information systems : a review of research from document retrieval to knowledge-based systems (1985) 0.01
    0.008845377 = product of:
      0.061917633 = sum of:
        0.019976506 = weight(_text_:information in 3295) [ClassicSimilarity], result of:
          0.019976506 = score(doc=3295,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.3840108 = fieldWeight in 3295, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.109375 = fieldNorm(doc=3295)
        0.04194113 = weight(_text_:retrieval in 3295) [ClassicSimilarity], result of:
          0.04194113 = score(doc=3295,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.46789268 = fieldWeight in 3295, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.109375 = fieldNorm(doc=3295)
      0.14285715 = coord(2/14)
    
    Series
    Library and information science report; 35
  20. Sykes, J.: Making solid business decisions through intelligent indexing taxonomies : a white paper prepared for Factiva, a Dow Jones and Reuters Company (2003) 0.01
    0.008118262 = product of:
      0.03788522 = sum of:
        0.013948122 = weight(_text_:web in 721) [ClassicSimilarity], result of:
          0.013948122 = score(doc=721,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.14422815 = fieldWeight in 721, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=721)
        0.0069903214 = weight(_text_:information in 721) [ClassicSimilarity], result of:
          0.0069903214 = score(doc=721,freq=6.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.1343758 = fieldWeight in 721, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=721)
        0.016946774 = weight(_text_:retrieval in 721) [ClassicSimilarity], result of:
          0.016946774 = score(doc=721,freq=4.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.18905719 = fieldWeight in 721, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=721)
      0.21428572 = coord(3/14)
    
    Abstract
    In 2000, Factiva published "The Value of Indexing," a white paper emphasizing the strategic importance of accurate categorization, based on a robust taxonomy for later retrieval of documents stored in commercial or in-house content repositories. Since that time, there has been resounding agreement between persons who use Web-based systems and those who design these systems that search engines alone are not the answer for effective information retrieval. High-quality categorization is crucial if users are to be able to find the right answers in repositories of articles and documents that are expanding at phenomenal rates. Companies continue to invest in technologies that will help them organize and integrate their content. A March 2002 article in EContent suggests a typical taxonomy implementation usually costs around $100,000. The article also cites a Merrill Lynch study that predicts the market for search and categorization products, now at about $600 million, will more than double by 2005. Classification activities are not new. In the third century B.C., Callimachus of Cyrene managed the ancient Library of Alexandria. To help scholars find items in the collection, he created an index of all the scrolls organized according to a subject taxonomy. Factiva's parent companies, Dow Jones and Reuters, each have more than 20 years of experience with developing taxonomies and painstaking manual categorization processes and also have a solid history with automated categorization techniques. This experience and expertise put Factiva at the leading edge of developing and applying categorization technology today. This paper will update readers about enhancements made to the Factiva Intelligent Indexing™ taxonomy. It examines the value these enhancements bring to Factiva's news and business information service, and the value brought to clients who license the Factiva taxonomy as a fundamental component of their own Enterprise Knowledge Architecture. There is a behind-the-scenes look at how Factiva classifies a huge stream of incoming articles published in a variety of formats and languages. The paper concludes with an overview of new Factiva services and solutions that are designed specifically to help clients improve productivity and make solid business decisions by precisely finding information in their own ever-expanding content repositories.
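    To make the categorization step concrete, here is a deliberately simplified sketch of rule-based classification of an incoming article against a taxonomy. The category codes and keyword rules are invented for illustration; the pipeline described in the paper combines editorial taxonomy work with automated techniques and is far richer than this toy example.

      # Toy rule-based categorizer: assign taxonomy codes whose keyword rules
      # match the incoming article text (invented codes and rules).
      from typing import List

      TAXONOMY_RULES = {
          "economy/central-banks": ["interest rate", "central bank", "monetary policy"],
          "industry/automotive":   ["carmaker", "electric vehicle", "auto sales"],
          "company/earnings":      ["quarterly profit", "earnings", "revenue"],
      }

      def categorize(text: str) -> List[str]:
          """Return every taxonomy code with at least one matching keyword."""
          lowered = text.lower()
          return [code for code, keywords in TAXONOMY_RULES.items()
                  if any(keyword in lowered for keyword in keywords)]

      article = "The central bank held its key interest rate steady as quarterly profit warnings mounted."
      print(categorize(article))   # -> ['economy/central-banks', 'company/earnings']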
