Search (10 results, page 1 of 1)

  • author_ss:"Eckert, K."
  1. Eckert, K.; Schulz, A.: SABINE: OPAC oder opak? : kein Durchblick beim neuen Online Public Access Catalogue der Universität des Saarlandes (1995) 0.00
    0.0030069877 = product of:
      0.02706289 = sum of:
        0.017079504 = weight(_text_:der in 2824) [ClassicSimilarity], result of:
          0.017079504 = score(doc=2824,freq=4.0), product of:
            0.048934754 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.021906832 = queryNorm
            0.34902605 = fieldWeight in 2824, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.078125 = fieldNorm(doc=2824)
        0.009983385 = product of:
          0.029950155 = sum of:
            0.029950155 = weight(_text_:29 in 2824) [ClassicSimilarity], result of:
              0.029950155 = score(doc=2824,freq=2.0), product of:
                0.077061385 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.021906832 = queryNorm
                0.38865322 = fieldWeight in 2824, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2824)
          0.33333334 = coord(1/3)
      0.11111111 = coord(2/18)
    
    Abstract
    Critical report on the OPAC newly introduced in 1994 at the Saarbrücken University Library (SABINE = SAarbrücker BIbliotheksNEtz)
    Source
    Bibliotheksdienst. 29(1995) H.6, S.979-984
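
The score breakdown attached to each result is Lucene "explain" output for the classic TF-IDF similarity: each matching term contributes tf x idf^2 x queryNorm x fieldNorm, partial sums are scaled by a coordination factor coord(matching clauses / total clauses), and the rounded total is the "0.00" figure shown next to the title. As a minimal sketch (assuming Lucene's ClassicSimilarity formulas; all constants are copied from the explain tree of result 1), the score can be recomputed like this:

    import math

    # Minimal sketch of Lucene ClassicSimilarity (TF-IDF) scoring,
    # recomputing the explain tree of result 1. All constants (freq,
    # docFreq, maxDocs, fieldNorm, queryNorm, coord fractions) are
    # copied from the output above.

    def idf(doc_freq: int, max_docs: int) -> float:
        # idf(t) = 1 + ln(maxDocs / (docFreq + 1))
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    def clause_score(freq, doc_freq, max_docs, query_norm, field_norm):
        tf = math.sqrt(freq)  # tf(t,d) = sqrt(termFreq)
        query_weight = idf(doc_freq, max_docs) * query_norm
        field_weight = tf * idf(doc_freq, max_docs) * field_norm
        return query_weight * field_weight  # = tf * idf^2 * queryNorm * fieldNorm

    QUERY_NORM = 0.021906832
    MAX_DOCS = 44218

    # weight(_text_:der in 2824): freq=4, docFreq=12875, fieldNorm=0.078125
    w_der = clause_score(4.0, 12875, MAX_DOCS, QUERY_NORM, 0.078125)

    # weight(_text_:29 in 2824): freq=2, docFreq=3565, fieldNorm=0.078125,
    # nested in a sub-query where 1 of 3 clauses matched: coord(1/3)
    w_29 = clause_score(2.0, 3565, MAX_DOCS, QUERY_NORM, 0.078125) / 3.0

    # 2 of the 18 top-level query clauses matched: coord(2/18)
    score = (w_der + w_29) * (2.0 / 18.0)
    print(f"{score:.10f}")  # ~0.0030069877, the value printed above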
  2. Eckert, K.; Albers, C.: Neue Dienstleistungsangebote wissenschaftlicher Bibliotheken in Europa (1995) 0.00
    0.0029536048 = product of:
      0.026582442 = sum of:
        0.0076788934 = weight(_text_:in in 3046) [ClassicSimilarity], result of:
          0.0076788934 = score(doc=3046,freq=12.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.2576908 = fieldWeight in 3046, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3046)
        0.018903548 = weight(_text_:der in 3046) [ClassicSimilarity], result of:
          0.018903548 = score(doc=3046,freq=10.0), product of:
            0.048934754 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.021906832 = queryNorm
            0.38630107 = fieldWeight in 3046, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3046)
      0.11111111 = coord(2/18)
    
    Abstract
    The survey that became known as the EC Commission's LIB/2 study is a comprehensive inventory of the use of computer technology in the public and academic libraries of the Community's member states. It was carried out in two stages between 1986 and 1993 and published as country overviews. In spring 1994, the DBI commissioned an evaluation of the LIB/2 study and its update with regard to the computer-based services offered by the academic libraries surveyed. The present publication describes the state of development in Great Britain, Ireland, Denmark, the Netherlands, Germany, Portugal and Spain. Its aim is to identify strengths and weaknesses on the basis of the differences shown by these examples, which in turn can serve as a foundation for deriving recommendations for the further development of academic librarianship in Germany.
  3. Eckert, K.: Linked Open Projects : Nachnutzung von Projektergebnissen als Linked Data (2010) 0.00
    0.0022302691 = product of:
      0.020072423 = sum of:
        0.005429798 = weight(_text_:in in 4278) [ClassicSimilarity], result of:
          0.005429798 = score(doc=4278,freq=6.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.1822149 = fieldWeight in 4278, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4278)
        0.014642626 = weight(_text_:der in 4278) [ClassicSimilarity], result of:
          0.014642626 = score(doc=4278,freq=6.0), product of:
            0.048934754 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.021906832 = queryNorm
            0.29922754 = fieldWeight in 4278, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4278)
      0.11111111 = coord(2/18)
    
    Abstract
    Many research projects - not only in the library sector - revolve around the production of data, frequently with the help of automatic procedures. Reusing this data often proves difficult. This article describes research projects that have been, and are being, carried out at Mannheim University Library. Simple examples show how Linked Data allows the data generated in these projects to be reused easily and flexibly.
    Series
    Tagungen der Deutschen Gesellschaft für Informationswissenschaft und Informationspraxis ; Bd. 14 (DGI-Konferenz ; 1)
    Source
    Semantic web & linked data: Elemente zukünftiger Informationsinfrastrukturen ; 1. DGI-Konferenz ; 62. Jahrestagung der DGI ; Frankfurt am Main, 7. - 9. Oktober 2010 ; Proceedings / Deutsche Gesellschaft für Informationswissenschaft und Informationspraxis. Hrsg.: M. Ockenfeld
  4. Eckert, K.: SKOS: eine Sprache für die Übertragung von Thesauri ins Semantic Web (2011) 0.00
    0.0015689273 = product of:
      0.014120346 = sum of:
        0.0062054833 = weight(_text_:in in 4331) [ClassicSimilarity], result of:
          0.0062054833 = score(doc=4331,freq=6.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.2082456 = fieldWeight in 4331, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=4331)
        0.007914863 = product of:
          0.023744587 = sum of:
            0.023744587 = weight(_text_:22 in 4331) [ClassicSimilarity], result of:
              0.023744587 = score(doc=4331,freq=2.0), product of:
                0.076713994 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.021906832 = queryNorm
                0.30952093 = fieldWeight in 4331, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4331)
          0.33333334 = coord(1/3)
      0.11111111 = coord(2/18)
    
    Abstract
    The Semantic Web - or Linked Data - has the potential to revolutionize the availability of data and knowledge as well as access to them. Knowledge organization systems such as thesauri, which index and structure data by content, can make a major contribution here. Unfortunately, many of these systems are still available only in book form or within special applications. So how can they be put to use for the Semantic Web? The Simple Knowledge Organization System (SKOS) offers a way to "translate" knowledge organization systems into a form that can be cited on the web and linked with other resources.
    Date
    15. 3.2011 19:21:22
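
To make the "translation" concrete: below is a minimal sketch of a thesaurus entry expressed as a SKOS concept, built with the Python rdflib library. The namespace, concept identifiers, and labels are invented for illustration; only the SKOS vocabulary terms (skos:Concept, skos:prefLabel, skos:broader, skos:related) are standard.

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, SKOS

    # Hypothetical namespace; a real thesaurus would publish its own URIs.
    EX = Namespace("http://example.org/thesaurus/")

    g = Graph()
    g.bind("skos", SKOS)

    concept = EX["c123"]
    g.add((concept, RDF.type, SKOS.Concept))
    g.add((concept, SKOS.prefLabel, Literal("Bibliothek", lang="de")))
    g.add((concept, SKOS.prefLabel, Literal("library", lang="en")))
    g.add((concept, SKOS.broader, EX["c100"]))  # broader term (BT)
    g.add((concept, SKOS.related, EX["c456"]))  # related term (RT)

    # Turtle serialization: a form that can be cited on the web and
    # linked with other resources.
    print(g.serialize(format="turtle"))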
  5. Eckert, K.: Thesaurus analysis and visualization in semantic search applications (2007) 0.00
    0.0014173498 = product of:
      0.012756148 = sum of:
        0.0067176316 = weight(_text_:in in 3222) [ClassicSimilarity], result of:
          0.0067176316 = score(doc=3222,freq=18.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.22543246 = fieldWeight in 3222, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3222)
        0.0060385168 = weight(_text_:der in 3222) [ClassicSimilarity], result of:
          0.0060385168 = score(doc=3222,freq=2.0), product of:
            0.048934754 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.021906832 = queryNorm
            0.12339935 = fieldWeight in 3222, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3222)
      0.11111111 = coord(2/18)
    
    Abstract
    The use of thesaurus-based indexing is a common approach for increasing the performance of information retrieval. In this thesis, we examine the suitability of a thesaurus for a given set of information and evaluate improvements of existing thesauri to get better search results. In this area, we focus on two aspects: 1. We demonstrate an analysis of the indexing results achieved by an automatic document indexer and the involved thesaurus. 2. We propose a method for thesaurus evaluation which is based on a combination of statistical measures and appropriate visualization techniques that support the detection of potential problems in a thesaurus. In this chapter, we give an overview of the context of our work. Next, we briefly outline the basics of thesaurus-based information retrieval and describe the Collexis Engine that was used for our experiments. In Chapter 3, we describe two experiments in automatically indexing documents in the areas of medicine and economics with corresponding thesauri and compare the results to available manual annotations. Chapter 4 describes methods for assessing thesauri and visualizing the result in terms of a treemap. We depict examples of interesting observations supported by the method and show that we actually find critical problems. We conclude with a discussion of open questions and future research in Chapter 5.
    Content
    See http://ki.informatik.uni-mannheim.de/fileadmin/publication/Eckert07Thesis.pdf. For the software, see http://www.semtinel.org. For a description of the software: https://ub-madoc.bib.uni-mannheim.de/29611/.
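
The statistical side of such a thesaurus analysis can be sketched in a few lines. Assuming the automatic indexer's output is available as (document, concept) pairs, concepts that are never assigned, or assigned to nearly every document, are candidates for closer inspection. This only illustrates the idea; the thesis combines richer measures with treemap visualization in the Semtinel tool, and the threshold below is invented.

    from collections import Counter

    def suspicious_concepts(annotations, all_concepts, num_docs, max_share=0.5):
        # annotations: iterable of (doc_id, concept_id) pairs produced
        # by an automatic indexer.
        usage = Counter(concept for _, concept in annotations)
        unused = [c for c in all_concepts if usage[c] == 0]
        overused = [c for c, n in usage.items() if n / num_docs > max_share]
        return unused, overused

    # Toy data: 4 documents indexed against a 4-concept thesaurus.
    pairs = [(1, "medicine"), (2, "medicine"), (3, "medicine"),
             (4, "medicine"), (1, "surgery")]
    unused, overused = suspicious_concepts(
        pairs, ["medicine", "surgery", "law", "art"], num_docs=4)
    print(unused)    # ['law', 'art']  -> not covered by the collection?
    print(overused)  # ['medicine']   -> too generic to discriminate?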
  6. Eckert, K.; Pfeffer, M.; Stuckenschmidt, H.: Assessing thesaurus-based annotations for semantic search applications (2008) 0.00
    0.0013797963 = product of:
      0.012418167 = sum of:
        0.005429798 = weight(_text_:in in 1528) [ClassicSimilarity], result of:
          0.005429798 = score(doc=1528,freq=6.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.1822149 = fieldWeight in 1528, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1528)
        0.006988369 = product of:
          0.020965107 = sum of:
            0.020965107 = weight(_text_:29 in 1528) [ClassicSimilarity], result of:
              0.020965107 = score(doc=1528,freq=2.0), product of:
                0.077061385 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.021906832 = queryNorm
                0.27205724 = fieldWeight in 1528, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1528)
          0.33333334 = coord(1/3)
      0.11111111 = coord(2/18)
    
    Abstract
    Statistical methods for automated document indexing are becoming an alternative to the manual assignment of keywords. We argue that, in automatic indexing, the quality of the thesaurus used as its basis is of crucial importance, both with regard to its ability to adequately cover the contents to be indexed and as a foundation for the specific indexing method used. We present an interactive tool for thesaurus evaluation that is based on a combination of statistical measures and appropriate visualisation techniques that support the detection of potential problems in a thesaurus. We describe the methods used and show that the tool supports the detection and correction of errors, leading to a better indexing result.
    Date
    25. 2.2012 13:51:29
  7. Zhang, Y.; Wu, D.; Hagen, L.; Song, I.-Y.; Mostafa, J.; Oh, S.; Anderson, T.; Shah, C.; Bishop, B.W.; Hopfgartner, F.; Eckert, K.; Federer, L.; Saltz, J.S.: Data science curriculum in the iField (2023) 0.00
    0.001301036 = product of:
      0.011709324 = sum of:
        0.0067176316 = weight(_text_:in in 964) [ClassicSimilarity], result of:
          0.0067176316 = score(doc=964,freq=18.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.22543246 = fieldWeight in 964, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=964)
        0.0049916925 = product of:
          0.0149750775 = sum of:
            0.0149750775 = weight(_text_:29 in 964) [ClassicSimilarity], result of:
              0.0149750775 = score(doc=964,freq=2.0), product of:
                0.077061385 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.021906832 = queryNorm
                0.19432661 = fieldWeight in 964, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=964)
          0.33333334 = coord(1/3)
      0.11111111 = coord(2/18)
    
    Abstract
    Many disciplines, including the broad Field of Information (iField), offer Data Science (DS) programs. There have been significant efforts exploring an individual discipline's identity and unique contributions to the broader DS education landscape. To advance DS education in the iField, the iSchool Data Science Curriculum Committee (iDSCC) was formed and charged with building and recommending a DS education framework for iSchools. This paper reports on the research process and findings of a series of studies to address important questions: What is the iField identity in the multidisciplinary DS education landscape? What is the status of DS education in iField schools? What knowledge and skills should be included in the core curriculum for iField DS education? What are the jobs available for DS graduates from the iField? What are the differences between graduate-level and undergraduate-level DS education? Answers to these questions will not only distinguish an iField approach to DS education but also define critical components of DS curriculum. The results will inform individual DS programs in the iField to develop curriculum to support undergraduate and graduate DS education in their local context.
    Date
    12. 5.2023 14:29:42
    Footnote
    Contribution to a special issue on "Data Science in the iField".
  8. Pfeffer, M.; Eckert, K.; Stuckenschmidt, H.: Visual analysis of classification systems and library collections (2008) 0.00
    3.4474907E-4 = product of:
      0.0062054833 = sum of:
        0.0062054833 = weight(_text_:in in 317) [ClassicSimilarity], result of:
          0.0062054833 = score(doc=317,freq=6.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.2082456 = fieldWeight in 317, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=317)
      0.055555556 = coord(1/18)
    
    Abstract
    In this demonstration we present a visual analysis approach that addresses both developers and users of hierarchical classification systems. The approach supports an intuitive understanding of the structure and current use in relation to a specific collection. We will also demonstrate its application for the development and management of library collections.
    Series
    Lecture notes in computer science ; 5173
  9. Kempf, A.O.; Ritze, D.; Eckert, K.; Zapilko, B.: New ways of mapping knowledge organization systems : using a semi-automatic matching procedure for building up vocabulary crosswalks (2013) 0.00
    2.585618E-4 = product of:
      0.0046541123 = sum of:
        0.0046541123 = weight(_text_:in in 989) [ClassicSimilarity], result of:
          0.0046541123 = score(doc=989,freq=6.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.1561842 = fieldWeight in 989, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=989)
      0.055555556 = coord(1/18)
    
    Abstract
    Crosswalks between different vocabularies are an indispensable prerequisite for integrated and high-quality search scenarios in distributed data environments. Offered through the web and linked with each other, they act as a central link so that users can move back and forth between different data sources available online. In the past, crosswalks between different thesauri have primarily been developed manually. In the long run, the intellectual updating of such crosswalks entails considerable personnel costs. An integration of automatic matching procedures, such as ontology matching tools, therefore seems an obvious need. On the basis of computer-generated correspondences between the Thesaurus for Economics (STW) and the Thesaurus for the Social Sciences (TheSoz), our contribution explores approaches that combine IT-assisted tools and procedures on the one hand with external quality assessment by domain experts on the other. The techniques that emerge enable semi-automatically performed vocabulary crosswalks.
  10. Kempf, A.O.; Ritze, D.; Eckert, K.; Zapilko, B.: New ways of mapping knowledge organization systems : using a semi-automatic matching procedure for building up vocabulary crosswalks (2014) 0.00
    2.488012E-4 = product of:
      0.0044784215 = sum of:
        0.0044784215 = weight(_text_:in in 1371) [ClassicSimilarity], result of:
          0.0044784215 = score(doc=1371,freq=8.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.15028831 = fieldWeight in 1371, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1371)
      0.055555556 = coord(1/18)
    
    Abstract
    Crosswalks between different vocabularies are an indispensable prerequisite for integrated, high-quality search scenarios in distributed data environments where more than one controlled vocabulary is in use. Offered through the web and linked with each other they act as a central link so that users can move back and forth between different online data sources. In the past, crosswalks between different thesauri have usually been developed manually. In the long run the intellectual updating of such crosswalks is expensive. An obvious solution would be to apply automatic matching procedures, such as the so-called ontology matching tools. On the basis of computer-generated correspondences between the Thesaurus for the Social Sciences (TSS) and the Thesaurus for Economics (STW), our contribution explores the trade-off between IT-assisted tools and procedures on the one hand and external quality evaluation by domain experts on the other hand. This paper presents techniques for semi-automatic development and maintenance of vocabulary crosswalks. The performance of multiple matching tools was first evaluated against a reference set of correct mappings, then the tools were used to generate new mappings. It was concluded that the ontology matching tools can be used effectively to speed up the work of domain experts. By optimizing the workflow, the method promises to facilitate sustained updating of high-quality vocabulary crosswalks.
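
The evaluation step described here - scoring tool-generated mappings against a reference set of correct mappings - amounts to computing precision and recall over mapping pairs. A minimal sketch, with invented identifiers standing in for real STW and TheSoz concepts:

    def precision_recall(generated, reference):
        # Mappings are (source_concept, target_concept) pairs.
        generated, reference = set(generated), set(reference)
        correct = generated & reference
        precision = len(correct) / len(generated) if generated else 0.0
        recall = len(correct) / len(reference) if reference else 0.0
        return precision, recall

    # Hypothetical crosswalk fragment between the two thesauri.
    reference = {("stw:markets", "thesoz:market"),
                 ("stw:labour", "thesoz:work")}
    generated = {("stw:markets", "thesoz:market"),
                 ("stw:prices", "thesoz:income")}
    p, r = precision_recall(generated, reference)
    print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.50 recall=0.50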