Search (893 results, page 1 of 45)

  • Active filter: type_ss:"el"
  1. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.12
    0.12134893 = product of:
      0.42472124 = sum of:
        0.10618031 = product of:
          0.31854093 = sum of:
            0.31854093 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.31854093 = score(doc=1826,freq=2.0), product of:
                0.34006837 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04011181 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.33333334 = coord(1/3)
        0.31854093 = weight(_text_:2f in 1826) [ClassicSimilarity], result of:
          0.31854093 = score(doc=1826,freq=2.0), product of:
            0.34006837 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04011181 = queryNorm
            0.93669677 = fieldWeight in 1826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
      0.2857143 = coord(2/7)
    
    Source
    http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
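The explain tree above follows Lucene's ClassicSimilarity TF-IDF formula. A minimal sketch reproducing the numbers reported for result 1 (all input values are taken from the tree itself; this illustrates the scoring arithmetic, not the search engine's actual code):

```python
import math

# Values reported in the explain tree for result 1 (doc 1826).
DOC_FREQ = 24          # documents containing the term
MAX_DOCS = 44218       # documents in the index
QUERY_NORM = 0.04011181
FIELD_NORM = 0.078125
FREQ = 2.0             # term frequency in the field

# ClassicSimilarity components.
idf = 1 + math.log(MAX_DOCS / (DOC_FREQ + 1))   # tree: 8.478011
tf = math.sqrt(FREQ)                            # tree: 1.4142135
query_weight = idf * QUERY_NORM                 # tree: 0.34006837
field_weight = tf * idf * FIELD_NORM            # tree: 0.93669677
term_score = query_weight * field_weight        # tree: 0.31854093

# Document score: the "3a" term contributes via coord(1/3); both term
# scores are then summed and scaled by coord(2/7), i.e. 2 of 7 query
# clauses matched in this document.
doc_score = (term_score * (1 / 3) + term_score) * (2 / 7)  # tree: 0.12134893

print(doc_score)
```

The same arithmetic, with different fieldNorm values, reproduces the scores of results 2 and 3.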
  2. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.10
    0.09707914 = product of:
      0.339777 = sum of:
        0.08494425 = product of:
          0.25483274 = sum of:
            0.25483274 = weight(_text_:3a in 230) [ClassicSimilarity], result of:
              0.25483274 = score(doc=230,freq=2.0), product of:
                0.34006837 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04011181 = queryNorm
                0.7493574 = fieldWeight in 230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0625 = fieldNorm(doc=230)
          0.33333334 = coord(1/3)
        0.25483274 = weight(_text_:2f in 230) [ClassicSimilarity], result of:
          0.25483274 = score(doc=230,freq=2.0), product of:
            0.34006837 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04011181 = queryNorm
            0.7493574 = fieldWeight in 230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=230)
      0.2857143 = coord(2/7)
    
    Source
    https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
  3. Shala, E.: Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.06
    0.060674466 = product of:
      0.21236062 = sum of:
        0.053090155 = product of:
          0.15927047 = sum of:
            0.15927047 = weight(_text_:3a in 4388) [ClassicSimilarity], result of:
              0.15927047 = score(doc=4388,freq=2.0), product of:
                0.34006837 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04011181 = queryNorm
                0.46834838 = fieldWeight in 4388, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4388)
          0.33333334 = coord(1/3)
        0.15927047 = weight(_text_:2f in 4388) [ClassicSimilarity], result of:
          0.15927047 = score(doc=4388,freq=2.0), product of:
            0.34006837 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04011181 = queryNorm
            0.46834838 = fieldWeight in 4388, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4388)
      0.2857143 = coord(2/7)
    
    Footnote
    See: https://www.researchgate.net/publication/271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls.
  4. Ledl, A.: Demonstration of the BAsel Register of Thesauri, Ontologies & Classifications (BARTOC) (2015) 0.06
    0.055147573 = product of:
      0.09650825 = sum of:
        0.035515495 = weight(_text_:systems in 2038) [ClassicSimilarity], result of:
          0.035515495 = score(doc=2038,freq=4.0), product of:
            0.12327058 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.04011181 = queryNorm
            0.28811008 = fieldWeight in 2038, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.046875 = fieldNorm(doc=2038)
        0.009225064 = product of:
          0.018450128 = sum of:
            0.018450128 = weight(_text_:science in 2038) [ClassicSimilarity], result of:
              0.018450128 = score(doc=2038,freq=2.0), product of:
                0.10565929 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.04011181 = queryNorm
                0.17461908 = fieldWeight in 2038, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2038)
          0.5 = coord(1/2)
        0.025998589 = weight(_text_:library in 2038) [ClassicSimilarity], result of:
          0.025998589 = score(doc=2038,freq=4.0), product of:
            0.10546913 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.04011181 = queryNorm
            0.24650425 = fieldWeight in 2038, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.046875 = fieldNorm(doc=2038)
        0.025769096 = product of:
          0.05153819 = sum of:
            0.05153819 = weight(_text_:applications in 2038) [ClassicSimilarity], result of:
              0.05153819 = score(doc=2038,freq=2.0), product of:
                0.17659263 = queryWeight, product of:
                  4.4025097 = idf(docFreq=1471, maxDocs=44218)
                  0.04011181 = queryNorm
                0.2918479 = fieldWeight in 2038, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.4025097 = idf(docFreq=1471, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2038)
          0.5 = coord(1/2)
      0.5714286 = coord(4/7)
    
    Abstract
    The BAsel Register of Thesauri, Ontologies & Classifications (BARTOC, http://bartoc.org) is a bibliographic database aiming to record metadata of as many Knowledge Organization Systems as possible. It has a faceted, responsive search interface in 20 EU languages. With more than 1,300 interdisciplinary items in 77 languages, BARTOC is the largest database of its kind, multilingual both by content and features, and it is still growing. This being said, the demonstration of BARTOC would be suitable for topic no. 10 [Multilingual and Interdisciplinary KOS applications and tools]. BARTOC has been developed by the University Library of Basel, Switzerland. It is rooted in the tradition of library and information science of collecting bibliographic records of controlled and structured vocabularies, yet in a more contemporary manner. BARTOC is based on the open source content management system Drupal 7.
    Content
    Lecture given at: 14th European Networked Knowledge Organization Systems (NKOS) Workshop, TPDL 2015 Conference in Poznan, Poland, Friday 18 September 2015. See also: http://bartoc.org/.
  5. Bittner, T.; Donnelly, M.; Winter, S.: Ontology and semantic interoperability (2006) 0.05
    0.05311727 = product of:
      0.18591045 = sum of:
        0.0502265 = weight(_text_:systems in 4820) [ClassicSimilarity], result of:
          0.0502265 = score(doc=4820,freq=8.0), product of:
            0.12327058 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.04011181 = queryNorm
            0.4074492 = fieldWeight in 4820, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.046875 = fieldNorm(doc=4820)
        0.13568395 = sum of:
          0.10307638 = weight(_text_:applications in 4820) [ClassicSimilarity], result of:
            0.10307638 = score(doc=4820,freq=8.0), product of:
              0.17659263 = queryWeight, product of:
                4.4025097 = idf(docFreq=1471, maxDocs=44218)
                0.04011181 = queryNorm
              0.5836958 = fieldWeight in 4820, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                4.4025097 = idf(docFreq=1471, maxDocs=44218)
                0.046875 = fieldNorm(doc=4820)
          0.032607578 = weight(_text_:22 in 4820) [ClassicSimilarity], result of:
            0.032607578 = score(doc=4820,freq=2.0), product of:
              0.14046472 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04011181 = queryNorm
              0.23214069 = fieldWeight in 4820, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4820)
      0.2857143 = coord(2/7)
    
    Abstract
    One of the major problems facing systems for Computer Aided Design (CAD), Architecture Engineering and Construction (AEC) and Geographic Information Systems (GIS) applications today is the lack of interoperability among the various systems. When integrating software applications, substantial difficulties can arise in translating information from one application to the other. In this paper, we focus on semantic difficulties that arise in software integration. Applications may use different terminologies to describe the same domain. Even when applications use the same terminology, they often associate different semantics with the terms. This obstructs information exchange among applications. To circumvent this obstacle, we need some way of explicitly specifying the semantics for each terminology in an unambiguous fashion. Ontologies can provide such specification. It will be the task of this paper to explain what ontologies are and how they can be used to facilitate interoperability between software systems used in computer aided design, architecture engineering and construction, and geographic information processing.
    Date
    3.12.2016 18:39:22
  6. Jacobs, I.: From chaos, order: W3C standard helps organize knowledge : SKOS Connects Diverse Knowledge Organization Systems to Linked Data (2009) 0.05
    0.045872055 = product of:
      0.080276094 = sum of:
        0.03588354 = weight(_text_:systems in 3062) [ClassicSimilarity], result of:
          0.03588354 = score(doc=3062,freq=12.0), product of:
            0.12327058 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.04011181 = queryNorm
            0.29109573 = fieldWeight in 3062, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3062)
        0.0053812875 = product of:
          0.010762575 = sum of:
            0.010762575 = weight(_text_:science in 3062) [ClassicSimilarity], result of:
              0.010762575 = score(doc=3062,freq=2.0), product of:
                0.10565929 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.04011181 = queryNorm
                0.101861134 = fieldWeight in 3062, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=3062)
          0.5 = coord(1/2)
        0.023979302 = weight(_text_:library in 3062) [ClassicSimilarity], result of:
          0.023979302 = score(doc=3062,freq=10.0), product of:
            0.10546913 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.04011181 = queryNorm
            0.22735849 = fieldWeight in 3062, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3062)
        0.015031973 = product of:
          0.030063946 = sum of:
            0.030063946 = weight(_text_:applications in 3062) [ClassicSimilarity], result of:
              0.030063946 = score(doc=3062,freq=2.0), product of:
                0.17659263 = queryWeight, product of:
                  4.4025097 = idf(docFreq=1471, maxDocs=44218)
                  0.04011181 = queryNorm
                0.17024462 = fieldWeight in 3062, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.4025097 = idf(docFreq=1471, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=3062)
          0.5 = coord(1/2)
      0.5714286 = coord(4/7)
    
    Abstract
    18 August 2009 -- Today W3C announces a new standard that builds a bridge between the world of knowledge organization systems - including thesauri, classifications, subject headings, taxonomies, and folksonomies - and the linked data community, bringing benefits to both. Libraries, museums, newspapers, government portals, enterprises, social networking applications, and other communities that manage large collections of books, historical artifacts, news reports, business glossaries, blog entries, and other items can now use Simple Knowledge Organization System (SKOS) to leverage the power of linked data. As different communities with expertise and established vocabularies use SKOS to integrate them into the Semantic Web, they increase the value of the information for everyone.
    Content
    SKOS Adapts to the Diversity of Knowledge Organization Systems A useful starting point for understanding the role of SKOS is the set of subject headings published by the US Library of Congress (LOC) for categorizing books, videos, and other library resources. These headings can be used to broaden or narrow queries for discovering resources. For instance, one can narrow a query about books on "Chinese literature" to "Chinese drama," or further still to "Chinese children's plays." Library of Congress subject headings have evolved within a community of practice over a period of decades. By now publishing these subject headings in SKOS, the Library of Congress has made them available to the linked data community, which benefits from a time-tested set of concepts to re-use in their own data. This re-use adds value ("the network effect") to the collection. When people all over the Web re-use the same LOC concept for "Chinese drama," or a concept from some other vocabulary linked to it, this creates many new routes to the discovery of information, and increases the chances that relevant items will be found. As an example of mapping one vocabulary to another, a combined effort from the STITCH, TELplus and MACS Projects provides links between LOC concepts and RAMEAU, a collection of French subject headings used by the Bibliothèque Nationale de France and other institutions. SKOS can be used for subject headings but also many other approaches to organizing knowledge. Because different communities are comfortable with different organization schemes, SKOS is designed to port diverse knowledge organization systems to the Web. "Active participation from the library and information science community in the development of SKOS over the past seven years has been key to ensuring that SKOS meets a variety of needs," said Thomas Baker, co-chair of the Semantic Web Deployment Working Group, which published SKOS. 
"One goal in creating SKOS was to provide new uses for well-established knowledge organization systems by providing a bridge to the linked data cloud." SKOS is part of the Semantic Web technology stack. Like the Web Ontology Language (OWL), SKOS can be used to define vocabularies. But the two technologies were designed to meet different needs. SKOS is a simple language with just a few features, tuned for sharing and linking knowledge organization systems such as thesauri and classification schemes. OWL offers a general and powerful framework for knowledge representation, where additional "rigor" can afford additional benefits (for instance, business rule processing). To get started with SKOS, see the SKOS Primer.
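The Library of Congress example above (narrowing "Chinese literature" to "Chinese drama" to "Chinese children's plays") boils down to a chain of skos:broader links. A toy sketch of that navigation in plain Python, using the concept labels from the text (the dict is a stand-in for real SKOS triples, not LOC's actual data):

```python
# Each concept points to its skos:broader concept (toy data from the example).
broader = {
    "Chinese children's plays": "Chinese drama",
    "Chinese drama": "Chinese literature",
}

def broaden(concept):
    """Walk skos:broader links from a concept up to the top of the chain."""
    chain = [concept]
    while chain[-1] in broader:
        chain.append(broader[chain[-1]])
    return chain

def narrower(concept):
    """Invert the broader links to find directly narrower concepts."""
    return [c for c, b in broader.items() if b == concept]

print(broaden("Chinese children's plays"))
```

A query for "Chinese literature" can then be narrowed via narrower("Chinese literature"), which is exactly the query broadening/narrowing the press release describes.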
  7. Kashyap, M.M.: Application of integrative approach in the teaching of library science techniques and application of information technology (2011) 0.04
    0.04228082 = product of:
      0.098655246 = sum of:
        0.03743662 = weight(_text_:systems in 4395) [ClassicSimilarity], result of:
          0.03743662 = score(doc=4395,freq=10.0), product of:
            0.12327058 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.04011181 = queryNorm
            0.3036947 = fieldWeight in 4395, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03125 = fieldNorm(doc=4395)
        0.013751915 = product of:
          0.02750383 = sum of:
            0.02750383 = weight(_text_:science in 4395) [ClassicSimilarity], result of:
              0.02750383 = score(doc=4395,freq=10.0), product of:
                0.10565929 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.04011181 = queryNorm
                0.26030678 = fieldWeight in 4395, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4395)
          0.5 = coord(1/2)
        0.04746671 = weight(_text_:library in 4395) [ClassicSimilarity], result of:
          0.04746671 = score(doc=4395,freq=30.0), product of:
            0.10546913 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.04011181 = queryNorm
            0.45005313 = fieldWeight in 4395, product of:
              5.477226 = tf(freq=30.0), with freq of:
                30.0 = termFreq=30.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.03125 = fieldNorm(doc=4395)
      0.42857143 = coord(3/7)
    
    Abstract
    Today many libraries are using computers and allied information technologies to improve their work methods and services. Consequently, libraries need professional staff, or need to train their present staff, who can meet the challenges posed by the introduction of these technologies. To meet the demand for such professional staff, the departments of Library and Information Science in India introduced new courses of study to train their students in the use and application of computers and other allied technologies. Some courses introduced are: Computer Application in Libraries; Systems Analysis and Design Technique; Design and Development of Computer-based Library Information Systems; Database Organisation and Design; Library Networking; Use and Application of Communication Technology, and so forth. It is felt that the computer- and information-technology-oriented courses need to be restructured, revised, and more harmoniously blended with the traditional mainstream courses of the library and information science discipline. We must alter the strategy of teaching library techniques, such as classification, cataloguing, and library procedures, and the techniques of designing computer-based library information systems and services. The use and application of these techniques become interwoven when we shift from a manually operated library environment to a computer-based one. As such, it becomes necessary to follow an integrative approach when we teach these techniques to students of library and information science, or when we train library staff to use and apply them in designing, developing and implementing computer-based library information systems and services. In the following sections of this paper, we shall outline the correspondence between certain concepts and techniques developed by computer specialists and those developed by librarians in their respective domains. We make use of the techniques of both domains in the design and implementation of computer-based library information systems and services. As such, it is essential that lessons covering these supplementary and complementary techniques be integrated.
    Source
    http://lisuncg.net/icl/blogs-news/madan-mohan-kashyap/2011/01/20/application-integrative-approach-teaching-library-science-
  8. Chen, H.: Semantic research for digital libraries (1999) 0.04
    0.041723944 = product of:
      0.09735587 = sum of:
        0.04349742 = weight(_text_:systems in 1247) [ClassicSimilarity], result of:
          0.04349742 = score(doc=1247,freq=6.0), product of:
            0.12327058 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.04011181 = queryNorm
            0.35286134 = fieldWeight in 1247, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.046875 = fieldNorm(doc=1247)
        0.009225064 = product of:
          0.018450128 = sum of:
            0.018450128 = weight(_text_:science in 1247) [ClassicSimilarity], result of:
              0.018450128 = score(doc=1247,freq=2.0), product of:
                0.10565929 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.04011181 = queryNorm
                0.17461908 = fieldWeight in 1247, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1247)
          0.5 = coord(1/2)
        0.044633385 = product of:
          0.08926677 = sum of:
            0.08926677 = weight(_text_:applications in 1247) [ClassicSimilarity], result of:
              0.08926677 = score(doc=1247,freq=6.0), product of:
                0.17659263 = queryWeight, product of:
                  4.4025097 = idf(docFreq=1471, maxDocs=44218)
                  0.04011181 = queryNorm
                0.5054954 = fieldWeight in 1247, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.4025097 = idf(docFreq=1471, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1247)
          0.5 = coord(1/2)
      0.42857143 = coord(3/7)
    
    Abstract
    In this era of the Internet and distributed, multimedia computing, new and emerging classes of information systems applications have swept into the lives of office workers and people in general. From digital libraries, multimedia systems, geographic information systems, and collaborative computing to electronic commerce, virtual reality, and electronic video arts and games, these applications have created tremendous opportunities for information and computer science researchers and practitioners. As applications become more pervasive, pressing, and diverse, several well-known information retrieval (IR) problems have become even more urgent. Information overload, a result of the ease of information creation and transmission via the Internet and WWW, has become more troublesome (e.g., even stockbrokers and elementary school students, heavily exposed to various WWW search engines, are versed in such IR terminology as recall and precision). Significant variations in database formats and structures, the richness of information media (text, audio, and video), and an abundance of multilingual information content also have created severe information interoperability problems -- structural interoperability, media interoperability, and multilingual interoperability.
  9. Si, L.E.; O'Brien, A.; Probets, S.: Integration of distributed terminology resources to facilitate subject cross-browsing for library portal systems (2009) 0.04
    0.04144902 = product of:
      0.07253578 = sum of:
        0.029596249 = weight(_text_:systems in 3628) [ClassicSimilarity], result of:
          0.029596249 = score(doc=3628,freq=4.0), product of:
            0.12327058 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.04011181 = queryNorm
            0.24009174 = fieldWeight in 3628, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3628)
        0.0076875538 = product of:
          0.0153751075 = sum of:
            0.0153751075 = weight(_text_:science in 3628) [ClassicSimilarity], result of:
              0.0153751075 = score(doc=3628,freq=2.0), product of:
                0.10565929 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.04011181 = queryNorm
                0.1455159 = fieldWeight in 3628, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3628)
          0.5 = coord(1/2)
        0.02166549 = weight(_text_:library in 3628) [ClassicSimilarity], result of:
          0.02166549 = score(doc=3628,freq=4.0), product of:
            0.10546913 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.04011181 = queryNorm
            0.2054202 = fieldWeight in 3628, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3628)
        0.013586491 = product of:
          0.027172983 = sum of:
            0.027172983 = weight(_text_:22 in 3628) [ClassicSimilarity], result of:
              0.027172983 = score(doc=3628,freq=2.0), product of:
                0.14046472 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04011181 = queryNorm
                0.19345059 = fieldWeight in 3628, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3628)
          0.5 = coord(1/2)
      0.5714286 = coord(4/7)
    
    Abstract
    Purpose: To develop a prototype middleware framework between different terminology resources in order to provide a subject cross-browsing service for library portal systems. Design/methodology/approach: Nine terminology experts were interviewed to collect appropriate knowledge to support the development of a theoretical framework for the research. Based on this, a simplified software-based prototype system was constructed incorporating the knowledge acquired. The prototype involved mappings between the computer science schedule of the Dewey Decimal Classification (which acted as a spine) and two controlled vocabularies, UKAT and the ACM Computing Classification. Subsequently, six further experts in the field were invited to evaluate the prototype system and provide feedback to improve the framework. Findings: The major findings showed that, given the large variety of terminology resources distributed on the web, the proposed middleware service is essential to integrate the different terminology resources technically and semantically in order to facilitate subject cross-browsing. A set of recommendations is also made outlining the important approaches and features that support such a cross-browsing middleware service.
    Content
    This paper is a pre-print version presented at the ISKO UK 2009 conference, 22-23 June, prior to peer review and editing. For published proceedings see special issue of Aslib Proceedings journal.
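The spine-based mapping the abstract describes, routing between UKAT and ACM CCS terms through a shared Dewey class, can be sketched roughly as follows (the notations and term pairings here are invented placeholders, not the project's actual mappings):

```python
# Hypothetical mappings from each vocabulary into the DDC spine.
ukat_to_ddc = {"Computer programming": "005.1"}
acm_to_ddc = {"Software and its engineering": "005.1"}

def cross_browse(term):
    """Find terms in the other vocabulary that map to the same DDC class."""
    ddc = ukat_to_ddc.get(term) or acm_to_ddc.get(term)
    if ddc is None:
        return []
    matches = [t for t, d in ukat_to_ddc.items() if d == ddc]
    matches += [t for t, d in acm_to_ddc.items() if d == ddc]
    return [t for t in matches if t != term]

print(cross_browse("Computer programming"))
```

The middleware's job, on this toy model, is maintaining the two mapping tables and answering cross_browse-style lookups so a portal can offer subject cross-browsing across vocabularies.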
  10. Banerjee, K.; Johnson, M.: Improving access to archival collections with automated entity extraction (2015) 0.04
    
    Abstract
    The complexity and diversity of archival resources make constructing rich metadata records time consuming and expensive, which in turn limits access to these valuable materials. However, significant automation of the metadata creation process would dramatically reduce the cost of providing access points, improve access to individual resources, and establish connections between resources that would otherwise remain unknown. Using a case study at Oregon Health & Science University as a lens to examine the conceptual and technical challenges associated with automated extraction of access points, we discuss using publicly accessible APIs to extract entities (e.g., people, places, and concepts) from digital and digitized objects. We describe why Linked Open Data is not well suited for a use case such as ours. We conclude with recommendations about how this method can be used in archives as well as for other library applications.
    Source
    Code4Lib journal. Issue 29 (2015) [http://journal.code4lib.org/issues/issues/issue29]
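The abstract above describes extracting entities (people, places, concepts) from digitized objects to serve as access points. Since the record does not name a specific extraction service, the following is only a minimal stand-in sketch: a rule-based matcher that checks capitalized phrases against small gazetteers, where a production system would call a public NER API instead. All names and sample data are invented for illustration.

```python
import re

# Hypothetical gazetteers standing in for an external entity-extraction API.
PEOPLE = {"John McLoughlin"}
PLACES = {"Oregon", "Portland"}

def extract_access_points(text):
    """Return candidate access points found in `text`, grouped by type,
    by matching capitalized phrases against the gazetteers above."""
    candidates = re.findall(r"[A-Z][a-zA-Z]+(?: [A-Z][a-zA-Z]+)*", text)
    found = {"people": set(), "places": set()}
    for phrase in candidates:
        if phrase in PEOPLE:
            found["people"].add(phrase)
        if phrase in PLACES:
            found["places"].add(phrase)
    return found

entities = extract_access_points(
    "Letters of John McLoughlin, written in Portland, Oregon."
)
```

The extracted sets could then be mapped to controlled access points in a finding aid, which is the step the paper automates at scale.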
  11. Plotkin, R.C.; Schwartz, M.S.: Data modeling for news clip archive : a prototype solution (1997) 0.04
    
    Abstract
    Film, videotape and multimedia archive systems must address the issues of editing, authoring and searching at the media (i.e., tape) or sub-media (i.e., scene) level, in addition to the traditional inventory management capabilities associated with the physical media. This paper describes a prototype database design for the storage, search and retrieval of multimedia and its related information, and provides a process by which legacy data can be imported into this schema. The prototype is named Comix, the Continuous Media Index. An implementation of such a digital library solution incorporates multimedia objects, hierarchical relationships and timecode in addition to traditional attribute data, and present video and multimedia archive systems are easily migrated to this architecture. Comix was implemented for a videotape archiving system; it was written for, and implemented using, IBM Digital Library version 1.0. A derivative of Comix is currently in development for customer-specific applications. Principles of the Comix design, as well as the importation methods, are not specific to the underlying systems used.
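The abstract describes a schema combining inventory-level media records with sub-media scenes carrying timecodes and hierarchical relationships. The actual Comix schema is not given in this record, so the table and column names below are illustrative assumptions only, sketched with SQLite:

```python
import sqlite3

# A minimal sketch of the kind of schema the abstract describes: physical
# media, sub-media scenes with timecode ranges, and a parent link for
# hierarchical relationships. Names are assumptions, not the Comix schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE media (
    media_id  INTEGER PRIMARY KEY,
    barcode   TEXT UNIQUE,   -- inventory-level attribute data
    format    TEXT           -- e.g. videotape, film
);
CREATE TABLE scene (
    scene_id    INTEGER PRIMARY KEY,
    media_id    INTEGER REFERENCES media(media_id),
    parent_id   INTEGER REFERENCES scene(scene_id),  -- scene hierarchy
    tc_in       TEXT,        -- SMPTE timecode in
    tc_out      TEXT,        -- SMPTE timecode out
    description TEXT
);
""")
conn.execute("INSERT INTO media VALUES (1, 'VT-0001', 'videotape')")
conn.execute(
    "INSERT INTO scene VALUES "
    "(1, 1, NULL, '00:01:00:00', '00:02:30:00', 'opening news clip')"
)
rows = conn.execute(
    "SELECT description FROM scene WHERE media_id = 1"
).fetchall()
```

Searching at the scene level then becomes an ordinary query over `scene`, while inventory management stays on `media`.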
  12. Haslhofer, B.: Uniform SPARQL access to interlinked (digital library) sources (2007) 0.04
    
    Content
    Presentation given at "Networked Knowledge Organization Systems and Services: The 6th European Networked Knowledge Organization Systems (NKOS) Workshop", held at the 11th ECDL Conference, Budapest, Hungary, September 21st, 2007.
    Date
    26.12.2011 13:22:46
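The presentation's topic is uniform SPARQL access to interlinked sources. The record gives no implementation detail, so the following is only a library-free sketch of the mediation idea: the same query string is dispatched to several endpoint callables (stand-ins for real HTTP SPARQL endpoints) and the result bindings are merged.

```python
# Endpoint stubs standing in for real SPARQL HTTP endpoints; the sample
# bindings are invented for illustration.
def endpoint_a(query):
    return [{"title": "Dublin Core record"}]

def endpoint_b(query):
    return [{"title": "MARC-derived record"}]

def federated_query(query, endpoints):
    """Send the same SPARQL query to every endpoint and merge bindings."""
    results = []
    for ep in endpoints:
        results.extend(ep(query))
    return results

merged = federated_query(
    "SELECT ?title WHERE { ?s <http://purl.org/dc/terms/title> ?title } ",
    [endpoint_a, endpoint_b],
)
```

A real mediator would additionally rewrite the query per source vocabulary; SPARQL 1.1 later standardized part of this pattern with the SERVICE keyword.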
  13. Furner, J.: User tagging of library resources : toward a framework for system evaluation (2007) 0.04
    
    Abstract
    Although user tagging of library resources shows substantial promise as a means of improving the quality of users' access to those resources, several important questions about the level and nature of the warrant for basing retrieval tools on user tagging are yet to receive full consideration by library practitioners and researchers. Among these is the simple evaluative question: What, specifically, are the factors that determine whether or not user-tagging services will be successful? If success is to be defined in terms of the effectiveness with which systems perform the particular functions expected of them (rather than simply in terms of popularity), an understanding is needed both of the multifunctional nature of tagging tools, and of the complex nature of users' mental models of that multifunctionality. In this paper, a conceptual framework is developed for the evaluation of systems that integrate user tagging with more traditional methods of library resource description.
    Content
    Paper presented at: World Library and Information Congress: 73rd IFLA General Conference and Council, 19-23 August 2007, Durban, South Africa. - 157 - Classification and Indexing
    Date
    26.12.2011 13:29:31
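The abstract argues that tagging systems should be evaluated by how effectively they perform particular functions, retrieval among them. The paper itself proposes a conceptual framework rather than code, so this is only an illustrative sketch of one such function-based measure: retrieving resources by tag and scoring the result against relevance judgments (all data invented).

```python
# Invented resource-to-tag assignments for illustration.
tags = {
    "r1": {"cataloging", "libraries"},
    "r2": {"folksonomy", "tagging"},
    "r3": {"tagging", "evaluation"},
}

def retrieve_by_tag(tag):
    """Return the set of resources carrying the given tag."""
    return {r for r, ts in tags.items() if tag in ts}

def precision_recall(retrieved, relevant):
    """Score a retrieved set against a relevance judgment set."""
    if not retrieved:
        return 0.0, 0.0
    hits = len(retrieved & relevant)
    return hits / len(retrieved), hits / len(relevant)

p, r = precision_recall(retrieve_by_tag("tagging"), relevant={"r2", "r4"})
```

Under the paper's framing, this is one cell of a larger evaluation matrix: the same tagging data would be scored separately for each function users expect of it.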
  14. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2012) 0.04
    
    Abstract
    This paper reports on the second part of an initiative by the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine whether classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The paper discusses the relationships between entities (of the same type or of different types) in the context of classification systems that involve multiple translations and/or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the DDC (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) the Chinese Library Classification. Use cases of conceptual models in practice are also discussed.
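FRSAD models a subject as a *thema* carrying one or more *nomens* (appellations). A minimal sketch of how a single thema can hold notations and captions from several schemes and languages, as in the paper's DDC translation scenario; the sample values and scheme labels are illustrative assumptions, not quotations from DDC:

```python
from dataclasses import dataclass, field

@dataclass
class Nomen:
    value: str       # the appellation itself (notation or caption)
    scheme: str      # the scheme it belongs to
    language: str    # language tag; "zxx" = no linguistic content

@dataclass
class Thema:
    nomens: list = field(default_factory=list)

    def label(self, language):
        """Return the first nomen in the requested language, if any."""
        for n in self.nomens:
            if n.language == language:
                return n.value
        return None

# One thema, three nomens: a language-neutral notation plus captions
# in two languages (sample values are illustrative).
libraries = Thema(nomens=[
    Nomen("020", "DDC 22", "zxx"),
    Nomen("Library & information sciences", "DDC 22 caption", "en"),
    Nomen("Bibliotek", "DDC 22 caption, Swedish translation", "sv"),
])
```

The multilingual discovery problem the paper studies then reduces to resolving any of a thema's nomens to the same underlying entity.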
  15. Shiri, A.: Trend analysis in social tagging : an LIS perspective (2007) 0.04
    
    Abstract
    The aim of the present study was to identify and categorize social tagging trends and developments as revealed by the analysis of library and information science scholarly and professional literature.
    Content
    Präsentation während der Veranstaltung "Networked Knowledge Organization Systems and Services: The 6th European Networked Knowledge Organization Systems (NKOS) Workshop, Workshop at the 11th ECDL Conference, Budapest, Hungary, September 21st 2007".
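The study categorizes tagging trends by analyzing the scholarly and professional literature. A minimal sketch of the kind of counting such a trend analysis rests on: term frequency in titles per year (the sample titles are invented for illustration, not drawn from the study's corpus).

```python
from collections import Counter

# Invented (year, title) pairs standing in for a literature corpus.
titles = [
    (2005, "Folksonomy and the rise of social tagging"),
    (2006, "Social tagging in libraries"),
    (2006, "Tagging versus controlled vocabularies"),
    (2007, "Evaluating social tagging systems"),
]

def term_trend(term, corpus):
    """Return a Counter mapping year -> number of titles containing term."""
    trend = Counter()
    for year, title in corpus:
        if term.lower() in title.lower():
            trend[year] += 1
    return trend

trend = term_trend("tagging", titles)
```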
  16. Priss, U.: Description logic and faceted knowledge representation (1999) 0.03
    
    Abstract
    The term "facet" was introduced into the field of library classification systems by Ranganathan in the 1930's [Ranganathan, 1962]. A facet is a viewpoint or aspect. In contrast to traditional classification systems, faceted systems are modular in that a domain is analyzed in terms of baseline facets which are then synthesized. In this paper, the term "facet" is used in a broader meaning. Facets can describe different aspects on the same level of abstraction or the same aspect on different levels of abstraction. The notion of facets is related to database views, multicontexts and conceptual scaling in formal concept analysis [Ganter and Wille, 1999], polymorphism in object-oriented design, aspect-oriented programming, views and contexts in description logic and semantic networks. This paper presents a definition of facets in terms of faceted knowledge representation that incorporates the traditional narrower notion of facets and potentially facilitates translation between different knowledge representation formalisms. A goal of this approach is a modular, machine-aided knowledge base design mechanism. A possible application is faceted thesaurus construction for information retrieval and data mining. Reasoning complexity depends on the size of the modules (facets). A more general analysis of complexity will be left for future research.
    Date
    22. 1.2016 17:30:31
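The abstract's starting point is Ranganathan-style synthesis: a compound subject is built by choosing one focus per facet and combining the foci in citation order. A minimal sketch of that mechanism; the facet names, foci, and notations are invented for illustration, not taken from any real scheme.

```python
# Illustrative facet vocabularies: each facet maps a focus to a notation.
FACETS = {
    "personality": {"libraries": "2"},
    "matter": {"catalogues": "55"},
    "energy": {"classification": "51"},
}
CITATION_ORDER = ["personality", "matter", "energy"]

def synthesize(foci):
    """Combine one focus per facet, in citation order, into a
    colon-separated class notation; absent facets are skipped."""
    parts = []
    for facet in CITATION_ORDER:
        focus = foci.get(facet)
        if focus is not None:
            parts.append(FACETS[facet][focus])
    return ":".join(parts)

notation = synthesize({"personality": "libraries", "energy": "classification"})
```

The paper's broader notion of facets would generalize `FACETS` from flat vocabularies to views at different levels of abstraction.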
  17. Dobratz, S.; Neuroth, H.: nestor: Network of Expertise in long-term STOrage of digital Resources : a digital preservation initiative for Germany (2004) 0.03
    
    Abstract
    Sponsored by the German Ministry of Education and Research with funding of 800,000 EUR, the German Network of Expertise in long-term storage of digital resources (nestor) began in June 2003 as a cooperative effort of six partners representing different players within the field of long-term preservation. The partners include:
    * The German National Library (Die Deutsche Bibliothek) as the lead institution for the project
    * The State and University Library of Lower Saxony Göttingen (Staats- und Universitätsbibliothek Göttingen)
    * The Computer and Media Service and the University Library of Humboldt-University Berlin (Humboldt-Universität zu Berlin)
    * The Bavarian State Library in Munich (Bayerische Staatsbibliothek)
    * The Institute for Museum Information in Berlin (Institut für Museumskunde)
    * The General Directorate of the Bavarian State Archives (GDAB)
    As in other countries, long-term preservation of digital resources has become an important issue in Germany in recent years. Nevertheless, coming to agreement with institutions throughout the country to cooperate on tasks for a long-term preservation effort has taken a great deal of effort. Although considerable attention has been paid to the preservation of physical media like CD-ROMs, technologies for the long-term preservation of digital publications such as e-books, digital dissertations, websites, etc., are still lacking. Considering the importance of the task within the federal structure of Germany, with each federal state responsible for its own science and culture activities, it is obvious that a successful solution to these issues in Germany must be a cooperative one. Since 2000, there have been discussions about strategies and techniques for long-term archiving of digital information, particularly within the distributed structure of Germany's library and archival institutions.
A key part of all the previous activities was focusing on using existing standards and analyzing the context in which those standards would be applied. One such activity, the Digital Library Forum Planning Project, was done on behalf of the German Ministry of Education and Research in 2002, where the vision of a digital library in 2010 that can meet the changing and increasing needs of users was developed and described in detail, including the infrastructure required and how the digital library would work technically, what it would contain and how it would be organized. The outcome was a strategic plan for certain selected specialist areas, where, amongst other topics, a future call for action for long-term preservation was defined, described and explained against the background of practical experience.
    As a follow-up, in 2002 the nestor long-term archiving working group provided an initial spark towards planning and organising coordinated activities concerning the long-term preservation and long-term availability of digital documents in Germany. This resulted in a workshop, held 29-30 October 2002, where major tasks were discussed. Influenced by the demands and progress of the nestor network, the participants reached agreement to start work on application-oriented projects and to address the following topics:
    * Overlapping problems
      o Collection and preservation of digital objects (selection criteria, preservation policy)
      o Definition of criteria for trusted repositories
      o Creation of models of cooperation, etc.
    * Digital objects production process
      o Analysis of potential conflicts between production and long-term preservation
      o Documentation of existing document models and recommendations for standard models to be used for long-term preservation
      o Identification systems for digital objects, etc.
    * Transfer of digital objects
      o Object data and metadata
      o Transfer protocols and interoperability
      o Handling of different document types, e.g. dynamic publications, etc.
    * Long-term preservation of digital objects
      o Design and prototype implementation of depot systems for digital objects (OAIS was chosen as the best functional model.)
      o Authenticity
      o Functional requirements on user interfaces of a depot system
      o Identification systems for digital objects, etc.
    At the end of the workshop, participants decided to establish a permanent distributed infrastructure for long-term preservation and long-term accessibility of digital resources in Germany, comparable, e.g., to the Digital Preservation Coalition in the UK. The initial phase, nestor, is now being set up by the above-mentioned 3-year funding project.
  18. Assem, M. van: Converting and integrating vocabularies for the Semantic Web (2010) 0.03
    
    Abstract
    This thesis focuses on conversion of vocabularies for representation and integration of collections on the Semantic Web. A secondary focus is how to represent metadata schemas (RDF Schemas representing metadata element sets) such that they interoperate with vocabularies. The primary domain in which we operate is that of cultural heritage collections. The background worldview in which a solution is sought is that of the Semantic Web research paradigm, with its associated theories, methods, tools and use cases. In other words, we assume the Semantic Web is in principle able to provide the context to realize interoperable collections. Interoperability is dependent on the interplay between representations and the applications that use them. We mean applications in the widest sense, such as "search" and "annotation". These applications or tasks are often present in software applications, such as the E-Culture application. It is therefore necessary that applications' requirements on the vocabulary representation are met. This leads us to formulate the following problem statement: HOW CAN EXISTING VOCABULARIES BE MADE AVAILABLE TO SEMANTIC WEB APPLICATIONS?
    We refine the problem statement into three research questions. The first two focus on the problem of conversion of a vocabulary to a Semantic Web representation from its original format. Conversion of a vocabulary to a representation in a Semantic Web language is necessary to make the vocabulary available to Semantic Web applications. In the last question we focus on integration of collection metadata schemas in a way that allows for vocabulary representations as produced by our methods. Academic dissertation for the degree of Doctor at the Vrije Universiteit Amsterdam, Dutch Research School for Information and Knowledge Systems.
    Date
    29. 7.2011 14:44:56
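The thesis concerns converting vocabularies from their original formats into Semantic Web representations. SKOS is the usual target for such conversions; as a minimal, library-free sketch, the function below turns a flat (id, label, broader) term list into SKOS triples serialized as Turtle lines. The namespace URI and input format are illustrative assumptions, not the thesis's actual conversion method.

```python
BASE = "http://example.org/vocab/"  # illustrative namespace

def to_skos(terms):
    """terms: list of (term_id, label, broader_id_or_None).
    Returns a Turtle serialization of the corresponding SKOS concepts."""
    lines = ["@prefix skos: <http://www.w3.org/2004/02/skos/core#> ."]
    for term_id, label, broader in terms:
        uri = f"<{BASE}{term_id}>"
        lines.append(f"{uri} a skos:Concept ;")
        # End the statement here unless a broader triple follows.
        lines.append(f'    skos:prefLabel "{label}"@en'
                     + (" ;" if broader else " ."))
        if broader:
            lines.append(f"    skos:broader <{BASE}{broader}> .")
    return "\n".join(lines)

turtle = to_skos([("1", "Art", None), ("2", "Painting", "1")])
```

Real conversions, as the thesis discusses, also have to decide how source-specific structure (scope notes, compound terms, non-hierarchical links) maps onto the target model.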
  19. BARTOC : the BAsel Register of Thesauri, Ontologies & Classifications 0.03
    
    Abstract
    BARTOC, http://bartoc.org, is a bibliographic database that provides metadata on as many Knowledge Organization Systems (KOS) as possible and offers a faceted search interface with a responsive web design in 20 languages. With more than 1100 interdisciplinary items (thesauri, ontologies, classifications, glossaries, controlled vocabularies, taxonomies) in 70 languages, BARTOC is the largest database of its kind, multilingual in both content and features, and is still growing. Metadata are being enriched with DDC numbers down to the third level and with subject headings from EuroVoc, the EU's multilingual thesaurus. BARTOC has been developed by the University Library of Basel, Switzerland, and continues the tradition of library and information science of collecting bibliographic records of controlled and structured vocabularies.
  20. Shen, M.; Liu, D.-R.; Huang, Y.-S.: Extracting semantic relations to enrich domain ontologies (2012) 0.03
    
    Abstract
    Domain ontologies facilitate the organization, sharing and reuse of domain knowledge, and enable various vertical domain applications to operate successfully. Most methods for automatically constructing ontologies focus on taxonomic relations, such as is-kind-of and is-part-of relations. However, much of the domain-specific semantics is ignored. This work proposes a semi-unsupervised approach for extracting semantic relations from domain-specific text documents. The approach effectively utilizes text mining and existing taxonomic relations in domain ontologies to discover candidate keywords that can represent semantic relations. A preliminary experiment on the natural science domain (Taiwan K9 education) indicates that the proposed method yields valuable recommendations. This work enriches domain ontologies by adding distilled semantics.
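    The mining step the abstract alludes to can be sketched as follows: for pairs of concepts already present in the domain ontology, collect the words occurring between their mentions in a sentence as candidate relation keywords. This is a hedged illustration, not the authors' algorithm; the sentences, concept set, and stopword list are invented for the example.

```python
# Hedged sketch of candidate relation-keyword mining between known concepts.
# Concepts, sentences, and stopwords are illustrative, not data from the paper.
from collections import Counter
from itertools import permutations

STOPWORDS = {"the", "a", "an", "of", "in", "is", "are", "to", "its"}

def candidate_relations(sentences, concepts):
    """Count non-stopwords appearing between two known concepts in a sentence."""
    counts = Counter()
    for sentence in sentences:
        tokens = sentence.lower().split()
        for c1, c2 in permutations(concepts, 2):
            if c1 in tokens and c2 in tokens and tokens.index(c1) < tokens.index(c2):
                between = tokens[tokens.index(c1) + 1 : tokens.index(c2)]
                counts.update(w for w in between if w not in STOPWORDS)
    return counts

sentences = [
    "A mammal nurses its offspring",
    "Every mammal breathes air through lungs",
]
concepts = {"mammal", "offspring", "air"}
print(candidate_relations(sentences, concepts).most_common(3))
```

    A real system would rank these candidates (e.g. by frequency or association strength) and have a domain expert confirm which keywords denote genuine semantic relations before adding them to the ontology.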
    Source
    Journal of Intelligent Information Systems

Years

Languages

  • e 634
  • d 231
  • a 8
  • i 4
  • el 2
  • f 2
  • nl 1
  • sp 1

Types

  • a 432
  • i 27
  • r 18
  • s 14
  • m 13
  • x 12
  • n 8
  • p 7
  • b 6

Themes