Search (38 results, page 1 of 2)

  • theme_ss:"Semantische Interoperabilität"
  • type_ss:"el"
  • year_i:[2000 TO 2010}
  1. Haslhofer, B.: Uniform SPARQL access to interlinked (digital library) sources (2007) 0.03
    Abstract
    In this presentation, we focus on a solution for providing uniform access to Digital Libraries and other online services. To enable uniform query access to heterogeneous sources, we must provide metadata interoperability in a way that allows a query language - in this case SPARQL - to cope with the incompatibility of the metadata in the various sources without changing their existing information models.
    Date
    26.12.2011 13:22:46
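The mediation approach described in the abstract above can be sketched as a query-rewriting layer: a common vocabulary of query fields is translated into each source's native metadata fields, so the sources' existing information models stay untouched. A minimal Python sketch; the source names and field mappings are invented:

```python
# Common query fields mapped onto each source's native metadata
# fields; the sources themselves remain unchanged.
FIELD_MAPPINGS = {
    "repository_a": {"creator": "dc:creator", "title": "dc:title"},
    "repository_b": {"creator": "marc:100a", "title": "marc:245a"},
}

def rewrite_query(source, query):
    """Rewrite {common_field: value} into source-native field names."""
    mapping = FIELD_MAPPINGS[source]
    return {mapping[field]: value for field, value in query.items()}

# One logical query, two source-specific forms.
native_a = rewrite_query("repository_a", {"title": "digital libraries"})
native_b = rewrite_query("repository_b", {"title": "digital libraries"})
```

A real mediator would rewrite SPARQL graph patterns rather than flat field/value pairs, but the mapping step is the same in spirit.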
  2. Heflin, J.; Hendler, J.: Semantic interoperability on the Web (2000) 0.03
    Abstract
    XML will have a profound impact on the way data is exchanged on the Internet. An important feature of this language is the separation of content from presentation, which makes it easier to select and/or reformat the data. However, due to the likelihood of numerous industry and domain specific DTDs, those who wish to integrate information will still be faced with the problem of semantic interoperability. In this paper we discuss why this problem is not solved by XML, and then discuss why the Resource Description Framework is only a partial solution. We then present the SHOE language, which we feel has many of the features necessary to enable a semantic web, and describe an existing set of tools that make it easy to use the language.
    Date
    11. 5.2013 19:22:18
    Type
    a
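The interoperability gap the abstract describes can be made concrete with two records that carry the same meaning under different DTDs: XML alone says nothing about how their elements relate, so the semantic mapping must be supplied separately. A sketch in Python with invented element names:

```python
import xml.etree.ElementTree as ET

# Two records with identical meaning but different DTDs.
RECORD_A = "<book><author>Heflin</author><name>SHOE</name></book>"
RECORD_B = "<item><creator>Heflin</creator><title>SHOE</title></item>"

# Per-DTD mapping of native element names onto shared properties --
# the piece that XML itself does not supply.
SCHEMAS = {
    "book": {"author": "creator", "name": "title"},
    "item": {"creator": "creator", "title": "title"},
}

def to_common(xml_text):
    """Normalize one record into {shared_property: value}."""
    root = ET.fromstring(xml_text)
    mapping = SCHEMAS[root.tag]
    return {mapping[child.tag]: child.text for child in root}

# Both records collapse to the same semantics.
assert to_common(RECORD_A) == to_common(RECORD_B)
```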
  3. Bittner, T.; Donnelly, M.; Winter, S.: Ontology and semantic interoperability (2006) 0.02
    Abstract
    One of the major problems facing systems for Computer Aided Design (CAD), Architecture Engineering and Construction (AEC) and Geographic Information Systems (GIS) applications today is the lack of interoperability among the various systems. When integrating software applications, substantial difficulties can arise in translating information from one application to the other. In this paper, we focus on semantic difficulties that arise in software integration. Applications may use different terminologies to describe the same domain. Even when applications use the same terminology, they often associate different semantics with the terms. This obstructs information exchange among applications. To circumvent this obstacle, we need some way of explicitly specifying the semantics for each terminology in an unambiguous fashion. Ontologies can provide such specification. It will be the task of this paper to explain what ontologies are and how they can be used to facilitate interoperability between software systems used in computer aided design, architecture engineering and construction, and geographic information processing.
    Date
    3.12.2016 18:39:22
    Type
    a
  4. Si, L.E.; O'Brien, A.; Probets, S.: Integration of distributed terminology resources to facilitate subject cross-browsing for library portal systems (2009) 0.01
    Abstract
    Purpose: To develop a prototype middleware framework between different terminology resources in order to provide a subject cross-browsing service for library portal systems.
    Design/methodology/approach: Nine terminology experts were interviewed to collect appropriate knowledge to support the development of a theoretical framework for the research. Based on this, a simplified software-based prototype system was constructed incorporating the knowledge acquired. The prototype involved mappings between the computer science schedule of the Dewey Decimal Classification (which acted as a spine) and two controlled vocabularies, UKAT and the ACM Computing Classification. Subsequently, six further experts in the field were invited to evaluate the prototype system and provide feedback to improve the framework.
    Findings: The major findings showed that, given the large variety of terminology resources distributed on the web, the proposed middleware service is essential to integrate the different terminology resources technically and semantically in order to facilitate subject cross-browsing. A set of recommendations is also made outlining the important approaches and features that support such a cross-browsing middleware service.
    Content
    This paper is a pre-print version presented at the ISKO UK 2009 conference, 22-23 June, prior to peer review and editing. For published proceedings see special issue of Aslib Proceedings journal.
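The spine-based design in this prototype can be sketched as composing two mappings through the DDC: terms from each vocabulary point at a DDC notation, and cross-browsing links are derived by joining on that notation. The terms and notations below are illustrative, not the project's actual mappings:

```python
# Each vocabulary is mapped to the spine (DDC notations); term-to-term
# links between the vocabularies are derived by joining on the spine.
UKAT_TO_DDC = {"Computer programming": "005.1"}
ACM_TO_DDC = {"Software engineering": "005.1"}

def compose(a_to_spine, b_to_spine):
    """Derive cross-browsing links: vocabulary A -> spine <- vocabulary B."""
    spine_to_b = {}
    for term, notation in b_to_spine.items():
        spine_to_b.setdefault(notation, []).append(term)
    return {term: spine_to_b.get(notation, [])
            for term, notation in a_to_spine.items()}

links = compose(UKAT_TO_DDC, ACM_TO_DDC)
```

Using one spine keeps the number of mappings linear in the number of vocabularies, instead of mapping every pair directly.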
  5. Dini, L.: CACAO : multilingual access to bibliographic records (2007) 0.01
    Content
    Talk given at the workshop "Extending the multilingual capacity of The European Library in the EDL project", Stockholm, Swedish National Library, 22-23 November 2007.
  6. Vizine-Goetz, D.; Hickey, C.; Houghton, A.; Thompson, R.: Vocabulary mapping for terminology services (2004) 0.01
    Abstract
    The paper describes a project to add value to controlled vocabularies by making inter-vocabulary associations. A methodology for mapping terms from one vocabulary to another is presented in the form of a case study applying the approach to the Educational Resources Information Center (ERIC) Thesaurus and the Library of Congress Subject Headings (LCSH). Our approach to mapping involves encoding vocabularies according to Machine-Readable Cataloging (MARC) standards, machine matching of vocabulary terms, and categorizing candidate mappings by likelihood of valid mapping. Mapping data is then stored as machine links. Vocabularies with associations to other schemes will be a key component of Web-based terminology services. The paper briefly describes how the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) is used to provide access to a vocabulary with mappings.
    Footnote
    Part of a special issue of: Journal of Digital Information. 4(2004) no.4.
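The machine-matching step described above, with candidate mappings categorized by likelihood of being valid, can be sketched as a two-tier comparison: exact string match versus match after normalization. The terms below are invented examples, not actual ERIC/LCSH data:

```python
# Candidate mappings between two vocabularies, categorized by how the
# match was obtained (a stand-in for "likelihood of valid mapping").
ERIC = ["Information Retrieval", "Teaching methods"]
LCSH = ["Information retrieval", "Teaching"]

def normalize(term):
    """Case-fold and trim for lenient comparison."""
    return term.lower().strip()

def candidate_mappings(source, target):
    """Return (source_term, matched_term_or_None, category) triples."""
    by_norm = {normalize(t): t for t in target}
    results = []
    for term in source:
        match = by_norm.get(normalize(term))
        if match == term:
            results.append((term, match, "exact"))
        elif match is not None:
            results.append((term, match, "normalized"))
        else:
            results.append((term, None, "unmatched"))
    return results
```

In practice the lower-likelihood tiers would go to human review rather than being accepted automatically.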
  7. Landry, P.: MACS: multilingual access to subject and link management : Extending the Multilingual Capacity of TEL in the EDL Project (2007) 0.01
    Content
    Talk given at the workshop "Extending the multilingual capacity of The European Library in the EDL project", Stockholm, Swedish National Library, 22-23 November 2007.
  8. Mayr, P.; Petras, V.; Walter, A.-K.: Results from a German terminology mapping effort : intra- and interdisciplinary cross-concordances between controlled vocabularies (2007) 0.01
    Abstract
    In 2004, the German Federal Ministry for Education and Research funded a major terminology mapping initiative at the GESIS Social Science Information Centre in Bonn (GESIS-IZ), which concludes this year. The task of this terminology mapping initiative was to organize, create and manage 'cross-concordances' between major controlled vocabularies (thesauri, classification systems, subject heading lists) centred around the social sciences but quickly extending to other subject areas. Cross-concordances are intellectually (manually) created crosswalks that determine equivalence, hierarchy, and association relations between terms from two controlled vocabularies. Most vocabularies have been related bilaterally, that is, there is a cross-concordance relating terms from vocabulary A to vocabulary B as well as a cross-concordance relating terms from vocabulary B to vocabulary A (bilateral relations are not necessarily symmetrical). By August 2007, 24 controlled vocabularies from 11 disciplines will be connected, with vocabulary sizes ranging from 2,000 to 17,000 terms per vocabulary. To date, more than 260,000 relations have been generated. A database including all vocabularies and cross-concordances was built and a 'heterogeneity service' developed: a web service which makes the cross-concordances available to other applications. Many cross-concordances are already implemented and utilized for the German Social Science Information Portal Sowiport (www.sowiport.de), which searches bibliographical and other information resources (incl. 13 databases with 10 different vocabularies and ca. 2.5 million references).
    In the final phase of the project, a major evaluation effort is under way to test and measure the effectiveness of the vocabulary mappings in an information system environment. Actual user queries are tested in a distributed search environment, where several bibliographic databases with different controlled vocabularies are searched at the same time. Three query variations are compared to each other: a free-text search without focusing on using the controlled vocabulary or terminology mapping; a controlled vocabulary search, where terms from one vocabulary (a 'home' vocabulary thought to be familiar to the user of a particular database) are used to search all databases; and finally, a search, where controlled vocabulary terms are translated into the terms of the respective controlled vocabulary of the database. For evaluation purposes, types of cross-concordances are distinguished between intradisciplinary vocabularies (vocabularies within the social sciences) and interdisciplinary vocabularies (social sciences to other disciplines as well as other combinations). Simultaneously, an extensive quantitative analysis is conducted aimed at finding patterns in terminology mappings that can explain trends in the effectiveness of terminology mappings, particularly looking at overlapping terms, types of determined relations (equivalence, hierarchy etc.), size of participating vocabularies, etc. This project is the largest terminology mapping effort in Germany. The number and variety of controlled vocabularies targeted provide an optimal basis for insights and further research opportunities. To our knowledge, terminology mapping efforts have rarely been evaluated with stringent qualitative and quantitative measures. This research should contribute in this area. 
    For the NKOS workshop, we plan to present an overview of the project and participating vocabularies, an introduction to the heterogeneity service and its application as well as some of the results and findings of the evaluation, which will be concluded in August.
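A cross-concordance as described above is a set of directed, typed relations between two vocabularies, and the A-to-B and B-to-A directions are separate concordances that need not be symmetrical. A minimal sketch of a lookup in the style of the heterogeneity service; the vocabulary names are real project participants, but the term relations are invented:

```python
# Typed relations keyed by (source_vocabulary, target_vocabulary).
# Note the asymmetry: the reverse direction is a separate concordance.
CROSS_CONCORDANCE = {
    ("TheSoz", "STW"): [
        ("Arbeitsmarkt", "Arbeitsmarkt", "equivalence"),
        ("Jugendarbeitslosigkeit", "Arbeitslosigkeit", "hierarchy"),
    ],
    ("STW", "TheSoz"): [
        ("Arbeitsmarkt", "Arbeitsmarkt", "equivalence"),
    ],
}

def translate(source_vocab, target_vocab, term):
    """Look up the typed relations for a term, in one direction only."""
    relations = CROSS_CONCORDANCE.get((source_vocab, target_vocab), [])
    return [(target, rel) for source, target, rel in relations
            if source == term]
```

A search application can use the relation type to decide how to expand a query: equivalences directly, hierarchy or association relations only as broadening options.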
  9. Nicholson, D.: Help us make HILT's terminology services useful in your information service (2008) 0.01
    Abstract
    The JISC-funded HILT project is looking to make contact with staff in information services or projects interested in helping it test and refine its developing terminology services. The project is currently working to create pilot web services that will deliver machine-readable terminology and cross-terminology mappings data likely to be useful to information services wishing to extend or enhance the efficacy of their subject search or browse services. Based on SRW/U, SOAP, and SKOS, the HILT facilities, when fully operational, will permit such services to improve their own subject search and browse mechanisms by using HILT data in a fashion transparent to their users. On request, HILT will serve up machine-processable data on individual subject schemes (broader terms, narrower terms, hierarchy information, preferred and non-preferred terms, and so on) and interoperability data (usually intellectual or automated mappings between schemes, but the architecture allows for the use of other methods) - data that can be used to enhance user services. The project is also developing an associated toolkit that will help service technical staff to embed HILT-related functionality into their services. The primary aim is to serve JISC funded information services or services at JISC institutions, but information services outside the JISC domain may also find the proposed services useful and wish to participate in the test and refine process.
    Type
    a
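The machine-processable hierarchy data HILT proposes to serve (broader terms, narrower terms, and so on) is exactly what a portal needs to expand a subject browse. A minimal sketch of that client-side use, with an invented scheme fragment standing in for the web service's response:

```python
# A tiny scheme fragment: term -> its narrower terms. Invented data,
# standing in for what a terminology web service would deliver.
NARROWER = {
    "Knowledge organization": ["Ontologies", "Thesauri"],
    "Ontologies": ["Upper ontologies"],
}

def expand(term):
    """Collect a term plus everything transitively narrower than it."""
    collected, stack = set(), [term]
    while stack:
        current = stack.pop()
        if current not in collected:
            collected.add(current)
            stack.extend(NARROWER.get(current, []))
    return collected
```

Searching every term in `expand("Knowledge organization")` instead of the user's literal input is the kind of transparent subject-search enhancement the abstract describes.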
  10. Panzer, M.: Relationships, spaces, and the two faces of Dewey (2008) 0.01
    Content
    "When dealing with a large-scale and widely-used knowledge organization system like the Dewey Decimal Classification, we often tend to focus solely on the organization aspect, which is closely intertwined with editorial work. This is perfectly understandable, since developing and updating the DDC, keeping up with current scientific developments, spotting new trends in both scholarly communication and popular publishing, and figuring out how to fit those patterns into the structure of the scheme are as intriguing as they are challenging. From the organization perspective, the intended user of the scheme is mainly the classifier. Dewey acts very much as a number-building engine, providing richly documented concepts to help with classification decisions. Since the Middle Ages, quasi-religious battles have been fought over the "valid" arrangement of places according to specific views of the world, as parodied by Jorge Luis Borges and others. Organizing knowledge has always been primarily an ontological activity; it is about putting the world into the classification. However, there is another side to this coin--the discovery side. While the hierarchical organization of the DDC establishes a default set of places and neighborhoods that is also visible in the physical manifestation of library shelves, this is just one set of relationships in the DDC. A KOS (Knowledge Organization System) becomes powerful by expressing those other relationships in a manner that not only collocates items in a physical place but in a knowledge space, and exposes those other relationships in ways beneficial and congenial to the unique perspective of an information seeker.
    What are those "other" relationships that Dewey possesses and that seem so important to surface? Firstly, there is the relationship of concepts to resources. Dewey has been used for a long time, and over 200,000 numbers are assigned to information resources each year and added to WorldCat by the Library of Congress and the German National Library alone. Secondly, we have relationships between concepts in the scheme itself. Dewey provides a rich set of non-hierarchical relations, indicating other relevant and related subjects across disciplinary boundaries. Thirdly, perhaps most importantly, there is the relationship between the same concepts across different languages. Dewey has been translated extensively, and current versions are available in French, German, Hebrew, Italian, Spanish, and Vietnamese. Briefer representations of the top-three levels (the DDC Summaries) are available in several languages in the DeweyBrowser. This multilingual nature of the scheme allows searchers to access a broader range of resources or to switch the language of--and thus localize--subject metadata seamlessly. MelvilClass, a Dewey front-end developed by the German National Library for the German translation, could be used as a common interface to the DDC in any language, as it is built upon the standard DDC data format. It is not hard to give an example of the basic terminology of a class pulled together in a multilingual way:
        <class/794.8> a skos:Concept ;
            skos:notation "794.8"^^ddc:notation ;
            skos:prefLabel "Computer games"@en ;
            skos:prefLabel "Computerspiele"@de ;
            skos:prefLabel "Jeux sur ordinateur"@fr ;
            skos:prefLabel "Juegos por computador"@es .
    Expressed in such manner, the Dewey number provides a language-independent representation of a Dewey concept, accompanied by language-dependent assertions about the concept. This information, identified by a URI, can be easily consumed by semantic web agents and used in various metadata scenarios. Fourthly, as we have seen, it is important to play well with others, i.e., establishing and maintaining relationships to other KOS and making the scheme available in different formats. As noted in the Dewey blog post "Tags and Dewey," since no single scheme is ever going to be the be-all, end-all solution for knowledge discovery, DDC concepts have been extensively mapped to other vocabularies and taxonomies, sometimes bridging them and acting as a backbone, sometimes using them as additional access vocabulary to be able to do more work "behind the scenes." To enable other applications and schemes to make use of those relationships, the full Dewey database is available in XML format; RDF-based formats and a web service are forthcoming. Pulling those relationships together under a common surface will be the next challenge going forward. In the semantic web community the concept of Linked Data (http://en.wikipedia.org/wiki/Linked_Data) currently receives some attention, with its emphasis on exposing and connecting data using technologies like URIs, HTTP and RDF to improve information discovery on the web. With its focus on relationships and discovery, it seems that Dewey will be well prepared to become part of this big linked data set. Now it is about putting the classification back into the world!"
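The multilingual SKOS data quoted in the post is exactly the shape needed to localize subject metadata: a notation as the language-independent key, with per-language preferred labels attached. A small sketch of consuming it (plain dictionaries stand in for an RDF store; the labels are the ones from the example above):

```python
# The <class/794.8> concept from the post, as a plain data structure:
# the notation is language-independent, the labels language-dependent.
DDC_794_8 = {
    "notation": "794.8",
    "prefLabel": {
        "en": "Computer games",
        "de": "Computerspiele",
        "fr": "Jeux sur ordinateur",
        "es": "Juegos por computador",
    },
}

def label(concept, lang, fallback="en"):
    """Pick the preferred label for a language, with a fallback."""
    labels = concept["prefLabel"]
    return labels.get(lang, labels[fallback])
```

Switching the display language of subject metadata then reduces to calling `label` with a different language tag, while the notation stays the stable identifier.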
  11. Wake, S.; Nicholson, D.: HILT: High-Level Thesaurus Project : building consensus for interoperable subject access across communities (2001) 0.01
    Abstract
    This article provides an overview of the work carried out by the HILT Project <http://hilt.cdlr.strath.ac.uk> in making recommendations towards interoperable subject access, or cross-searching and browsing distributed services amongst the archives, libraries, museums and electronic services sectors. The article details consensus achieved at the 19 June 2001 HILT Workshop and discusses the HILT Stakeholder Survey. In 1999 Péter Jacsó wrote that "savvy searchers" are asking for direction. Three years later the scenario he describes, that of searchers cross-searching databases where the subject vocabulary used in each case is different, still rings true. Jacsó states that, in many cases, databases do not offer the necessary aids required to use the "preferred terms of the subject-controlled vocabulary". The databases to which Jacsó refers are Dialog and DataStar. However, the situation he describes applies as well to the area that HILT is researching: that of cross-searching and browsing by subject across databases and catalogues in archives, libraries, museums and online information services. So how does a user access information on a particular subject when it is indexed across a multitude of services under different, but quite often similar, subject terms? Also, if experienced searchers are having problems, what about novice searchers? As information professionals, it is our role to investigate such problems and recommend solutions. Although there is no hard empirical evidence one way or another, HILT participants agree that the problem for users attempting to search across databases is real. There is a strong likelihood that users are disadvantaged by the use of different subject terminology combined with a multitude of different practices taking place within the archive, library, museums and online communities.
Arguably, failure to address this problem of interoperability undermines the value of cross-searching and browsing facilities, and wastes public money because relevant resources are 'hidden' from searchers. HILT is charged with analysing this broad problem through qualitative methods, with the main aim of presenting a set of recommendations on how to make it easier to cross-search and browse distributed services. Because this is a very large problem composed of many strands, HILT recognizes that any proposed solutions must address a host of issues. Recommended solutions must be affordable, sustainable, politically acceptable, useful, future-proof and international in scope. It also became clear to the HILT team that progress toward finding solutions to the interoperability problem could only be achieved through direct dialogue with other parties keen to solve this problem, and that the problem was as much about consensus building as it was about finding a solution. This article describes how HILT approached the cross-searching problem; how it investigated the nature of the problem, detailing results from the HILT Stakeholder Survey; and how it achieved consensus through the recent HILT Workshop.
    Type
    a
  12. Isaac, A.: Aligning thesauri for an integrated access to Cultural Heritage Resources (2007) 0.01
    Abstract
    Currently, a number of efforts are being carried out to integrate collections from different institutions containing heterogeneous material. Examples of such projects are The European Library [1] and the Memory of the Netherlands [2]. A crucial point for the success of these projects is the ability to provide unified access on top of the different collections, e.g. using one single vocabulary for querying or browsing the objects they contain. This is made difficult by the fact that the objects from different collections are often described using different vocabularies - thesauri, classification schemes - and are therefore not interoperable at the semantic level. To solve this problem, one can turn to semantic links - mappings - between the elements of the different vocabularies. If one knows that a concept C from a vocabulary V is semantically equivalent to a concept D from vocabulary W, then an appropriate search engine can, for a query for objects described using C, also return all the objects that were indexed against D. We thus gain access to other collections while using a single vocabulary. This is however an ideal situation, and hard alignment work is required to reach it. Several projects in the past have tried to implement such a solution, like MACS [3] and Renardus [4]. They have demonstrated very interesting results, but also highlighted the difficulty of manually aligning all the different vocabularies involved in practical cases, which sometimes contain hundreds of thousands of concepts. To alleviate this problem, a number of tools have been proposed that provide candidate mappings between two input vocabularies, making alignment a (semi-)automatic task. Recently, the Semantic Web community has produced many such alignment tools. Several techniques can be found, depending on the material they exploit: labels of concepts, structure of vocabularies, collection objects and external knowledge sources.
Throughout our presentation, we will present a concrete heterogeneity case where alignment techniques have been applied to build a (pilot) browser, developed in the context of the STITCH project [5]. This browser enables unified access to two collections of illuminated manuscripts, using either the description vocabulary of the first collection, Mandragore [6], or the one used by the second, Iconclass [7]. In our talk, we will also make the case for unified representations of the semantic and lexical information of vocabularies. In addition to easing the use of the alignment tools that take these vocabularies as input, turning to a standard representation format helps in designing applications that are more generic, like the browser we demonstrate. We give pointers to SKOS [8], an open and web-enabled format currently developed by the Semantic Web community.
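The mapping-based access the abstract describes - a query using concept C also returning objects indexed against the equivalent concept D from another vocabulary - can be sketched in a few lines. This is a minimal illustration with invented vocabularies, mappings and object identifiers, not code from the paper:

```python
# Sketch of cross-vocabulary search via equivalence mappings.
# Vocabularies "V" and "W", concepts "C" and "D", and all object
# identifiers are hypothetical examples.

# Equivalence mappings between concepts of different vocabularies.
MAPPINGS = {
    ("V", "C"): [("W", "D")],
}

# Objects indexed against concepts from different vocabularies.
INDEX = {
    ("V", "C"): ["obj1"],
    ("W", "D"): ["obj2", "obj3"],
}

def search(vocab, concept):
    """Return objects indexed under the concept or any mapped equivalent."""
    concepts = [(vocab, concept)] + MAPPINGS.get((vocab, concept), [])
    results = []
    for key in concepts:
        results.extend(INDEX.get(key, []))
    return results
```

A query for ("V", "C") then also retrieves the objects indexed only under ("W", "D") - the single-vocabulary access point the projects above aim for.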
    Content
    Presentation given at the 'UDC Seminar: Information Access for the Global Community', The Hague, 4-5 June 2007
  13. Mayr, P.; Petras, V.: Cross-concordances : terminology mapping and its effectiveness for information retrieval (2008) 0.01
    Abstract
    The German Federal Ministry for Education and Research funded a major terminology mapping initiative, which concluded in 2007. The task of this initiative was to organize, create and manage 'cross-concordances' between controlled vocabularies (thesauri, classification systems, subject heading lists), centred on the social sciences but quickly extending to other subject areas. 64 crosswalks with more than 500,000 relations were established. In the final phase of the project, a major evaluation effort was conducted to test and measure the effectiveness of the vocabulary mappings in an information system environment. The paper reports on the cross-concordance work and the evaluation results.
    Content
    Paper presented at the World Library and Information Congress: 74th IFLA General Conference and Council, 10-14 August 2008, Québec, Canada.
  14. Doerr, M.: Semantic problems of thesaurus mapping (2001) 0.01
    Abstract
    With networked information access to heterogeneous data sources, the problem of terminology provision and interoperability of controlled vocabulary schemes such as thesauri becomes increasingly urgent. Solutions are needed to improve the performance of full-text retrieval systems and to guide the design of controlled terminology schemes for use in structured data, including metadata. Thesauri are created in different languages, with different scope and points of view and at different levels of abstraction and detail, to accommodate access to a specific group of collections. In any wider search accessing distributed collections, the user would like to start with familiar terminology and let the system find the correspondences to other terminologies in order to retrieve equivalent results from all addressed collections. This paper investigates possible semantic differences that may hinder the unambiguous mapping and transition from one thesaurus to another. It focuses on the differences in meaning of terms and their relations as intended by their creators for indexing and querying a specific collection, in contrast to methods investigating the statistical relevance of terms for objects in a collection. It develops a notion of optimal mapping, paying particular attention to the intellectual quality of mappings between terms from different vocabularies and to problems of polysemy. Proposals are made to limit the vagueness introduced by the transition from one vocabulary to another. The paper shows ways in which thesaurus creators can improve their methodology to meet the challenges of networked access to distributed collections created under varying conditions. For system implementers, the discussion will lead to a better understanding of the complexity of the problem.
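The distinction the paper draws - exact equivalence versus vaguer correspondences such as broader terms or polysemous matches - can be made concrete with typed mappings. A hypothetical sketch, not taken from the paper; all terms and relation types are invented examples:

```python
# Sketch: thesaurus mappings carry a relation type, and a search can
# choose how much vagueness to tolerate when crossing vocabularies.
# (source term, target term, relation) - invented example data.
MAPPINGS = [
    ("ship", "vessel", "exact"),
    ("ship", "watercraft", "broader"),
    ("bank", "riverbank", "polysemous"),  # ambiguous without context
]

def translate(term, allow=("exact",)):
    """Translate a query term into the target thesaurus, keeping only
    mappings whose relation type is in the allowed set."""
    return [t for s, t, rel in MAPPINGS if s == term and rel in allow]
```

Restricting translation to "exact" mappings preserves precision; admitting "broader" or "polysemous" ones raises recall at the cost of the vagueness the paper proposes to limit.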
    Source
    Journal of digital information. 1(2001) no.8,
    Type
    a
  15. Dextre Clarke, S.G.: Overview of ISO NP 25964 : structured vocabularies for information retrieval (2007) 0.01
    Abstract
    ISO 2788 and ISO 5964, the international standards for monolingual and multilingual thesauri, dated 1986 and 1985 respectively, are very much in need of revision. A proposal to revise them was recently approved by the relevant subcommittee, ISO TC46/SC9. The work will be based on BS 8723, a five-part standard of which Parts 1 and 2 were published in 2005, Parts 3 and 4 are scheduled for publication in 2007, and Part 5 is still in draft. This subsession will address aspects of the whole revision project. It is conceived as a panel session starting with a brief overview from the project leader. Then there are three presentations of 15 minutes, plus 5 minutes each for specific questions. At the end we have 20 minutes for questions to any or all of the panel, and discussion of issues from the workshop participants.
  16. Faro, S.; Francesconi, E.; Marinai, E.; Sandrucci, V.: Report on execution and results of the interoperability tests (2008) 0.01
    Date
    7.11.2008 10:40:22
  17. Faro, S.; Francesconi, E.; Sandrucci, V.: Thesauri KOS analysis and selected thesaurus mapping methodology on the project case-study (2007) 0.01
    Date
    7.11.2008 10:40:22
  18. Krötzsch, M.; Hitzler, P.; Ehrig, M.; Sure, Y.: Category theory in ontology research : concrete gain from an abstract approach (2004 (?)) 0.00
    Abstract
    The focus of research on representing and reasoning with knowledge has traditionally been on single specifications and appropriate inference paradigms to draw conclusions from such data. Accordingly, this is also an essential aspect of ontology research which has received much attention in recent years. But ontologies introduce another new challenge based on the distributed nature of most of their applications, which requires relating heterogeneous ontological specifications and integrating information from multiple sources. These problems have of course been recognized, but many current approaches still lack the deep formal backgrounds on which today's reasoning paradigms are already founded. Here we propose category theory as a well-explored and very extensive mathematical foundation for modelling distributed knowledge. A particular prospect is to derive conclusions from the structure of those distributed knowledge bases, as is needed, for example, when merging ontologies.
    Type
    a
  19. Heckner, M.; Mühlbacher, S.; Wolff, C.: Tagging tagging : a classification model for user keywords in scientific bibliography management systems (2007) 0.00
    Abstract
    Recently, a growing number of systems that allow personal content annotation (tagging) are being created, ranging from personal sites for organising bookmarks (del.icio.us), photos (flickr.com) or videos (video.google.com, youtube.com) to systems for managing bibliographies for scientific research projects (citeulike.org, connotea.org). Simultaneously, a debate on the pros and cons of allowing users to add personal keywords to digital content has arisen. One recurrent point of discussion is whether tagging can solve the well-known vocabulary problem: in order to support successful retrieval in complex environments, it is necessary to index an object with a variety of aliases (cf. Furnas 1987). In this spirit, social tagging enhances the pool of rigid, traditional keywording by adding user-created retrieval vocabularies. Furthermore, tagging goes beyond simple personal content-based keywords by providing meta-keywords like funny or interesting that "identify qualities or characteristics" (Golder and Huberman 2006, Kipp and Campbell 2006, Kipp 2007, Feinberg 2006, Kroski 2005). Conversely, tagging systems are claimed to lead to semantic difficulties that may hinder the precision and recall of tagging systems (e.g. the polysemy problem, cf. Marlow 2006, Lakoff 2005, Golder and Huberman 2006). Empirical research on social tagging is still rare and mostly from a computer linguistics or librarian point of view (Voß 2007), focusing either on the automatic statistical analysis of large data sets or on intellectually inspecting single cases of tag usage: some scientists have studied the evolution of tag vocabularies and tag distribution in specific systems (Golder and Huberman 2006, Hammond 2005). Others concentrate on tagging behaviour and tagger characteristics in collaborative systems (Hammond 2005, Kipp and Campbell 2007, Feinberg 2006, Sen 2006).
However, little research has been conducted on the functional and linguistic characteristics of tags.1 An analysis of these patterns could show differences between user wording and conventional keywording. In order to provide a reasonable basis for comparison, a classification system for existing tags is needed.
    Therefore our main research questions are as follows: - Is it possible to discover regular patterns in tag usage and to establish a stable category model? - Does a specific tagging language comparable to internet slang or chatspeak evolve? - How do social tags differ from traditional (author / expert) keywords? - To what degree are social tags taken from or findable in the full text of the tagged resource? - Do tags in a research literature context go beyond simple content description (e.g. tags indicating time or task-related information, cf. Kipp et al. 2006)?
  20. Mayr, P.; Walter, A.-K.: Einsatzmöglichkeiten von Crosskonkordanzen (2007) 0.00
    Source
    http://www.gesis.org/Information/Forschungsuebersichten/Tagungsberichte/Vernetzung/Mayr-Walter.pdf