Search (16 results, page 1 of 1)

  • Filter: theme_ss:"Metadaten"
  • Filter: type_ss:"el"
  1. Hook, P.A.; Gantchev, A.: Using combined metadata sources to visualize a small library (OBL's English Language Books) (2017) 0.08
    0.07827899 = product of:
      0.11741848 = sum of:
        0.06857903 = weight(_text_:subject in 3870) [ClassicSimilarity], result of:
          0.06857903 = score(doc=3870,freq=10.0), product of:
            0.15522492 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.043400183 = queryNorm
            0.4418043 = fieldWeight in 3870, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3870)
        0.048839446 = product of:
          0.09767889 = sum of:
            0.09767889 = weight(_text_:headings in 3870) [ClassicSimilarity], result of:
              0.09767889 = score(doc=3870,freq=6.0), product of:
                0.21048847 = queryWeight, product of:
                  4.849944 = idf(docFreq=940, maxDocs=44218)
                  0.043400183 = queryNorm
                0.46405816 = fieldWeight in 3870, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.849944 = idf(docFreq=940, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3870)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
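    The score breakdowns shown for each result are Lucene "explain" trees for the ClassicSimilarity (TF-IDF) model: per term, queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, with tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)); coord() then scales by how many query terms matched. A minimal Python sketch recomputes the score above from the constants in the tree (the function names are ours):

      import math

      def tf(freq):                 # ClassicSimilarity: tf = sqrt(freq)
          return math.sqrt(freq)

      def idf(doc_freq, max_docs):  # idf = 1 + ln(maxDocs / (docFreq + 1))
          return 1.0 + math.log(max_docs / (doc_freq + 1))

      def term_weight(freq, doc_freq, max_docs, query_norm, field_norm):
          query_weight = idf(doc_freq, max_docs) * query_norm
          field_weight = tf(freq) * idf(doc_freq, max_docs) * field_norm
          return query_weight * field_weight

      # Result 1 (doc 3870): constants copied from the explain tree above.
      subject  = term_weight(10, 3361, 44218, 0.043400183, 0.0390625)
      headings = term_weight(6, 940, 44218, 0.043400183, 0.0390625) * 0.5  # coord(1/2)
      print(f"{(subject + headings) * 2 / 3:.8f}")  # coord(2/3); ~0.07827899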
    
    Abstract
    Data from multiple knowledge organization systems are combined to provide a global overview of the content holdings of a small personal library. Subject headings and classification data are used to effectively map the combined book and topic space of the library. Although the data were harvested and manipulated by hand, the work reveals issues and potential solutions when using automated techniques to produce topic maps of much larger libraries. The small library visualized consists of the thirty-nine digital English-language books found in the Osama Bin Laden (OBL) compound in Abbottabad, Pakistan upon his death. As this list of books has garnered considerable media attention, it is worth providing a visual overview of the subject content of these books - some of which is not readily apparent from the titles. Metadata from subject headings and classification numbers was combined to create book-subject maps. Tree maps of the classification data were also produced. The books contain 328 subject headings. To enhance the base map with a meaningful thematic overlay, library holding count data was also harvested (and aggregated from duplicates). This additional data revealed the relative scarcity or popularity of individual books.
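    The book-subject maps described above are, in effect, bipartite graphs of books and headings. A minimal sketch of that structure with networkx, using invented titles and headings:

      import networkx as nx
      from networkx.algorithms import bipartite

      # Invented miniature of a book-subject map: books on one side,
      # subject headings on the other, an edge where a heading is assigned.
      book_subjects = {
          "Book A": ["Intelligence service", "United States -- Foreign relations"],
          "Book B": ["Intelligence service", "Terrorism -- Prevention"],
      }

      G = nx.Graph()
      for book, headings in book_subjects.items():
          G.add_node(book, kind="book")
          for h in headings:
              G.add_node(h, kind="subject")
              G.add_edge(book, h)

      # Projecting onto the book side links books that share a heading.
      books = [n for n, d in G.nodes(data=True) if d["kind"] == "book"]
      print(bipartite.projected_graph(G, books).edges())  # [('Book A', 'Book B')]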
  2. Dunsire, G.; Willer, M.: Initiatives to make standard library metadata models and structures available to the Semantic Web (2010) 0.04
    0.04440023 = product of:
      0.066600345 = sum of:
        0.034698553 = weight(_text_:subject in 3965) [ClassicSimilarity], result of:
          0.034698553 = score(doc=3965,freq=4.0), product of:
            0.15522492 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.043400183 = queryNorm
            0.22353725 = fieldWeight in 3965, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.03125 = fieldNorm(doc=3965)
        0.03190179 = product of:
          0.06380358 = sum of:
            0.06380358 = weight(_text_:headings in 3965) [ClassicSimilarity], result of:
              0.06380358 = score(doc=3965,freq=4.0), product of:
                0.21048847 = queryWeight, product of:
                  4.849944 = idf(docFreq=940, maxDocs=44218)
                  0.043400183 = queryNorm
                0.3031215 = fieldWeight in 3965, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.849944 = idf(docFreq=940, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3965)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The paper discusses the importance of such initiatives in releasing as linked data the very large quantities of rich, professionally generated metadata stored in formats based on these standards, such as UNIMARC and MARC21, addressing such issues as critical mass for semantic and statistical inferencing, integration with user- and machine-generated metadata, and authenticity, veracity and trust. The paper also discusses related initiatives to release controlled vocabularies, including the Dewey Decimal Classification (DDC), ISBD, Library of Congress Name Authority File (LCNAF), Library of Congress Subject Headings (LCSH), Rameau (French subject headings), Universal Decimal Classification (UDC), and the Virtual International Authority File (VIAF) as linked data. Finally, the paper discusses the potential collective impact of these initiatives on metadata workflows and management systems.
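    Several of these vocabularies are already dereferenceable as linked data; LCSH concepts, for instance, are served as RDF from id.loc.gov. A hedged sketch with rdflib - the subject identifier below is a placeholder, not a real ID:

      from rdflib import Graph
      from rdflib.namespace import SKOS

      # Placeholder LCSH identifier; substitute a real one from id.loc.gov.
      uri = "http://id.loc.gov/authorities/subjects/sh00000000"

      g = Graph()
      g.parse(uri)  # HTTP fetch; content negotiation picks an RDF serialization

      for label in g.objects(None, SKOS.prefLabel):  # the concept's preferred label(s)
          print(label)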
  3. Roszkowski, M.; Lukas, C.: ¬A distributed architecture for resource discovery using metadata (1998) 0.04
    0.03817102 = product of:
      0.057256527 = sum of:
        0.034698553 = weight(_text_:subject in 1256) [ClassicSimilarity], result of:
          0.034698553 = score(doc=1256,freq=4.0), product of:
            0.15522492 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.043400183 = queryNorm
            0.22353725 = fieldWeight in 1256, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.03125 = fieldNorm(doc=1256)
        0.022557972 = product of:
          0.045115944 = sum of:
            0.045115944 = weight(_text_:headings in 1256) [ClassicSimilarity], result of:
              0.045115944 = score(doc=1256,freq=2.0), product of:
                0.21048847 = queryWeight, product of:
                  4.849944 = idf(docFreq=940, maxDocs=44218)
                  0.043400183 = queryNorm
                0.21433927 = fieldWeight in 1256, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.849944 = idf(docFreq=940, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1256)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This article describes an approach for linking geographically distributed collections of metadata so that they are searchable as a single collection. We describe the infrastructure, which uses standard Internet protocols such as the Lightweight Directory Access Protocol (LDAP) and the Common Indexing Protocol (CIP), to distribute queries, return results, and exchange index information. We discuss the advantages of using linked collections of authoritative metadata as an alternative to using a keyword-indexing search engine for resource discovery. We examine other architectures that use metadata for resource discovery, such as Dienst/NCSTRL, the AHDS HTTP/Z39.50 Gateway, and the ROADS initiative. Finally, we discuss research issues and future directions of the project.
    The Internet Scout Project, which is funded by the National Science Foundation and is located in the Computer Sciences Department at the University of Wisconsin-Madison, is charged with assisting the higher education community in resource discovery on the Internet. To that end, the Scout Report and subsequent subject-specific Scout Reports were developed to guide the U.S. higher education community to research-quality resources. The Scout Report Signpost utilizes the content from the Scout Reports as the basis of a metadata collection. Signpost consists of more than 2000 cataloged Internet sites using established standards such as Library of Congress subject headings and abbreviated call letters, and emerging standards such as the Dublin Core (DC). This searchable and browsable collection is free and freely accessible, as are all of the Internet Scout Project's services.
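    The underlying pattern - fan a query out to every collection, then merge the returns - is easy to sketch generically. The sketch below abstracts the LDAP/CIP layer behind a plain search() function over invented in-memory indexes:

      from concurrent.futures import ThreadPoolExecutor

      def search(collection, query):
          # Stand-in for an LDAP query routed via CIP index summaries.
          return [(collection["name"], hit) for hit in collection["index"].get(query, [])]

      collections = [
          {"name": "Signpost", "index": {"metadata": ["record-1", "record-2"]}},
          {"name": "Gateway",  "index": {"metadata": ["record-3"]}},
      ]

      with ThreadPoolExecutor() as pool:
          batches = pool.map(lambda c: search(c, "metadata"), collections)

      merged = [hit for batch in batches for hit in batch]
      print(merged)  # the distributed collections answer as a single collection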
  4. Final Report to the ALCTS CCS SAC Subcommittee on Metadata and Subject Analysis (2001) 0.03
    0.02833125 = product of:
      0.08499375 = sum of:
        0.08499375 = weight(_text_:subject in 5016) [ClassicSimilarity], result of:
          0.08499375 = score(doc=5016,freq=6.0), product of:
            0.15522492 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.043400183 = queryNorm
            0.5475522 = fieldWeight in 5016, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0625 = fieldNorm(doc=5016)
      0.33333334 = coord(1/3)
    
    Abstract
    The charge for the SAC Subcommittee on Metadata and Subject Analysis states: Identify and study the major issues surrounding the use of metadata in the subject analysis and classification of digital resources. Provide discussion forums and programs relevant to these issues. Discussion forums should begin by Annual 1998. The continued need for the subcommittee should be reexamined by SAC no later than 2001.
  5. Howarth, L.C.: Metadata schemes for subject gateways (2003) 0.02
    0.024535581 = product of:
      0.073606744 = sum of:
        0.073606744 = weight(_text_:subject in 1747) [ClassicSimilarity], result of:
          0.073606744 = score(doc=1747,freq=2.0), product of:
            0.15522492 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.043400183 = queryNorm
            0.4741941 = fieldWeight in 1747, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.09375 = fieldNorm(doc=1747)
      0.33333334 = coord(1/3)
    
  6. Wolfe, E.W.: ¬A case study in automated metadata enhancement : Natural Language Processing in the humanities (2019) 0.01
    0.014312423 = product of:
      0.042937268 = sum of:
        0.042937268 = weight(_text_:subject in 5236) [ClassicSimilarity], result of:
          0.042937268 = score(doc=5236,freq=2.0), product of:
            0.15522492 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.043400183 = queryNorm
            0.27661324 = fieldWeight in 5236, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5236)
      0.33333334 = coord(1/3)
    
    Abstract
    The Black Book Interactive Project at the University of Kansas (KU) is developing an expanded corpus of novels by African American authors, with an emphasis on lesser known writers and a goal of expanding research in this field. Using a custom metadata schema with an emphasis on race-related elements, each novel is analyzed for a variety of elements such as literary style, targeted content analysis, historical context, and other areas. Librarians at KU have worked to develop a variety of computational text analysis processes designed to assist with specific aspects of this metadata collection, including text mining and natural language processing, automated subject extraction based on word sense disambiguation, harvesting data from Wikidata, and other actions.
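    The Wikidata-harvesting step can be sketched against the public SPARQL endpoint; the query below is illustrative (P106 = occupation, Q36180 = writer), not the project's own:

      import requests

      ENDPOINT = "https://query.wikidata.org/sparql"
      QUERY = """
      SELECT ?author ?authorLabel WHERE {
        ?author wdt:P106 wd:Q36180 .
        SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
      } LIMIT 5
      """

      resp = requests.get(ENDPOINT,
                          params={"query": QUERY, "format": "json"},
                          headers={"User-Agent": "metadata-enrichment-sketch/0.1"})
      for row in resp.json()["results"]["bindings"]:
          print(row["authorLabel"]["value"])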
  7. Baca, M.; O'Keefe, E.: Sharing standards and expertise in the early 21st century : Moving toward a collaborative, "cross-community" model for metadata creation (2008) 0.01
    0.012267791 = product of:
      0.036803372 = sum of:
        0.036803372 = weight(_text_:subject in 2321) [ClassicSimilarity], result of:
          0.036803372 = score(doc=2321,freq=2.0), product of:
            0.15522492 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.043400183 = queryNorm
            0.23709705 = fieldWeight in 2321, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.046875 = fieldNorm(doc=2321)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper provides a brief overview of the evolving descriptive metadata landscape, one phenomenon of which can be characterized as "cross-community" metadata as manifested in records that are the result of a combination of carefully considered data value and data content standards. The online catalog of the Morgan Library & Museum provides a real-life illustration of how diverse data content standards and vocabulary tools can be integrated within the classic data structure/technical interchange format of MARC21 to better describe unique, museum-type objects, and to provide better end-user access and understanding. The Morgan experience also shows the value of developing a collaborative model for metadata creation that combines the subject expertise of curators and scholars with the cataloging expertise and knowledge of standards possessed by librarians.
  8. Bartczak, J.; Glendon, I.: Python, Google Sheets, and the Thesaurus for Graphic Materials for efficient metadata project workflows (2017) 0.01
    0.012267791 = product of:
      0.036803372 = sum of:
        0.036803372 = weight(_text_:subject in 3893) [ClassicSimilarity], result of:
          0.036803372 = score(doc=3893,freq=2.0), product of:
            0.15522492 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.043400183 = queryNorm
            0.23709705 = fieldWeight in 3893, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.046875 = fieldNorm(doc=3893)
      0.33333334 = coord(1/3)
    
    Abstract
    In 2017, the University of Virginia (U.Va.) will launch a two-year initiative to celebrate the bicentennial anniversary of the University's founding in 1819. The U.Va. Library is participating in this event by digitizing some 20,000 photographs and negatives that document student life on the U.Va. grounds in the 1960s and 1970s. Metadata librarians and archivists are well-versed in the challenges associated with generating digital content and accompanying description within the context of limited resources. This paper describes how technology and new approaches to metadata design have enabled the University of Virginia's Metadata Analysis and Design Department to rapidly and successfully generate accurate description for these digital objects. Python's pandas module improves efficiency by cleaning and repurposing data recorded at digitization, while the lxml module builds MODS XML programmatically from CSV tables. A simplified technique for subject heading selection and assignment in Google Sheets provides a collaborative environment for streamlined metadata creation and data quality control.
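    A compressed sketch of that pipeline - pandas for the cleanup, lxml for the MODS records; the CSV file name and column names are assumptions:

      import pandas as pd
      from lxml import etree

      MODS = "http://www.loc.gov/mods/v3"

      df = pd.read_csv("digitization_log.csv")   # assumed file name
      df["title"] = df["title"].str.strip()      # clean data recorded at digitization
      df = df.drop_duplicates(subset="identifier")

      for _, row in df.iterrows():
          mods = etree.Element(f"{{{MODS}}}mods", nsmap={None: MODS})
          title_info = etree.SubElement(mods, f"{{{MODS}}}titleInfo")
          etree.SubElement(title_info, f"{{{MODS}}}title").text = row["title"]
          subj = etree.SubElement(mods, f"{{{MODS}}}subject")
          etree.SubElement(subj, f"{{{MODS}}}topic").text = row["subject"]
          etree.ElementTree(mods).write(f"{row['identifier']}.xml", encoding="UTF-8",
                                        xml_declaration=True, pretty_print=True)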
  9. Broughton, V.: Automatic metadata generation : Digital resource description without human intervention (2007) 0.01
    0.011760252 = product of:
      0.035280753 = sum of:
        0.035280753 = product of:
          0.070561506 = sum of:
            0.070561506 = weight(_text_:22 in 6048) [ClassicSimilarity], result of:
              0.070561506 = score(doc=6048,freq=2.0), product of:
                0.15198004 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043400183 = queryNorm
                0.46428138 = fieldWeight in 6048, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6048)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 9.2007 15:41:14
  10. Chan, L.M.; Zeng, M.L.: Metadata interoperability and standardization - a study of methodology, part I : achieving interoperability at the schema level (2006) 0.01
    0.01022316 = product of:
      0.030669477 = sum of:
        0.030669477 = weight(_text_:subject in 1176) [ClassicSimilarity], result of:
          0.030669477 = score(doc=1176,freq=2.0), product of:
            0.15522492 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.043400183 = queryNorm
            0.19758089 = fieldWeight in 1176, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1176)
      0.33333334 = coord(1/3)
    
    Abstract
    The rapid growth of Internet resources and digital collections has been accompanied by a proliferation of metadata schemas, each of which has been designed based on the requirements of particular user communities, intended users, types of materials, subject domains, project needs, etc. Problems arise when building large digital libraries or repositories with metadata records that were prepared according to diverse schemas. This article (published in two parts) contains an analysis of the methods that have been used to achieve or improve interoperability among metadata schemas and applications, for the purposes of facilitating conversion and exchange of metadata and enabling cross-domain metadata harvesting and federated searches. From a methodological point of view, implementing interoperability may be considered at different levels of operation: schema level, record level, and repository level. Part I of the article intends to explain possible situations in which metadata schemas may be created or implemented, whether in individual projects or in integrated repositories. It also discusses approaches used at the schema level. Part II of the article will discuss metadata interoperability efforts at the record and repository levels.
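    At the schema level, the commonest interoperability device is the crosswalk: a mapping from one schema's elements to another's. A toy sketch (the DC-to-MODS pairs are simplified; real crosswalks carry conditions and structural context):

      # Simplified Dublin Core -> MODS element crosswalk.
      DC_TO_MODS = {
          "title":   "titleInfo/title",
          "creator": "name/namePart",
          "subject": "subject/topic",
          "date":    "originInfo/dateIssued",
      }

      def crosswalk(record, mapping):
          return {mapping[k]: v for k, v in record.items() if k in mapping}

      dc = {"title": "Understanding metadata", "creator": "NISO", "date": "2004"}
      print(crosswalk(dc, DC_TO_MODS))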
  11. Chan, L.M.; Zeng, M.L.: Metadata interoperability and standardization - a study of methodology, part II : achieving interoperability at the record and repository levels (2006) 0.01
    0.0081785275 = product of:
      0.024535581 = sum of:
        0.024535581 = weight(_text_:subject in 1177) [ClassicSimilarity], result of:
          0.024535581 = score(doc=1177,freq=2.0), product of:
            0.15522492 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.043400183 = queryNorm
            0.15806471 = fieldWeight in 1177, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.03125 = fieldNorm(doc=1177)
      0.33333334 = coord(1/3)
    
    Abstract
    This is the second part of an analysis of the methods that have been used to achieve or improve interoperability among metadata schemas and their applications in order to facilitate the conversion and exchange of metadata and to enable cross-domain metadata harvesting and federated searches. From a methodological point of view, implementing interoperability may be considered at different levels of operation: schema level (discussed in Part I of the article), record level (discussed in Part II of the article), and repository level (also discussed in Part II). The results of efforts to improve interoperability may be observed from different perspectives as well, including element-based and value-based approaches. As discussed in Part I of this study, the results of efforts to improve interoperability can be observed at different levels:
      1. Schema level - Efforts are focused on the elements of the schemas, being independent of any applications. The results usually appear as derived element sets or encoded schemas, crosswalks, application profiles, and element registries.
      2. Record level - Efforts are intended to integrate the metadata records through the mapping of the elements according to the semantic meanings of these elements. Common results include converted records and new records resulting from combining values of existing records.
      3. Repository level - With harvested or integrated records from varying sources, efforts at this level focus on mapping value strings associated with particular elements (e.g., terms associated with subject or format elements). The results enable cross-collection searching.
    In the following sections, we will continue to analyze interoperability efforts and methodologies, focusing on the record level and the repository level. It should be noted that the models to be discussed in this article are not always mutually exclusive. Sometimes, within a particular project, more than one method may be used.
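    The repository-level case - mapping harvested value strings onto shared forms so that cross-collection search matches them - reduces to a vocabulary lookup. A minimal sketch with invented pairs:

      # Invented value-string mappings for two elements.
      VALUE_MAP = {
          "format":  {"PDF file": "application/pdf", "pdf": "application/pdf"},
          "subject": {"Metadaten": "Metadata", "metadata standards": "Metadata"},
      }

      def normalize(record):
          return {k: VALUE_MAP.get(k, {}).get(v, v) for k, v in record.items()}

      harvested = [{"subject": "Metadaten", "format": "PDF file"},
                   {"subject": "Metadata", "format": "pdf"}]
      print([normalize(r) for r in harvested])  # both records now share value strings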
  12. Understanding metadata (2004) 0.01
    0.007840168 = product of:
      0.023520501 = sum of:
        0.023520501 = product of:
          0.047041003 = sum of:
            0.047041003 = weight(_text_:22 in 2686) [ClassicSimilarity], result of:
              0.047041003 = score(doc=2686,freq=2.0), product of:
                0.15198004 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043400183 = queryNorm
                0.30952093 = fieldWeight in 2686, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2686)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    10. 9.2004 10:22:40
  13. Baker, T.: Languages for Dublin Core (1998) 0.01
    0.0071562114 = product of:
      0.021468634 = sum of:
        0.021468634 = weight(_text_:subject in 1257) [ClassicSimilarity], result of:
          0.021468634 = score(doc=1257,freq=2.0), product of:
            0.15522492 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.043400183 = queryNorm
            0.13830662 = fieldWeight in 1257, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1257)
      0.33333334 = coord(1/3)
    
    Abstract
    Over the past three years, the Dublin Core Metadata Initiative has achieved a broad international consensus on the semantics of a simple element set for describing electronic resources. Since the first workshop in March 1995, which was reported in the very first issue of D-Lib Magazine, Dublin Core has been the topic of perhaps a dozen articles here. Originally intended to be simple and intuitive enough for authors to tag Web pages without special training, Dublin Core is being adapted now for more specialized uses, from government information and legal deposit to museum informatics and electronic commerce. To meet such specialized requirements, Dublin Core can be customized with additional elements or qualifiers. However, these refinements can compromise interoperability across applications. There are tradeoffs between using specific terms that precisely meet local needs versus general terms that are understood more widely. We can better understand this inevitable tension between simplicity and complexity if we recognize that metadata is a form of human language. With Dublin Core, as with a natural language, people are inclined to stretch definitions, make general terms more specific, specific terms more general, misunderstand intended meanings, and coin new terms. One goal of this paper, therefore, will be to examine the experience of some related ways to seek semantic interoperability through simplicity: planned languages, interlingua constructs, and pidgins.
    The problem of semantic interoperability is compounded when we consider Dublin Core in translation. All of the workshops, documents, mailing lists, user guides, and working group outputs of the Dublin Core Initiative have been in English. But in many countries and for many applications, people need a metadata standard in their own language. In principle, the broad elements of Dublin Core can be defined equally well in Bulgarian or Hindi. Since Dublin Core is a controlled standard, however, any parallel definitions need to be kept in sync as the standard evolves. Another goal of the paper, then, will be to define the conceptual and organizational problem of maintaining a metadata standard in multiple languages.
    In addition to a name and definition, which are meant for human consumption, each Dublin Core element has a label, or indexing token, meant for harvesting by search engines. For practical reasons, these machine-readable tokens are English-looking strings such as Creator and Subject (just as HTML tags are called HEAD, BODY, or TITLE). These tokens, which are shared by Dublin Cores in every language, ensure that metadata fields created in any particular language are indexed together across repositories. As symbols of underlying universal semantics, these tokens form the basis of semantic interoperability among the multiple Dublin Cores.
    As long as we limit ourselves to sharing these indexing tokens among exact translations of a simple set of fifteen broad elements, the definitions of which fit easily onto two pages, the problem of Dublin Core in multiple languages is straightforward. But nothing having to do with human language is ever so simple. Just as speakers of various languages must learn the language of Dublin Core in their own tongues, we must find the right words to talk about a metadata language that is expressible in many discipline-specific jargons and natural languages and that inevitably will evolve and change over time.
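    The token mechanism is simple to picture: whatever a language calls an element, its machine-readable token is the same, so fields index together across repositories. A toy sketch; the German and French labels are illustrative renderings, not official translations:

      # Localized element names all map to the shared indexing token.
      LABEL_TO_TOKEN = {
          "Creator": "Creator", "Verfasser": "Creator", "Créateur": "Creator",
          "Subject": "Subject", "Thema": "Subject", "Sujet": "Subject",
      }

      fields = [("Verfasser", "Baker, T."), ("Sujet", "Métadonnées")]
      print({LABEL_TO_TOKEN[label]: value for label, value in fields})
      # {'Creator': 'Baker, T.', 'Subject': 'Métadonnées'}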
  14. Sewing, S.: Bestandserhaltung und Archivierung : Koordinierung auf der Basis eines gemeinsamen Metadatenformates in den deutschen und österreichischen Bibliotheksverbünden [Preservation and archiving : coordination on the basis of a shared metadata format in the German and Austrian library networks] (2021) 0.01
    0.005880126 = product of:
      0.017640376 = sum of:
        0.017640376 = product of:
          0.035280753 = sum of:
            0.035280753 = weight(_text_:22 in 266) [ClassicSimilarity], result of:
              0.035280753 = score(doc=266,freq=2.0), product of:
                0.15198004 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043400183 = queryNorm
                0.23214069 = fieldWeight in 266, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=266)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 5.2021 12:43:05
  15. Roy, W.; Gray, C.: Preparing existing metadata for repository batch import : a recipe for a fickle food (2018) 0.00
    0.004900105 = product of:
      0.014700314 = sum of:
        0.014700314 = product of:
          0.029400628 = sum of:
            0.029400628 = weight(_text_:22 in 4550) [ClassicSimilarity], result of:
              0.029400628 = score(doc=4550,freq=2.0), product of:
                0.15198004 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043400183 = queryNorm
                0.19345059 = fieldWeight in 4550, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4550)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    10.11.2018 16:27:22
  16. Baker, T.: ¬A grammar of Dublin Core (2000) 0.00
    0.003920084 = product of:
      0.011760251 = sum of:
        0.011760251 = product of:
          0.023520501 = sum of:
            0.023520501 = weight(_text_:22 in 1236) [ClassicSimilarity], result of:
              0.023520501 = score(doc=1236,freq=2.0), product of:
                0.15198004 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043400183 = queryNorm
                0.15476047 = fieldWeight in 1236, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1236)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    26.12.2011 14:01:22