Search (4892 results, page 1 of 245)

  • year_i:[2010 TO 2020}
  1. Verwer, K.: Freiheit und Verantwortung bei Hans Jonas (2011) 0.26
    Content
    Cf.: http://creativechoice.org/doc/HansJonas.pdf.
  2. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.24
    Abstract
    On a scientific concept hierarchy, a parent concept may have a few attributes, each of which has multiple values forming a group of child concepts. We call these attributes facets: classification, for example, has facets such as application (e.g., face recognition), model (e.g., svm, knn), and metric (e.g., precision). In this work, we aim at building faceted concept hierarchies from scientific literature. Hierarchy construction methods rely heavily on hypernym detection; however, faceted relations are parent-to-child links, whereas the hypernym relation is a multi-hop, i.e., ancestor-to-descendant, link with the specific facet "type-of". We use information extraction techniques to find synonyms, sibling concepts, and ancestor-descendant relations in a data science corpus, and we propose a hierarchy growth algorithm to infer the parent-child links from these three types of relationships. The algorithm resolves conflicts by maintaining the acyclic structure of the hierarchy.
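    A minimal sketch of the cycle-avoiding growth step described above, assuming a networkx dependency and hypothetical relation tuples; this is not the authors' implementation:

      # Hedged sketch: add candidate parent->child links one by one and reject
      # any link that would make the hierarchy cyclic (the "conflict" case).
      import networkx as nx

      def grow_hierarchy(candidate_links):
          # candidate_links: (parent, child) pairs assumed to come from an
          # upstream extraction step (synonyms, siblings, ancestor-descendant).
          h = nx.DiGraph()
          for parent, child in candidate_links:
              h.add_edge(parent, child)
              if not nx.is_directed_acyclic_graph(h):  # conflict detected
                  h.remove_edge(parent, child)
          return h

      # Toy example with one deliberately conflicting link
      links = [("classification", "svm"), ("classification", "knn"), ("svm", "classification")]
      print(list(grow_hierarchy(links).edges()))  # the cyclic edge is rejected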
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf.
    Source
    Graph-Based Methods for Natural Language Processing - proceedings of the Thirteenth Workshop (TextGraphs-13): November 4, 2019, Hong Kong : EMNLP-IJCNLP 2019. Ed.: Dmitry Ustalov
  3. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.22
    Abstract
    The successes of information retrieval (IR) in recent decades were built upon bag-of-words representations. Effective as it is, bag-of-words is only a shallow text understanding; there is a limited amount of information for document ranking in the word space. This dissertation goes beyond words and builds knowledge based text representations, which embed the external and carefully curated information from knowledge bases, and provide richer and structured evidence for more advanced information retrieval systems. This thesis research first builds query representations with entities associated with the query. Entities' descriptions are used by query expansion techniques that enrich the query with explanation terms. Then we present a general framework that represents a query with entities that appear in the query, are retrieved by the query, or frequently show up in the top retrieved documents. A latent space model is developed to jointly learn the connections from query to entities and the ranking of documents, modeling the external evidence from knowledge bases and internal ranking features cooperatively. To further improve the quality of relevant entities, a defining factor of our query representations, we introduce learning to rank to entity search and retrieve better entities from knowledge bases. In the document representation part, this thesis research also moves one step forward with a bag-of-entities model, in which documents are represented by their automatic entity annotations, and the ranking is performed in the entity space.
    This proposal includes plans to improve the quality of relevant entities with a co-learning framework that learns from both entity labels and document labels. We also plan to develop a hybrid ranking system that combines word-based and entity-based representations, with their uncertainties taken into account. Finally, we plan to enrich the text representations with connections between entities: we propose several ways to infer entity graph representations for texts and to rank documents using these structured representations. This dissertation overcomes the limitations of word-based representations with external and carefully curated information from knowledge bases. We believe this thesis research is a solid start towards a new generation of intelligent, semantic, and structured information retrieval.
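    As a rough illustration of ranking in entity space (not the dissertation's model), a bag-of-entities can be treated as a count vector over the entity IDs produced by an assumed upstream entity linker; the IDs below are hypothetical:

      from collections import Counter

      def bag_of_entities(entity_annotations):
          # entity_annotations: entity IDs emitted by an (assumed) entity linker
          return Counter(entity_annotations)

      def entity_score(query_bag, doc_bag):
          # Simple dot product of the two bags, i.e. ranking in the entity space
          return sum(doc_bag[e] * c for e, c in query_bag.items())

      q = bag_of_entities(["E1"])              # hypothetical query entity
      d = bag_of_entities(["E1", "E1", "E7"])  # hypothetical document annotations
      print(entity_score(q, d))                # -> 2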
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
    Imprint
    Pittsburgh, PA : Carnegie Mellon University, School of Computer Science, Language Technologies Institute
  4. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.22
    Source
    http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
  5. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.21
    Abstract
    In this thesis we propose three new word association measures for multi-word term extraction. We combine these association measures with the LocalMaxs algorithm in our extraction model and compare the results of different multi-word term extraction methods. Our approach is language- and domain-independent and requires no training data. It can be applied to tasks such as text summarization, information retrieval, and document classification. We further explore the potential of using multi-word terms as an effective representation for general web-page summarization. We extract multi-word terms from human-written summaries in a large collection of web pages, and generate the summaries by aligning document words with these multi-word terms. Our system applies machine translation technology to learn the alignment process from a training set and focuses on selecting high-quality multi-word terms from human-written summaries to generate suitable results for web-page summarization.
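    A small, hedged illustration of one possible word-association measure (the Dice coefficient) of the kind that can feed a LocalMaxs-style selection; the thesis' own three measures are not reproduced here:

      from collections import Counter

      def dice(bigram_counts, word_counts, w1, w2):
          # Dice association between two adjacent words
          return 2 * bigram_counts[(w1, w2)] / (word_counts[w1] + word_counts[w2])

      tokens = "multi word term extraction finds multi word terms".split()
      words = Counter(tokens)
      bigrams = Counter(zip(tokens, tokens[1:]))
      print(dice(bigrams, words, "multi", "word"))  # a strongly associated pair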
    Content
    A thesis presented to the University of Guelph in partial fulfilment of the requirements for the degree of Master of Science in Computer Science. Cf.: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10. 1.2013 19:22:47
    Imprint
    Guelph, Ontario : University of Guelph
  6. Farazi, M.: Faceted lightweight ontologies : a formalization and some experiments (2010) 0.20
    Abstract
    While classifications are heavily used to categorize web content, the evolution of the web foresees a more formal structure - ontology - which can serve this purpose. Ontologies are core artifacts of the Semantic Web which enable machines to use inference rules to conduct automated reasoning on data. Lightweight ontologies bridge the gap between classifications and ontologies. A lightweight ontology (LO) is an ontology representing a backbone taxonomy where the concept of the child node is more specific than the concept of the parent node. Formal lightweight ontologies can be generated from their informal ones. The key applications of formal lightweight ontologies are document classification, semantic search, and data integration. However, these applications suffer from the following problems: the disambiguation accuracy of the state-of-the-art NLP tools used in generating formal lightweight ontologies from their informal ones; the lack of background knowledge needed for the formal lightweight ontologies; and the limitations of ontology reuse. In this dissertation, we propose a novel solution to these problems in formal lightweight ontologies, namely the faceted lightweight ontology (FLO). An FLO is a lightweight ontology in which the terms present in each node label, and their concepts, are available in the background knowledge (BK), which is organized as a set of facets. A facet can be defined as a distinctive property of a group of concepts that can help in differentiating one group from another. Background knowledge can be defined as a subset of a knowledge base, such as WordNet, and often represents a specific domain.
    Content
    PhD dissertation at the International Doctorate School in Information and Communication Technology. Cf.: https://core.ac.uk/download/pdf/150083013.pdf.
    Imprint
    Trento : University / Department of information engineering and computer science
  7. Suchenwirth, L.: Sacherschliessung in Zeiten von Corona : neue Herausforderungen und Chancen (2019) 0.18
    Footnote
    https://journals.univie.ac.at/index.php/voebm/article/download/5332/5271/.
  8. Gödert, W.; Lepsky, K.: Informationelle Kompetenz : ein humanistischer Entwurf (2019) 0.15
    Footnote
    Reviewed in: Philosophisch-ethische Rezensionen, 09.11.2019 (Jürgen Czogalla), at: https://philosophisch-ethische-rezensionen.de/rezension/Goedert1.html. In: B.I.T. online 23(2020) H.3, S.345-347 (W. Sühl-Strohmenger) [at: https://www.b-i-t-online.de/heft/2020-03-rezensionen.pdf]. In: Open Password Nr. 805, 14.08.2020 (H.-C. Hobohm) [at: https://www.password-online.de/?mailpoet_router&endpoint=view_in_browser&action=view&data=WzE0MywiOGI3NjZkZmNkZjQ1IiwwLDAsMTMxLDFd].
  9. Piros, A.: Az ETO-jelzetek automatikus interpretálásának és elemzésének kérdései (2018) 0.15
    Abstract
    Converting UDC numbers manually to a complex format such as the one mentioned above is an unrealistic expectation; supporting the building of these representations, as far as possible automatically, is a well-founded requirement. An additional advantage of this approach is that existing records could also be processed and converted. In my dissertation I also want to prove that it is possible to design and implement an algorithm that can convert pre-coordinated UDC numbers into the introduced format by identifying all their elements and revealing their whole syntactic structure. I will discuss a feasible way of building a UDC-specific XML schema for describing the most detailed and complicated UDC numbers (containing not only the common auxiliary signs and numbers, but also the different types of special auxiliaries). The schema definition is available online at: http://piros.udc-interpreter.hu#xsd. The primary goal of my research is to prove that it is possible to support building, retrieving, and analyzing UDC numbers without compromises, taking in the whole syntactic richness of the scheme and storing UDC numbers in a way that preserves the meaning of pre-coordination. The research has also included the implementation of software that parses UDC classmarks, intended to prove that such a solution can be applied automatically, without additional effort, and even retrospectively on existing collections.
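    A minimal, hedged sketch of a first parsing step for pre-coordinated UDC numbers, splitting only on the common connector signs + / : - it is not the dissertation's interpreter and ignores special auxiliaries entirely:

      import re

      CONNECTORS = re.compile(r"([+/:])")

      def split_udc(classmark):
          # Return (element, connector) pairs for a simple UDC expression;
          # bracketed groups and special auxiliaries are not analysed here.
          parts = CONNECTORS.split(classmark)
          elements = parts[0::2]
          connectors = parts[1::2] + [None]
          return list(zip(elements, connectors))

      print(split_udc("821.111:94(410)"))  # [('821.111', ':'), ('94(410)', None)]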
    Content
    Cf. also: New automatic interpreter for complex UDC numbers. At: <https://udcc.org/files/AttilaPiros_EC_36-37_2014-2015.pdf>
  10. Shala, E.: ¬Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.11
    Footnote
    Cf.: https://www.researchgate.net/publication/271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls
  11. Herb, U.; Beucke, D.: ¬Die Zukunft der Impact-Messung : Social Media, Nutzung und Zitate im World Wide Web (2013) 0.10
    Content
    Cf.: https://www.leibniz-science20.de/forschung/projekte/altmetrics-in-verschiedenen-wissenschaftsdisziplinen/
  12. Ceri, S.; Bozzon, A.; Brambilla, M.; Della Valle, E.; Fraternali, P.; Quarteroni, S.: Web Information Retrieval (2013) 0.09
    Abstract
    With the proliferation of huge amounts of (heterogeneous) data on the Web, the importance of information retrieval (IR) has grown considerably over the last few years. Big players in the computer industry, such as Google, Microsoft and Yahoo!, are the primary contributors of technology for fast access to Web-based information; and searching capabilities are now integrated into most information systems, ranging from business management software and customer relationship systems to social networks and mobile phone applications. Ceri and his co-authors aim at taking their readers from the foundations of modern information retrieval to the most advanced challenges of Web IR. To this end, their book is divided into three parts. The first part addresses the principles of IR and provides a systematic and compact description of basic information retrieval techniques (including binary, vector space and probabilistic models as well as natural language search processing) before focusing on its application to the Web. Part two addresses the foundational aspects of Web IR by discussing the general architecture of search engines (with a focus on the crawling and indexing processes), describing link analysis methods (specifically PageRank and HITS), addressing recommendation and diversification, and finally presenting advertising in search (the main source of revenues for search engines). The third and final part describes advanced aspects of Web search, each chapter providing a self-contained, up-to-date survey on current Web research directions. Topics in this part include meta-search and multi-domain search, semantic search, search in the context of multimedia data, and crowd search. The book is ideally suited to courses on information retrieval, as it covers all Web-independent foundational aspects. Its presentation is self-contained and does not require prior background knowledge. It can also be used in the context of classic courses on data management, allowing the instructor to cover both structured and unstructured data in various formats. Its classroom use is facilitated by a set of slides, which can be downloaded from www.search-computing.org.
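    As a pointer to the kind of link-analysis method surveyed in part two, here is a bare-bones PageRank power iteration; this is a generic textbook-style sketch, not code from the book, and the tiny link graph is made up:

      def pagerank(links, damping=0.85, iters=50):
          # links: dict node -> list of nodes it points to; every node must
          # appear as a key. Dangling-node mass is simply dropped here.
          nodes = list(links)
          rank = {n: 1.0 / len(nodes) for n in nodes}
          for _ in range(iters):
              new = {n: (1 - damping) / len(nodes) for n in nodes}
              for n, outs in links.items():
                  if outs:
                      share = damping * rank[n] / len(outs)
                      for m in outs:
                          new[m] += share
              rank = new
          return rank

      print(pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]}))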
    Date
    16.10.2013 19:22:44
  13. Osiñska, V.: Visual analysis of classification scheme (2010) 0.05
    Abstract
    This paper proposes a novel methodology for visualizing a classification scheme. It is demonstrated with the Association for Computing Machinery (ACM) Computing Classification System (CCS). The collection, derived from the ACM digital library, contains 37,543 documents classified by CCS. The assigned classes, subject descriptors, and keywords were processed into a dataset to produce a graphical representation of the documents. The general conception is based on the similarity of co-classes (themes), proportional to the number of common publications. The final number of all possible classes and subclasses in the collection was 353, and therefore the similarity matrix of co-classes had the same dimension. A spherical surface was chosen as the target information space. The node locations of classes and documents on the sphere were obtained by means of multidimensional scaling (MDS) coordinates. By representing the surface on a plane, like a map projection, it is possible to analyze the visualization layout. The graphical patterns were organized into colour clusters. To evaluate the resulting visualization maps, graphics filtering was applied. The proposed method can be very useful in interdisciplinary research fields. It allows a great amount of heterogeneous information to be conveyed in a compact display, including topics, relationships among topics, frequency of occurrence, importance, and changes of these properties over time.
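    A hedged sketch of the co-class similarity and MDS embedding steps in outline only (planar rather than spherical, with made-up class labels; it assumes numpy and scikit-learn and is not the paper's code):

      import numpy as np
      from sklearn.manifold import MDS

      # Each document is the set of classes assigned to it (hypothetical labels).
      docs = [{"H.3.3", "I.2.7"}, {"H.3.3", "H.2.8"}, {"I.2.7", "H.3.3"}]
      classes = sorted(set().union(*docs))
      idx = {c: i for i, c in enumerate(classes)}

      # Co-class similarity = number of documents two classes share.
      sim = np.zeros((len(classes), len(classes)))
      for d in docs:
          for a in d:
              for b in d:
                  if a != b:
                      sim[idx[a], idx[b]] += 1

      dissim = sim.max() - sim            # turn similarities into dissimilarities
      np.fill_diagonal(dissim, 0.0)
      coords = MDS(n_components=2, dissimilarity="precomputed",
                   random_state=0).fit_transform(dissim)
      print(dict(zip(classes, coords.round(2))))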
    Date
    6. 1.2011 19:29:12
  14. Freeborn, R.B.: Planning, implementing, and assessing a CD reclassification project (2017) 0.04
    Abstract
    In 2015, plans were put in place to relocate the entire compact disc collection of the Music and Media Center in the Arts and Humanities Library at Penn State's University Park campus, and to reclassify the discs from an accession-number system to a more user-browsable one based on the Library of Congress Classification scheme. This article looks at the path that has been taken to reach this goal and provides an initial assessment of the project at its halfway point.
  15. Hjoerland, B.: ¬The importance of theories of knowledge : indexing and information retrieval as an example (2011) 0.04
    Abstract
    A recent study in information science (IS) raises important issues concerning the value of human indexing and basic theories of indexing and information retrieval, as well as the use of quantitative and qualitative approaches in IS and the underlying theories of knowledge informing the field. The present article uses L&E as the point of departure for demonstrating how more social and interpretative understandings may provide fruitful improvements for research in indexing, knowledge organization, and information retrieval. The article is motivated by the observation that philosophical contributions tend to be ignored in IS if they are not directly formed as criticisms or invitations to dialogue. It is part of the author's ongoing publication of articles about philosophical issues in IS and is intended to be followed by analyses of other examples of contributions to core issues in IS. Although it is formulated as a criticism of a specific paper, it should be seen as part of a general discussion of the philosophical foundation of IS and as support for the emerging social paradigm in this field.
    Date
    17. 3.2011 19:22:55
    Source
    Journal of the American Society for Information Science and Technology. 62(2011) no.1, S.72-77
  16. Caldera-Serrano, J.: Thematic description of audio-visual information on television (2010) 0.04
    Abstract
    Purpose - This paper endeavours to show the possibilities for the thematic description of audio-visual documents for television, with the aim of promoting and facilitating information retrieval. Design/methodology/approach - To achieve these goals, the relevant database fields are shown, along with the way in which they are organised for indexing and thematic element description, analysed, and used as an example. Some of the database fields are extracted from an analytical study of the documentary system of television in Spain; others are being tested in university television, on which indexing experiments are carried out. Findings - Not all thematic descriptions are used in television information systems; nevertheless, some television channels do use thematic descriptions of both image and sound, applying thesauri. Moreover, it is also possible to access sequences using full-text retrieval. Originality/value - Carrying out the documentary task with the described techniques promotes thematic indexing and hence thematic retrieval, which is without doubt one of the aspects most demanded by television journalists (along with people's names). This conceptualisation translates into the adaptation of databases to new indexing methods.
    Date
    29. 8.2010 12:40:35
  17. Fluhr, C.: Crosslingual access to photo databases (2012) 0.04
    Abstract
    This paper is about the search of photos in the photo databases of agencies that sell photos over the Internet. The problem is far from the behavior of photo databases managed by librarians and also far from the corpora generally used for research purposes. The descriptions mainly use single words, and it is well known that this is not the best basis for a good search; it increases the problem of semantic ambiguity. This problem of semantic ambiguity is crucial for cross-language querying. On the other hand, users are not aware of documentation techniques and generally use very simple queries, but want to get precise answers. This paper reports the experience gained in three years of use (2006-2008) of cross-language access to several of the main international commercial photo databases. The languages used were French, English, and German.
    Date
    17. 4.2012 14:25:22
    Source
    Next generation search engines: advanced models for information retrieval. Eds.: C. Jouis et al.
  18. Wlodarczyk, B.: Topic map as a method for the development of subject headings vocabulary : an introduction to the project of the National Library of Poland (2013) 0.04
    0.037995156 = product of:
      0.0854891 = sum of:
        0.02513438 = weight(_text_:retrieval in 1957) [ClassicSimilarity], result of:
          0.02513438 = score(doc=1957,freq=2.0), product of:
            0.10743652 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.035517205 = queryNorm
            0.23394634 = fieldWeight in 1957, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1957)
        0.025755936 = weight(_text_:use in 1957) [ClassicSimilarity], result of:
          0.025755936 = score(doc=1957,freq=2.0), product of:
            0.10875683 = queryWeight, product of:
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.035517205 = queryNorm
            0.23682132 = fieldWeight in 1957, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1957)
        0.02326865 = weight(_text_:of in 1957) [ClassicSimilarity], result of:
          0.02326865 = score(doc=1957,freq=24.0), product of:
            0.05554029 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.035517205 = queryNorm
            0.41895083 = fieldWeight in 1957, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1957)
        0.011330134 = product of:
          0.0339904 = sum of:
            0.0339904 = weight(_text_:29 in 1957) [ClassicSimilarity], result of:
              0.0339904 = score(doc=1957,freq=2.0), product of:
                0.12493842 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.035517205 = queryNorm
                0.27205724 = fieldWeight in 1957, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1957)
          0.33333334 = coord(1/3)
      0.44444445 = coord(4/9)
    
    Abstract
    Subject searches still account for a significant share of all searches in the National Library of Poland catalog, but understanding and exploring the National Library of Poland Subject Headings causes many problems, not only for end-users but also for many librarians. Another problem in the National Library of Poland is the insufficient use of relationships between terms. A solution could be a properly designed Web application based on a topic map, with appropriate visualization, that supports indexing and information retrieval in the National Library of Poland. The article presents the main stages of a planned project.
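    A minimal sketch of the topic-map idea (illustrative headings and association types, not the National Library of Poland data model): subject headings become topics, and typed associations make the relationships between terms explicit and navigable:

    from dataclasses import dataclass, field

    @dataclass
    class Topic:
        """A subject heading modelled as a topic-map topic."""
        name: str
        related: dict = field(default_factory=dict)  # association type -> set of topic names

        def associate(self, assoc_type: str, other: "Topic") -> None:
            # Record the association symmetrically on both topics.
            self.related.setdefault(assoc_type, set()).add(other.name)
            other.related.setdefault(assoc_type, set()).add(self.name)

    history = Topic("Poland - History")
    ww2 = Topic("World War, 1939-1945")
    history.associate("related term", ww2)
    print(history.related)  # {'related term': {'World War, 1939-1945'}}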
    Date
    29. 5.2015 19:16:59
  19. Frâncu, V.; Sabo, C.-N.: Implementation of a UDC-based multilingual thesaurus in a library catalogue : the case of BiblioPhil (2010) 0.04
    0.03791122 = product of:
      0.085300244 = sum of:
        0.03731488 = weight(_text_:retrieval in 3697) [ClassicSimilarity], result of:
          0.03731488 = score(doc=3697,freq=6.0), product of:
            0.10743652 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.035517205 = queryNorm
            0.34732026 = fieldWeight in 3697, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=3697)
        0.022076517 = weight(_text_:use in 3697) [ClassicSimilarity], result of:
          0.022076517 = score(doc=3697,freq=2.0), product of:
            0.10875683 = queryWeight, product of:
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.035517205 = queryNorm
            0.20298971 = fieldWeight in 3697, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.046875 = fieldNorm(doc=3697)
        0.016284661 = weight(_text_:of in 3697) [ClassicSimilarity], result of:
          0.016284661 = score(doc=3697,freq=16.0), product of:
            0.05554029 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.035517205 = queryNorm
            0.2932045 = fieldWeight in 3697, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=3697)
        0.009624182 = product of:
          0.028872546 = sum of:
            0.028872546 = weight(_text_:22 in 3697) [ClassicSimilarity], result of:
              0.028872546 = score(doc=3697,freq=2.0), product of:
                0.1243752 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.035517205 = queryNorm
                0.23214069 = fieldWeight in 3697, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3697)
          0.33333334 = coord(1/3)
      0.44444445 = coord(4/9)
    
    Abstract
    In order to enhance the use of Universal Decimal Classification (UDC) numbers in information retrieval, the authors have represented the classification with multilingual thesaurus descriptors and implemented this solution in an automated way. The authors illustrate a solution implemented in the BiblioPhil library system. The standard formats used are UNIMARC for the subject authority records (i.e. the UDC-based multilingual thesaurus) and MARC XML for data transfer. The multilingual thesaurus was built according to existing standards, with the constituent parts of the classification notations used as the basis for search terms in multilingual information retrieval. The verbal equivalents, descriptors and non-descriptors, are given in Romanian, English and French and are used to expand the number of concepts. This approach saves the indexer's time and provides more user-friendly and easier access to the bibliographic information. The multilingual aspect of the thesaurus enhances information access for a greater number of online users.
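    As a rough illustration of the mechanism (the notations and descriptors below are made up for the example; this is not the BiblioPhil implementation), the thesaurus can be modelled as a mapping from UDC notation elements to verbal equivalents per language, so that a query in Romanian, English or French resolves to the same notation:

    # Illustrative entries: UDC notation element -> descriptors per language.
    thesaurus = {
        "025.4": {"ro": ["clasificare"], "en": ["classification"], "fr": ["classification"]},
        "027.5": {"ro": ["biblioteci nationale"], "en": ["national libraries"],
                  "fr": ["bibliotheques nationales"]},
    }

    def notations_for(term: str) -> set:
        """Return the UDC notations whose descriptors, in any language, match the term."""
        t = term.lower()
        return {
            notation
            for notation, by_lang in thesaurus.items()
            if any(t == d.lower() for descriptors in by_lang.values() for d in descriptors)
        }

    print(notations_for("classification"))  # {'025.4'} - same result for "clasificare"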
    Date
    22. 7.2010 20:40:56
    Theme
    Klassifikationssysteme im Online-Retrieval
  20. Hjoerland, B.: Classical databases and knowledge organisation : a case for Boolean retrieval and human decision-making during search (2014) 0.04
    0.037683114 = product of:
      0.084787 = sum of:
        0.043976005 = weight(_text_:retrieval in 1398) [ClassicSimilarity], result of:
          0.043976005 = score(doc=1398,freq=12.0), product of:
            0.10743652 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.035517205 = queryNorm
            0.40932083 = fieldWeight in 1398, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1398)
        0.018397098 = weight(_text_:use in 1398) [ClassicSimilarity], result of:
          0.018397098 = score(doc=1398,freq=2.0), product of:
            0.10875683 = queryWeight, product of:
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.035517205 = queryNorm
            0.1691581 = fieldWeight in 1398, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1398)
        0.014393743 = weight(_text_:of in 1398) [ClassicSimilarity], result of:
          0.014393743 = score(doc=1398,freq=18.0), product of:
            0.05554029 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.035517205 = queryNorm
            0.25915858 = fieldWeight in 1398, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1398)
        0.008020152 = product of:
          0.024060456 = sum of:
            0.024060456 = weight(_text_:22 in 1398) [ClassicSimilarity], result of:
              0.024060456 = score(doc=1398,freq=2.0), product of:
                0.1243752 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.035517205 = queryNorm
                0.19345059 = fieldWeight in 1398, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1398)
          0.33333334 = coord(1/3)
      0.44444445 = coord(4/9)
    
    Abstract
    This paper considers classical bibliographic databases based on the Boolean retrieval model (for example MEDLINE and PsycInfo). This model is challenged by modern search engines and information retrieval (IR) researchers, who often consider Boolean retrieval a less efficient approach. The paper examines this claim and argues for the continued value of Boolean systems, which implies two further issues: (1) the important role of human expertise in searching (expert searchers and "information literacy") and (2) the role of knowledge organization (KO) in the design and use of classical databases, including controlled vocabularies and human indexing. An underlying issue is the kind of retrieval system one should aim for. It is suggested that Julian Warner's (2010) differentiation between the computer science tradition, which aims at automatically transforming queries into (ranked) sets of relevant documents, and an older library-oriented tradition, which aims at increasing the "selection power" of users, is important. The Boolean retrieval model is important because it provides users with the power to make informed searches and to retain full control over what is found and what is not found. These issues may also have important implications for the maintenance of information science and KO as research fields, as well as for the information profession as a profession in its own right.
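    A minimal sketch of the Boolean model under discussion (a toy three-document index, not MEDLINE or PsycInfo): the result is the exact set of documents satisfying the expression, so the searcher keeps full control over what is and is not retrieved:

    from functools import reduce

    docs = {
        1: "boolean retrieval with controlled vocabularies",
        2: "ranked retrieval in modern search engines",
        3: "human indexing and controlled vocabularies in classical databases",
    }

    # Inverted index: term -> set of document ids.
    index = {}
    for doc_id, text in docs.items():
        for term in text.split():
            index.setdefault(term, set()).add(doc_id)

    def AND(*terms): return reduce(set.intersection, (index.get(t, set()) for t in terms))
    def OR(*terms):  return reduce(set.union, (index.get(t, set()) for t in terms))
    def NOT(term):   return set(docs) - index.get(term, set())

    # retrieval AND controlled AND NOT ranked
    print(AND("retrieval", "controlled") & NOT("ranked"))  # {1}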
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik

Types

  • a 4391
  • el 398
  • m 297
  • s 92
  • x 55
  • r 20
  • b 7
  • n 7
  • i 4
  • ag 2
  • p 2
  • v 1
  • z 1
