Search (308 results, page 1 of 16)

  • Filter: type_ss:"x"
  1. Verwer, K.: Freiheit und Verantwortung bei Hans Jonas (2011) 0.18
    
    Content
    Cf.: http://creativechoice.org/doc/HansJonas.pdf.
  2. Piros, A.: Az ETO-jelzetek automatikus interpretálásának és elemzésének kérdései (2018) 0.10
    
    Abstract
    Converting UDC numbers manually to a complex format such as the one mentioned above is an unrealistic expectation; supporting the building of these representations, as far as possible automatically, is a well-founded requirement. An additional advantage of this approach is that existing records could also be processed and converted. In my dissertation I also want to prove that it is possible to design and implement an algorithm that can convert pre-coordinated UDC numbers into the introduced format by identifying all their elements and revealing their whole syntactic structure. I will discuss a feasible way of building a UDC-specific XML schema for describing the most detailed and complicated UDC numbers (containing not only the common auxiliary signs and numbers, but also the different types of special auxiliaries). The schema definition is available online at: http://piros.udc-interpreter.hu#xsd. The primary goal of my research is to prove that it is possible to support the building, retrieval, and analysis of UDC numbers without compromises, by taking into account the whole syntactic richness of the scheme and by storing UDC numbers in a way that preserves the meaning of pre-coordination. The research also included the implementation of software that parses UDC classmarks, intended to prove that such a solution can be applied automatically, without any additional effort, even retrospectively to existing collections.
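    The parsing idea can be pictured with a short sketch. The following Python fragment is an illustration only, not Piros's algorithm or schema: it splits a pre-coordinated UDC classmark at its top-level connector signs and lifts common auxiliaries (place, time, language) into a toy XML view; the function name, regexes, and output format are our own assumptions, and real UDC syntax is considerably richer.

      import re

      # Connector signs joining top-level elements of a pre-coordinated number.
      CONNECTORS = re.compile(r'(\+|::|:|/)')
      # Common auxiliaries: (place), "time", =language.
      COMMON_AUX = re.compile(r'\((\d[^)]*)\)|"([^"]*)"|=(\d+)')

      def parse_udc(classmark: str) -> str:
          """Emit a toy XML view of a UDC classmark's top-level structure."""
          out = ['<udc notation="%s">' % classmark]
          for part in CONNECTORS.split(classmark):
              if CONNECTORS.fullmatch(part):
                  out.append('  <connector sign="%s"/>' % part)
              elif part:
                  main = COMMON_AUX.sub('', part)
                  out.append('  <element main="%s">' % main)
                  for place, time, lang in COMMON_AUX.findall(part):
                      if place: out.append('    <aux type="place">%s</aux>' % place)
                      if time:  out.append('    <aux type="time">%s</aux>' % time)
                      if lang:  out.append('    <aux type="language">%s</aux>' % lang)
                  out.append('  </element>')
          out.append('</udc>')
          return '\n'.join(out)

      print(parse_udc('94(410)+32'))
      print(parse_udc('821.111:94'))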
    Content
    See also: New automatic interpreter for complex UDC numbers. At: https://udcc.org/files/AttilaPiros_EC_36-37_2014-2015.pdf.
  3. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.10
    
    Abstract
    The successes of information retrieval (IR) in recent decades were built upon bag-of-words representations. Effective as it is, bag-of-words is only a shallow text understanding; there is a limited amount of information for document ranking in the word space. This dissertation goes beyond words and builds knowledge based text representations, which embed the external and carefully curated information from knowledge bases, and provide richer and structured evidence for more advanced information retrieval systems. This thesis research first builds query representations with entities associated with the query. Entities' descriptions are used by query expansion techniques that enrich the query with explanation terms. Then we present a general framework that represents a query with entities that appear in the query, are retrieved by the query, or frequently show up in the top retrieved documents. A latent space model is developed to jointly learn the connections from query to entities and the ranking of documents, modeling the external evidence from knowledge bases and internal ranking features cooperatively. To further improve the quality of relevant entities, a defining factor of our query representations, we introduce learning to rank to entity search and retrieve better entities from knowledge bases. In the document representation part, this thesis research also moves one step forward with a bag-of-entities model, in which documents are represented by their automatic entity annotations, and the ranking is performed in the entity space.
    This proposal includes plans to improve the quality of relevant entities with a co-learning framework that learns from both entity labels and document labels. We also plan to develop a hybrid ranking system that combines word-based and entity-based representations while taking their uncertainties into account. Finally, we plan to enrich the text representations with connections between entities: we propose several ways to infer entity graph representations for texts and to rank documents using these structured representations. This dissertation overcomes the limitations of word-based representations with external and carefully curated information from knowledge bases. We believe this thesis research is a solid start towards a new generation of intelligent, semantic, and structured information retrieval.
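    As a concrete reading of the bag-of-entities idea, consider this minimal sketch (entity IDs, corpus, and weighting are our own assumptions, not the dissertation's actual model): documents are represented as bags of entity annotations, and ranking is a TF-IDF style dot product in entity space rather than word space.

      from collections import Counter
      import math

      # Toy corpus: document -> bag of entity annotations (IDs invented).
      docs = {
          'd1': Counter({'Q42': 3, 'Q5': 1}),
          'd2': Counter({'Q42': 1, 'Q937': 2}),
      }

      def idf(entity):
          """Smoothed inverse document frequency in entity space."""
          n = sum(1 for bag in docs.values() if entity in bag)
          return math.log((1 + len(docs)) / (1 + n)) + 1

      def score(query_entities, bag):
          """TF-IDF style dot product between query entities and a document bag."""
          return sum(bag[e] * idf(e) for e in query_entities)

      query = ['Q42', 'Q937']   # entities linked in the query text
      print(sorted(docs, key=lambda d: score(query, docs[d]), reverse=True))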
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  4. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.10
    
    Abstract
    With the explosion of possibilities for ubiquitous content production, the information overload problem has reached a level of complexity that can no longer be managed by traditional modelling approaches. Due to their purely syntactic nature, traditional information retrieval approaches have not succeeded in treating content itself (i.e. its meaning, not its representation), which makes the results of a retrieval process of very limited use for a user's task at hand. In the last ten years, ontologies have evolved from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, so that the retrieval process can be driven by the meaning of the content. However, the retrieval process is highly ambiguous: a user, unfamiliar with the underlying repository and/or query syntax, merely approximates his information need in a query. This implies the necessity to include the user more actively in the retrieval process in order to close the gap between the meaning of the content and the meaning of the user's query (i.e. his information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with the user in order to interpret the meaning of his query conceptually, while the underlying domain ontology drives the conceptualisation process. In this way the retrieval process evolves from a query evaluation process into a highly interactive cooperation between the user and the retrieval system, in which the system tries to anticipate the user's information need and to deliver relevant content proactively. Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure, strongly influenced by the user's preferences. This cooperation process is realised as the so-called Librarian Agent Query Refinement Process. In order to clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics emerges automatically from the results. Our evaluation studies have shown that the ability to conceptualise a user's information need in the right manner, and to interpret the retrieval results accordingly, is a key issue in realising much more meaningful information retrieval systems.
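    The interactive refinement step can be pictured with a minimal sketch (the toy taxonomy, labels, and function names are our own assumptions, not the thesis's Librarian Agent implementation): instead of only evaluating the query, the system interprets an ambiguous query term as one or more ontology concepts and offers their narrower concepts as candidate refinements.

      # Toy domain taxonomy (invented): concept -> narrower concepts.
      taxonomy = {
          'jaguar (animal)': ['panthera onca'],
          'jaguar (car)': ['jaguar e-type'],
      }

      def interpret(term):
          """Concepts whose label contains the query term (crude disambiguation)."""
          return [c for c in taxonomy if term in c]

      def refine(term):
          """For each plausible interpretation, offer narrower concepts as refinements."""
          return {c: taxonomy[c] for c in interpret(term)}

      print(refine('jaguar'))
      # {'jaguar (animal)': ['panthera onca'], 'jaguar (car)': ['jaguar e-type']}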
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
    Footnote
    Dissertation accepted by the Fakultät für Wirtschaftswissenschaften of the Universität Fridericiana zu Karlsruhe for the award of the academic degree of Doktor der Wirtschaftswissenschaften (Dr. rer. pol.).
    Imprint
    Karlsruhe : Fakultät für Wirtschaftswissenschaften der Universität Fridericiana zu Karlsruhe
  5. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.09
    
    Abstract
    The thesis presents the construction of a thematically ordered thesaurus based on the subject headings of the Integrated Authority File (GND), using the DDC notations they contain. The DDC subject groups of the Deutsche Nationalbibliothek form the top level of order of this thesaurus. The thesaurus is constructed in a rule-based fashion, applying Linked Data principles in a SPARQL processor. It serves the automated extraction of metadata from scholarly publications by means of a computational-linguistic extractor that processes digital full texts. The extractor identifies subject headings by comparing character strings against the labels in the thesaurus, orders the hits by their relevance in the text, and returns the assigned subject groups in ranked order. The basic assumption is that the sought subject group is returned among the top ranks. The performance of the method is validated in a three-stage procedure. First, a gold standard is built from documents retrievable in the DNB online catalogue, based on metadata and the findings of a brief inspection. The documents are spread over 14 of the subject groups, with a lot size of 50 documents each. All documents are indexed with the extractor and the categorisation results are documented. Finally, the resulting retrieval performance is assessed both for a hard (binary) categorisation and for a ranked return of the subject groups.
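    The matching and ranking step can be sketched as follows (the toy thesaurus, labels, and DDC groups are invented for illustration; the actual extractor is a computational-linguistic component, not plain substring counting):

      from collections import Counter

      # Toy thesaurus (invented): preferred label -> attached DNB subject group.
      thesaurus = {
          'Bibliothek': '020',
          'Thesaurus': '020',
          'Informatik': '004',
      }

      def rank_subject_groups(fulltext: str) -> list:
          """Match labels as strings, weight hits, return groups in rank order."""
          hits = Counter()
          for label, group in thesaurus.items():
              n = fulltext.count(label)   # naive substring count as relevance proxy
              if n:
                  hits[group] += n
          return hits.most_common()       # the sought group should rank near the top

      text = 'Eine Bibliothek pflegt einen Thesaurus; die Informatik hilft dabei.'
      print(rank_subject_groups(text))    # [('020', 2), ('004', 1)]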
    Content
    Master's thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. Cf. also the presentation at: https://wiki.dnb.de/download/attachments/252121510/DA3%20Workshop-Gabler.pdf?version=1&modificationDate=1671093170000&api=v2.
  6. Farazi, M.: Faceted lightweight ontologies : a formalization and some experiments (2010) 0.09
    
    Abstract
    While classifications are heavily used to categorize web content, the evolution of the web foresees a more formal structure - the ontology - which can serve this purpose. Ontologies are core artifacts of the Semantic Web that enable machines to use inference rules to conduct automated reasoning on data. Lightweight ontologies bridge the gap between classifications and ontologies. A lightweight ontology (LO) is an ontology representing a backbone taxonomy in which the concept of a child node is more specific than the concept of its parent node. Formal lightweight ontologies can be generated from informal ones. The key applications of formal lightweight ontologies are document classification, semantic search, and data integration. However, these applications suffer from the following problems: the disambiguation accuracy of the state-of-the-art NLP tools used in generating formal lightweight ontologies from informal ones; the lack of background knowledge needed for the formal lightweight ontologies; and the limitations of ontology reuse. In this dissertation we propose a novel solution to these problems: the faceted lightweight ontology (FLO). An FLO is a lightweight ontology in which the terms present in each node label, and their concepts, are available in the background knowledge (BK), which is organized as a set of facets. A facet can be defined as a distinctive property of a group of concepts that can help in differentiating one group from another. Background knowledge can be defined as a subset of a knowledge base, such as WordNet, and often represents a specific domain.
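    The defining property of a lightweight ontology can be sketched with a set-based simplification (our own, not the dissertation's formalization): model each node's concept as a set of atomic concepts from the background knowledge and require every child to carry at least its parent's constraints, so each child is more specific than its parent.

      # Toy backbone taxonomy: node -> set of atomic concepts from the BK
      # (labels invented).
      nodes = {
          'animals': {'animal'},
          'wild animals': {'animal', 'wild'},
      }
      edges = [('animals', 'wild animals')]   # (parent, child)

      def is_lightweight(nodes, edges):
          """Every child concept must subsume its parent's constraint set."""
          return all(nodes[parent] <= nodes[child] for parent, child in edges)

      print(is_lightweight(nodes, edges))     # True: each child adds constraints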
    Content
    PhD dissertation at the International Doctorate School in Information and Communication Technology. Cf.: https://core.ac.uk/download/pdf/150083013.pdf.
  7. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.09
    
    Abstract
    In this thesis we propose three new word-association measures for multi-word term extraction. We combine these association measures with the LocalMaxs algorithm in our extraction model and compare the results of different multi-word term extraction methods. Our approach is language- and domain-independent and requires no training data. It can be applied to such tasks as text summarization, information retrieval, and document classification. We further explore the potential of using multi-word terms as an effective representation for general web-page summarization. We extract multi-word terms from human-written summaries in a large collection of web pages, and generate the summaries by aligning document words with these multi-word terms. Our system applies machine translation technology to learn the alignment process from a training set and focuses on selecting high-quality multi-word terms from human-written summaries to generate suitable results for web-page summarization.
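    The LocalMaxs selection criterion can be sketched as follows (glue values are invented; a real system computes them over a corpus with an association measure such as the ones the thesis proposes): an n-gram is kept as a multi-word term when its "glue" exceeds that of its (n-1)-gram parts and of every (n+1)-gram extension observed.

      # Toy glue table: n-gram -> association strength (values invented).
      glue = {
          ('information',): 0.0,                       # unigrams get no glue
          ('information', 'retrieval'): 0.9,
          ('information', 'retrieval', 'system'): 0.4,
      }

      def sub_ngrams(ng):
          """The two (n-1)-grams contained in ng (none for bigrams)."""
          return [ng[:-1], ng[1:]] if len(ng) > 2 else []

      def super_ngrams(ng):
          """(n+1)-grams in the table that extend ng on either side."""
          return [s for s in glue if len(s) == len(ng) + 1
                  and (s[:-1] == ng or s[1:] == ng)]

      def local_max(ng):
          neighbours = sub_ngrams(ng) + super_ngrams(ng)
          return len(ng) >= 2 and all(glue[ng] > glue.get(s, 0.0) for s in neighbours)

      print([ng for ng in glue if local_max(ng)])   # [('information', 'retrieval')]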
    Content
    A thesis presented to The University of Guelph in partial fulfilment of the requirements for the degree of Master of Science in Computer Science. Cf.: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10. 1.2013 19:22:47
  8. Shala, E.: Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.08
    
    Footnote
    Cf.: https://www.researchgate.net/publication/271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls.
  9. Thiel, A.: Neue Medien und Technologien im bibliothekarischen Informationsdienst : Analyse der Fachliteratur sowie Untersuchung der 'Academic American Encyclopedia' (CD-ROM) und von 'Meyers Bildschirm-Lexikon' (Btx) (1988) 0.01
    
  10. Gellner, S.: Öffentliche Bibliotheken: Dienstleistungspartner für Industrie und Handel? : Möglichkeiten der Informationsarbeit an ausgewählten Beispielen (1992) 0.01
    
  11. Stanz, G.: Medienarchive: Analyse einer unterschätzten Ressource : Archivierung, Dokumentation, und Informationsvermittlung in Medien bei besonderer Berücksichtigung von Pressearchiven (1994) 0.01
    
    Date
    22. 2.1997 19:50:29
  12. Parsian, D.: Überlegungen zur Aufstellungssystematik und Reklassifikation an der Fachbereichsbibliothek Afrikawissenschaften und Orientalistik (2007) 0.01
    
    Abstract
    The practical use of the Dewey Decimal Classification (DDC) for subject indexing and as a shelf classification in academic libraries of the German-speaking area has little tradition and has so far hardly been covered by the literature. After describing the general conditions and the problems at the Fachbereichsbibliothek Afrikanistik/Orientalistik of the Universität Wien, the author gives an overview of the experience with, and the assessment of, DDC in comparable academic libraries, mainly in the German- and English-speaking areas, defines criteria for a new classification scheme, and clarifies to what extent these can be met by using DDC. Starting from the quantitative and spatial conditions and the segmentation of the holdings with regard to the requirements of reclassification, and on the basis of his own experience and plausibility checks, the author estimates, in three variants, the staff and time required to introduce DDC in a reclassification project. Finally, the thesis reports practical experience in working with DDC, using the subject area "Islamic studies" as an example, pointing out some peculiarities and problems in using DDC for reclassification.
    Footnote
    Cf.: http://othes.univie.ac.at/3016/1/Parsian_%C3%9Cberlegungen_zur_Aufstellungssystematik_und_Reklassifikation_an_der_AFOR.pdf.
  13. Quosig, D.: Umsetzung des Lehrbuches "Wirtschaftsinformation" in ein Online-Tutorial (2004) 0.01
    0.008303063 = product of:
      0.12454593 = sum of:
        0.11177859 = weight(_text_:wirtschaftswissenschaften in 4527) [ClassicSimilarity], result of:
          0.11177859 = score(doc=4527,freq=2.0), product of:
            0.11380646 = queryWeight, product of:
              6.3497796 = idf(docFreq=209, maxDocs=44218)
              0.017922899 = queryNorm
            0.98218143 = fieldWeight in 4527, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.3497796 = idf(docFreq=209, maxDocs=44218)
              0.109375 = fieldNorm(doc=4527)
        0.012767341 = product of:
          0.025534682 = sum of:
            0.025534682 = weight(_text_:online in 4527) [ClassicSimilarity], result of:
              0.025534682 = score(doc=4527,freq=2.0), product of:
                0.05439423 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.017922899 = queryNorm
                0.46943733 = fieldWeight in 4527, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4527)
          0.5 = coord(1/2)
      0.06666667 = coord(2/30)
    
    Field
    Wirtschaftswissenschaften
  14. Hartmann, S.: Inhaltliche Erschließung einer Call-Center-Datenbank : Konzeptentwicklung für die Kundentelefon-Wissensdatenbank der Deutschen Post AG (KT-WEB 2.0) (2005) 0.01
    0.0060773725 = product of:
      0.09116058 = sum of:
        0.02503327 = product of:
          0.05006654 = sum of:
            0.05006654 = weight(_text_:dienstleistungen in 3053) [ClassicSimilarity], result of:
              0.05006654 = score(doc=3053,freq=2.0), product of:
                0.10771505 = queryWeight, product of:
                  6.009912 = idf(docFreq=294, maxDocs=44218)
                  0.017922899 = queryNorm
                0.46480542 = fieldWeight in 3053, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.009912 = idf(docFreq=294, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3053)
          0.5 = coord(1/2)
        0.066127315 = weight(_text_:post in 3053) [ClassicSimilarity], result of:
          0.066127315 = score(doc=3053,freq=4.0), product of:
            0.10409636 = queryWeight, product of:
              5.808009 = idf(docFreq=360, maxDocs=44218)
              0.017922899 = queryNorm
            0.635251 = fieldWeight in 3053, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.808009 = idf(docFreq=360, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3053)
      0.06666667 = coord(2/30)
    
    Abstract
    The quality of the answers given by a call center of the information-hotline type is ensured by centrally providing information on a company's products and services. This can be realized in the form of databases such as the customer-telephone knowledge database of Deutsche Post AG (KT-WEB) analyzed in this thesis. For call-center agents to be able to access the information relevant to a customer call precisely, quickly and reliably, that information must be indexed accordingly. For KT-WEB, a concept for optimizing subject indexing and retrieval is therefore developed that goes beyond the indexing methods used so far (a classification scheme and full-text indexing): indexing with faceted subject headings.
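    Indexing with faceted subject headings, as proposed here, means assigning each knowledge-base entry controlled headings along several independent facets and filtering on their combination at search time. The following sketch is purely illustrative; the facet names (Produkt, Thema, Vorgang) and records are invented stand-ins, not content from KT-WEB:

        from dataclasses import dataclass, field

        @dataclass
        class Entry:
            title: str
            facets: dict = field(default_factory=dict)   # facet name -> set of headings

        kb = [
            Entry("Porto Inlandsbrief", {"Produkt": {"Brief"}, "Thema": {"Porto"}}),
            Entry("Nachsendeauftrag einrichten", {"Produkt": {"Brief", "Paket"}, "Vorgang": {"Nachsendung"}}),
        ]

        def search(kb, **wanted):
            """Return entries whose facets contain every requested heading."""
            return [e for e in kb
                    if all(v in e.facets.get(f, set()) for f, v in wanted.items())]

        print([e.title for e in search(kb, Produkt="Brief", Thema="Porto")])
        # -> ['Porto Inlandsbrief']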
  15. Closhen, J.: Entwurf und Implementierung eines Informationssystems für audiovisuelle Medien aus dem Bereich der U-Musik unter Verwendung des relationalen Datenbankmanagementsystems INGRES (1991) 0.01
    0.0060758544 = product of:
      0.09113781 = sum of:
        0.061413173 = weight(_text_:medien in 2755) [ClassicSimilarity], result of:
          0.061413173 = score(doc=2755,freq=2.0), product of:
            0.084356464 = queryWeight, product of:
              4.7066307 = idf(docFreq=1085, maxDocs=44218)
              0.017922899 = queryNorm
            0.7280198 = fieldWeight in 2755, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.7066307 = idf(docFreq=1085, maxDocs=44218)
              0.109375 = fieldNorm(doc=2755)
        0.029724635 = weight(_text_:u in 2755) [ClassicSimilarity], result of:
          0.029724635 = score(doc=2755,freq=2.0), product of:
            0.058687534 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.017922899 = queryNorm
            0.50648975 = fieldWeight in 2755, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.109375 = fieldNorm(doc=2755)
      0.06666667 = coord(2/30)
    
  16. Schlombs, U.: Sacherschließung in Staatlichen Allgemeinbibliotheken der DDR : kritische Darstellung der einzelnen Erschließungsverfahren und der zentralen Dienste (1982) 0.01
    0.005984619 = product of:
      0.089769274 = sum of:
        0.055798266 = product of:
          0.11159653 = sum of:
            0.11159653 = weight(_text_:dienste in 6815) [ClassicSimilarity], result of:
              0.11159653 = score(doc=6815,freq=2.0), product of:
                0.106369466 = queryWeight, product of:
                  5.934836 = idf(docFreq=317, maxDocs=44218)
                  0.017922899 = queryNorm
                1.0491407 = fieldWeight in 6815, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.934836 = idf(docFreq=317, maxDocs=44218)
                  0.125 = fieldNorm(doc=6815)
          0.5 = coord(1/2)
        0.03397101 = weight(_text_:u in 6815) [ClassicSimilarity], result of:
          0.03397101 = score(doc=6815,freq=2.0), product of:
            0.058687534 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.017922899 = queryNorm
            0.57884544 = fieldWeight in 6815, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.125 = fieldNorm(doc=6815)
      0.06666667 = coord(2/30)
    
  17. Riebe, U.: John R. Searles Position zum Leib-Seele-Problem (2008) 0.01
    0.005271801 = product of:
      0.039538503 = sum of:
        0.013148096 = weight(_text_:neue in 4567) [ClassicSimilarity], result of:
          0.013148096 = score(doc=4567,freq=2.0), product of:
            0.07302189 = queryWeight, product of:
              4.074223 = idf(docFreq=2043, maxDocs=44218)
              0.017922899 = queryNorm
            0.18005691 = fieldWeight in 4567, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.074223 = idf(docFreq=2043, maxDocs=44218)
              0.03125 = fieldNorm(doc=4567)
        0.017546622 = weight(_text_:medien in 4567) [ClassicSimilarity], result of:
          0.017546622 = score(doc=4567,freq=2.0), product of:
            0.084356464 = queryWeight, product of:
              4.7066307 = idf(docFreq=1085, maxDocs=44218)
              0.017922899 = queryNorm
            0.20800565 = fieldWeight in 4567, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.7066307 = idf(docFreq=1085, maxDocs=44218)
              0.03125 = fieldNorm(doc=4567)
        0.008492753 = weight(_text_:u in 4567) [ClassicSimilarity], result of:
          0.008492753 = score(doc=4567,freq=2.0), product of:
            0.058687534 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.017922899 = queryNorm
            0.14471136 = fieldWeight in 4567, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.03125 = fieldNorm(doc=4567)
        3.5103143E-4 = product of:
          0.0010530943 = sum of:
            0.0010530943 = weight(_text_:a in 4567) [ClassicSimilarity], result of:
              0.0010530943 = score(doc=4567,freq=2.0), product of:
                0.020665944 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.017922899 = queryNorm
                0.050957955 = fieldWeight in 4567, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4567)
          0.33333334 = coord(1/3)
      0.13333334 = coord(4/30)
    
    Abstract
    Few things are more interesting for the educated citizen today than following the latest findings of the neurosciences. Imaging techniques such as EEG, fMRI or MEG make it possible to "watch people think", or so the media put it, and current research reports come close to this view. Using brain measurements, Californian researchers recently succeeded in recognizing, with high probability, which image a test subject was looking at. The subject was first shown 1,750 pictures of natural scenes while the corresponding stimulation in the brain was recorded by fMRI. Attention focused on visual areas, which were transformed into a three-dimensional matrix whose individual segments are called voxels (analogous to two-dimensional pixels); the result was a database of voxel activity patterns. In a subsequent run the subject was shown 120 new images, the likely voxel activity was computed from the database, and the prediction was the image whose actual voxel pattern agreed most closely with the computed one. For subject A a hit rate of 92% was achieved, for subject B still 72%. The researchers optimistically conclude that their approach will make it possible to reconstruct visual impressions from brain measurements. This is an attempt to approach Kant's question "What is the human being?" in a materialist way. Referring back to earlier experiments by Benjamin Libet, some brain researchers today conclude that a person's conscious experience is mere accompaniment to deterministically running brain processes, because the experience lags behind the neuronal activity; it is also inferred that the felt freedom of the will is only an illusion, although Libet himself does not draw this hard conclusion. The results of such studies are highly interesting, but their interpretation demands great care, especially where consciousness is concerned. On the philosophical side, John Searle has engaged intensively with the topic and developed a theory that rejects all previous philosophical models.
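    The identification step described in this abstract is, at its core, nearest-neighbour matching: an encoding model predicts a voxel pattern for each of the 120 candidate images, and the prediction that correlates best with the measured pattern names the image. A schematic sketch with random stand-in data (not the study's actual encoding model) illustrates the principle:

        import numpy as np

        rng = np.random.default_rng(0)
        n_images, n_voxels = 120, 500
        predicted = rng.standard_normal((n_images, n_voxels))   # model predictions
        measured = predicted[42] + 0.5 * rng.standard_normal(n_voxels)  # noisy "scan"

        def identify(measured, predicted):
            """Index of the candidate image whose prediction correlates best."""
            corrs = [np.corrcoef(measured, p)[0, 1] for p in predicted]
            return int(np.argmax(corrs))

        print(identify(measured, predicted))  # recovers 42 at this noise level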
  18. Sommer, M.: Automatische Generierung von DDC-Notationen für Hochschulveröffentlichungen (2012) 0.00
    0.004947375 = product of:
      0.049473748 = sum of:
        0.0073510436 = product of:
          0.014702087 = sum of:
            0.014702087 = weight(_text_:29 in 587) [ClassicSimilarity], result of:
              0.014702087 = score(doc=587,freq=2.0), product of:
                0.063047156 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.017922899 = queryNorm
                0.23319192 = fieldWeight in 587, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=587)
          0.5 = coord(1/2)
        0.037222005 = weight(_text_:medien in 587) [ClassicSimilarity], result of:
          0.037222005 = score(doc=587,freq=4.0), product of:
            0.084356464 = queryWeight, product of:
              4.7066307 = idf(docFreq=1085, maxDocs=44218)
              0.017922899 = queryNorm
            0.44124663 = fieldWeight in 587, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.7066307 = idf(docFreq=1085, maxDocs=44218)
              0.046875 = fieldNorm(doc=587)
        0.0049006958 = product of:
          0.014702087 = sum of:
            0.014702087 = weight(_text_:29 in 587) [ClassicSimilarity], result of:
              0.014702087 = score(doc=587,freq=2.0), product of:
                0.063047156 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.017922899 = queryNorm
                0.23319192 = fieldWeight in 587, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=587)
          0.33333334 = coord(1/3)
      0.1 = coord(3/30)
    
    Content
    See: http://opus.bsz-bw.de/fhhv/volltexte/2012/397/pdf/Bachelorarbeit_final_Korrektur01.pdf. Bachelor's thesis, Hochschule Hannover, Fakultät III - Medien, Information und Design, Abteilung Information und Kommunikation, Studiengang Informationsmanagement
    Date
    29. 1.2013 15:44:43
    Imprint
    Hannover : Hochschule Hannover, Fakultät III - Medien, Information und Design, Abteilung Information und Kommunikation
  19. Waldhör, A.: Erstellung einer Konkordanz zwischen Basisklassifikation (BK) und Regensburger Verbundklassifikation (RVK) für den Fachbereich Recht (2012) 0.00
    0.0046189614 = product of:
      0.046189614 = sum of:
        0.00612587 = product of:
          0.01225174 = sum of:
            0.01225174 = weight(_text_:29 in 596) [ClassicSimilarity], result of:
              0.01225174 = score(doc=596,freq=2.0), product of:
                0.063047156 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.017922899 = queryNorm
                0.19432661 = fieldWeight in 596, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=596)
          0.5 = coord(1/2)
        0.031018337 = weight(_text_:medien in 596) [ClassicSimilarity], result of:
          0.031018337 = score(doc=596,freq=4.0), product of:
            0.084356464 = queryWeight, product of:
              4.7066307 = idf(docFreq=1085, maxDocs=44218)
              0.017922899 = queryNorm
            0.36770552 = fieldWeight in 596, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.7066307 = idf(docFreq=1085, maxDocs=44218)
              0.0390625 = fieldNorm(doc=596)
        0.009045405 = product of:
          0.013568108 = sum of:
            0.0013163678 = weight(_text_:a in 596) [ClassicSimilarity], result of:
              0.0013163678 = score(doc=596,freq=2.0), product of:
                0.020665944 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.017922899 = queryNorm
                0.06369744 = fieldWeight in 596, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=596)
            0.01225174 = weight(_text_:29 in 596) [ClassicSimilarity], result of:
              0.01225174 = score(doc=596,freq=2.0), product of:
                0.063047156 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.017922899 = queryNorm
                0.19432661 = fieldWeight in 596, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=596)
          0.6666667 = coord(2/3)
      0.1 = coord(3/30)
    
    Abstract
    The aim of this thesis was to create a concordance between the Regensburger Verbundklassifikation (RVK) and the Basisklassifikation (BK) for the subject area of law. Concordances are of interest to libraries in more than one respect: on the one hand, notations from different classification systems are brought together, so that a higher data density can be achieved; on the other hand, concordances can serve as a tool for faceted search in the Primo discovery system. The thesis falls into two parts. The first (theoretical) part deals with classifications as an aid to open-shelf arrangement and as part of classificatory subject indexing. Three major classification systems that play a significant role in subject indexing in Austria (the union classifications of the OBV) are then presented: the Basisklassifikation and the Regensburger Verbundklassifikation are briefly described, and it is examined how legal media are represented in them; in this context the current state of the RVK extension for Austrian law is also discussed. The Dewey Decimal Classification (DDC) is examined in more detail, by means of several practical examples, for its general suitability as a classification for legal media; in particular, the "concordance capability" of the DDC with respect to the other two systems in the field of law is determined, and a brief look at the differences between the Anglo-American legal system and European civil law rounds out the discussion. The second (practical) part contains the concordance table itself, in the form of a Microsoft Excel spreadsheet with a detailed commentary; the table also exists in an abridged form intended for practical implementation in the union database.
    Date
    3. 2.2013 17:25:29
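    In machine-usable form such a concordance is simply a table mapping each BK notation to one or more candidate RVK notations plus a relation tag (exact, broader, narrower), which a discovery system like Primo can use to expand facet values across schemes. The notation pairs in the following sketch are made-up placeholders, not rows from the thesis's actual Excel table:

        from csv import DictReader
        from io import StringIO

        # bk -> rvk mapping with a relation tag; placeholder rows only
        TABLE = """bk,rvk,relation
        86.10,PC 1000,broader
        86.20,PD 2000,exact
        """

        concordance = {}
        for row in DictReader(StringIO(TABLE)):
            concordance.setdefault(row["bk"].strip(), []).append(
                (row["rvk"].strip(), row["relation"].strip()))

        # Use case from the abstract: expand a BK facet value into RVK equivalents.
        print(concordance.get("86.20"))   # [('PD 2000', 'exact')]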
  20. Rötzer, A.: Die Einteilung der Wissenschaften : Analyse und Typologisierung von Wissenschaftsklassifikationen (2003) 0.00
    0.0045029162 = product of:
      0.04502916 = sum of:
        0.023009168 = weight(_text_:neue in 684) [ClassicSimilarity], result of:
          0.023009168 = score(doc=684,freq=8.0), product of:
            0.07302189 = queryWeight, product of:
              4.074223 = idf(docFreq=2043, maxDocs=44218)
              0.017922899 = queryNorm
            0.3150996 = fieldWeight in 684, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.074223 = idf(docFreq=2043, maxDocs=44218)
              0.02734375 = fieldNorm(doc=684)
        0.021712836 = weight(_text_:medien in 684) [ClassicSimilarity], result of:
          0.021712836 = score(doc=684,freq=4.0), product of:
            0.084356464 = queryWeight, product of:
              4.7066307 = idf(docFreq=1085, maxDocs=44218)
              0.017922899 = queryNorm
            0.25739387 = fieldWeight in 684, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.7066307 = idf(docFreq=1085, maxDocs=44218)
              0.02734375 = fieldNorm(doc=684)
        3.071525E-4 = product of:
          9.214575E-4 = sum of:
            9.214575E-4 = weight(_text_:a in 684) [ClassicSimilarity], result of:
              9.214575E-4 = score(doc=684,freq=2.0), product of:
                0.020665944 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.017922899 = queryNorm
                0.044588212 = fieldWeight in 684, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=684)
          0.33333334 = coord(1/3)
      0.1 = coord(3/30)
    
    Abstract
    To the degree that the sciences become particularized and atomized, it grows ever harder to keep an overview even of closely related disciplines, and the pragmatic functions of classification therefore gain strongly in importance. Moreover, it is the cross-sectional sciences that stand at the centre of research interest today and where the greatest advances are currently being made: think of cancer research, which at the molecular level moves in a research space between chemistry and biology. Medicine in particular offers many examples of research that crosses disciplinary boundaries: the whole field of genetic engineering, nanotechnology, but also medical informatics and robotics. For this reason a present-day classification of the sciences must serve not only pragmatic functions but epistemological ones as well: classifications of the sciences can make connections between disciplines visible and may thereby open up new paths of research. Nevertheless, the classification of the sciences has recently fallen into a crisis. The rejection of the systemic unity of science as a whole, which took hold in the wake of postmodern theory, confronted classification with problems it could not solve by its usual means, and new ways of classifying had to be found against the background of these new theoretical approaches. Every age finds its own solutions, and for the classification of the sciences the present has opened up new possibilities that can be realized with the help of the new media.
    Owing to the rapid proliferation and growing entanglement of the sciences, the classical two-dimensional, hierarchical classifications are now reaching their limits. Unambiguous hierarchization comes only at the expense of the relationships that could potentially be drawn between the disciplines being classified, for in order to preserve the logic of the hierarchy one must often sacrifice the logic of substantive connections. The presentation possibilities of the new media offer a solution in the form of multidimensional links and cross-references. ARTUR P. SCHMIDT took a step in this direction with his 'Wissensnavigator' of 1999, also published on CD-ROM. Drawing on Deleuze and Guattari's 'rhizome', he calls for an unhindered networking of knowledge in all directions and thereby sees himself in step with the developments of his time. Interactive use is meant to generate this total networking of knowledge, the user of the encyclopedia contributing to its evolution through his queries; its visualization is to be made possible by a 'hypercube' situated in a four-dimensional space and said to contain 'a neural network in a matrix'. Alongside this project, which must still be called utopian, there are at present a number of more conservative approaches to classification on the Internet that allow great differentiation but forgo unregulated 'hyperlinking'. Should projects such as ARTUR P. SCHMIDT's be realized, however, this might perhaps also fulfil the demand that Nietzsche still supposed to lie in the distant future.
    Multidimensional cross-references could also accommodate a two-way process that has been gathering pace for a good century: the differentiation of the sciences, which has accelerated especially in recent decades and has produced hybrid names such as 'physical chemistry', 'biophysics' or 'sociobiology', is accompanied by a process of integration that draws these sciences back together. This simultaneity of the opposing processes of differentiation and integration could create a new form of the unity of the sciences and set in motion a development of the sciences towards a unified whole, thereby realizing the great potential for new research results that lies in this dialectical interplay of differentiation and integration.

Languages

  • d 257
  • e 43
  • f 2
  • a 1
  • hu 1
  • pt 1

Types

  • el 22
  • m 17
  • a 1
  • r 1
