Search (437 results, page 1 of 22)

  • Filter: type_ss:"x"
  1. Verwer, K.: Freiheit und Verantwortung bei Hans Jonas (2011) 0.18
    
    Content
    Cf.: http://creativechoice.org/doc/HansJonas.pdf.
  2. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.12
    
    Abstract
    The successes of information retrieval (IR) in recent decades were built upon bag-of-words representations. Effective as it is, bag-of-words is only a shallow text understanding; there is a limited amount of information for document ranking in the word space. This dissertation goes beyond words and builds knowledge based text representations, which embed the external and carefully curated information from knowledge bases, and provide richer and structured evidence for more advanced information retrieval systems. This thesis research first builds query representations with entities associated with the query. Entities' descriptions are used by query expansion techniques that enrich the query with explanation terms. Then we present a general framework that represents a query with entities that appear in the query, are retrieved by the query, or frequently show up in the top retrieved documents. A latent space model is developed to jointly learn the connections from query to entities and the ranking of documents, modeling the external evidence from knowledge bases and internal ranking features cooperatively. To further improve the quality of relevant entities, a defining factor of our query representations, we introduce learning to rank to entity search and retrieve better entities from knowledge bases. In the document representation part, this thesis research also moves one step forward with a bag-of-entities model, in which documents are represented by their automatic entity annotations, and the ranking is performed in the entity space.
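The bag-of-entities ranking this abstract describes can be illustrated in a few lines. This is a minimal sketch with invented toy data; the document names, entity labels, and raw-frequency scoring are assumptions for illustration, not the dissertation's actual model:

```python
from collections import Counter

def bag_of_entities(annotations):
    # Represent a document as a multiset of its entity annotations.
    return Counter(annotations)

def entity_match_score(query_entities, doc_entities):
    # Score a document by how often the query's entities occur in it,
    # i.e. ranking performed in the entity space rather than the word space.
    doc = bag_of_entities(doc_entities)
    return sum(doc[e] for e in query_entities)

# Hypothetical toy collection of entity-annotated documents.
docs = {
    "d1": ["Carnegie_Mellon_University", "Information_retrieval", "Information_retrieval"],
    "d2": ["Ontology", "Semantic_Web"],
}
query_entities = ["Information_retrieval"]
ranking = sorted(docs, key=lambda d: entity_match_score(query_entities, docs[d]), reverse=True)
# d1 contains the query entity twice, d2 not at all, so d1 ranks first
```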
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  3. Farazi, M.: Faceted lightweight ontologies : a formalization and some experiments (2010) 0.12
    
    Abstract
    While classifications are heavily used to categorize web content, the evolution of the web foresees a more formal structure - ontology - which can serve this purpose. Ontologies are core artifacts of the Semantic Web which enable machines to use inference rules to conduct automated reasoning on data. Lightweight ontologies bridge the gap between classifications and ontologies. A lightweight ontology (LO) is an ontology representing a backbone taxonomy where the concept of the child node is more specific than the concept of the parent node. Formal lightweight ontologies can be generated from their informal ones. The key applications of formal lightweight ontologies are document classification, semantic search, and data integration. However, these applications suffer from the following problems: the disambiguation accuracy of the state of the art NLP tools used in generating formal lightweight ontologies from their informal ones; the lack of background knowledge needed for the formal lightweight ontologies; and the limitation of ontology reuse. In this dissertation, we propose a novel solution to these problems in formal lightweight ontologies; namely, faceted lightweight ontology (FLO). FLO is a lightweight ontology in which terms, present in each node label, and their concepts, are available in the background knowledge (BK), which is organized as a set of facets. A facet can be defined as a distinctive property of the groups of concepts that can help in differentiating one group from another. Background knowledge can be defined as a subset of a knowledge base, such as WordNet, and often represents a specific domain.
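The backbone-taxonomy shape of a lightweight ontology, in which each child node's concept is more specific than its parent's, can be sketched minimally. The child-to-parent dictionary and concept labels below are hypothetical, not taken from the dissertation:

```python
def ancestors(parent, node):
    # Walk up a backbone taxonomy stored as child -> parent links,
    # collecting the chain of increasingly general concepts.
    chain = []
    while node in parent:
        node = parent[node]
        chain.append(node)
    return chain

# Hypothetical toy taxonomy (labels illustrative only).
parent = {"sonata": "classical_music", "classical_music": "music"}
chain = ancestors(parent, "sonata")
# → ['classical_music', 'music']
```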
    Content
    PhD Dissertation at International Doctorate School in Information and Communication Technology. Cf.: https://core.ac.uk/download/pdf/150083013.pdf.
  4. Shala, E.: ¬Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.12
    
    Abstract
    When machines are described with concepts that originally served to describe human beings, the suspicion initially suggests itself that those machines possess specifically human abilities or properties. For bodily abilities that are imitated mechanically, an anthropomorphizing way of speaking has long been established in everyday language: it is hardly questioned that certain machines can weave, bake, move, or work. For non-bodily properties, for instance of a cognitive, social, or moral kind, matters are different. That "intelligent" and "computing" machines have meanwhile found their way into everyday usage would, however, be unthinkable without the long-standing discourse on artificial intelligence that shaped the second half of the past century in particular. More recently it is the concept of autonomy that is increasingly used to describe new technologies, such as "autonomous mobile robots" or "autonomous systems". By its name, the "autonomy" of these technologies refers to a particular kind of technological progress deriving from a capacity for self-legislation. From a philosophical point of view, however, this raises the question of how self-legislation is defined in this case, especially since in philosophy the concept of autonomy refers to the political or moral self-legislation of humans or groups of humans, or to their actions. In the Handbuch Robotik, by contrast, the author introduces the designation "autonomous" almost in passing by predicting that "[.] autonomous robots will in future even take over a large part of elderly care."
    Footnote
    Cf.: https://www.researchgate.net/publication/271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls.
  5. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.12
    
    Abstract
    In this thesis we propose three new word association measures for multi-word term extraction. We combine these association measures with LocalMaxs algorithm in our extraction model and compare the results of different multi-word term extraction methods. Our approach is language and domain independent and requires no training data. It can be applied to such tasks as text summarization, information retrieval, and document classification. We further explore the potential of using multi-word terms as an effective representation for general web-page summarization. We extract multi-word terms from human written summaries in a large collection of web-pages, and generate the summaries by aligning document words with these multi-word terms. Our system applies machine translation technology to learn the aligning process from a training set and focuses on selecting high quality multi-word terms from human written summaries to generate suitable results for web-page summarization.
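A word association measure of the kind this abstract combines with the LocalMaxs algorithm can be sketched generically. Pointwise mutual information over adjacent word pairs stands in here for the thesis's three (unnamed in this abstract) measures, and the sample sentence is invented:

```python
import math
from collections import Counter

def bigram_pmi(tokens):
    # Pointwise mutual information for adjacent word pairs: a generic
    # association measure in the spirit of those fed to a LocalMaxs-style
    # multi-word term extractor (not necessarily the thesis's measures).
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n_uni, n_bi = len(tokens), len(tokens) - 1
    return {
        pair: math.log((count / n_bi) /
                       ((unigrams[pair[0]] / n_uni) * (unigrams[pair[1]] / n_uni)))
        for pair, count in bigrams.items()
    }

tokens = ("information retrieval improves web page summarization "
          "for information retrieval").split()
scores = bigram_pmi(tokens)
# the recurring pair ("information", "retrieval") scores above chance
```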
    Content
    A Thesis presented to The University of Guelph in partial fulfilment of requirements for the degree of Master of Science in Computer Science. Cf.: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10. 1.2013 19:22:47
  6. Piros, A.: Az ETO-jelzetek automatikus interpretálásának és elemzésének kérdései (2018) 0.12
    
    Abstract
    Converting UDC numbers manually to a complex format such as the one mentioned above is an unrealistic expectation; supporting the building of these representations, as far as possible automatically, is a well-founded requirement. An additional advantage of this approach is that existing records could also be processed and converted. In my dissertation I also aim to prove that it is possible to design and implement an algorithm that can convert pre-coordinated UDC numbers into the introduced format by identifying all their elements and revealing their whole syntactic structure. I will discuss a feasible way of building a UDC-specific XML schema for describing the most detailed and complicated UDC numbers (containing not only the common auxiliary signs and numbers but also the different types of special auxiliaries). The schema definition is available online at: http://piros.udc-interpreter.hu#xsd. The primary goal of my research is to prove that it is possible to support building, retrieving, and analyzing UDC numbers without compromise, taking in the whole syntactic richness of the scheme and storing the UDC numbers in a way that preserves the meaning of their pre-coordination. The research has also included the implementation of software that parses UDC classmarks, intended to prove that such a solution can be applied automatically, without additional effort, and even retrospectively to existing collections.
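The syntactic analysis of pre-coordinated UDC numbers described above can be hinted at with a deliberately simplified tokenizer. The set of connecting signs handled here (+, /, :, ::) is only a fragment of real UDC syntax, and the sample classmark is illustrative, not taken from the dissertation:

```python
import re

def split_udc(classmark):
    # Split a pre-coordinated UDC number at its common connecting signs,
    # keeping the signs themselves ("::" must be tried before ":").
    # Special auxiliaries, parentheses, and quoted elements are omitted.
    parts = re.split(r"(::|:|\+|/)", classmark)
    return [p for p in parts if p]

tokens = split_udc("821.111+821.112.2:316.7")
# → ['821.111', '+', '821.112.2', ':', '316.7']
```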
    Content
    Cf. also: New automatic interpreter for complex UDC numbers. At: https://udcc.org/files/AttilaPiros_EC_36-37_2014-2015.pdf
  7. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.12
    
    Abstract
    The construction of a thematically ordered thesaurus is presented, based on the subject headings of the Gemeinsame Normdatei (GND) and using the DDC notations contained in it. The DDC subject groups of the German National Library form the top ordering level of this thesaurus. The thesaurus is constructed rule-based, using Linked Data principles in a SPARQL processor. It serves the automated extraction of metadata from scientific publications by means of a computational-linguistic extractor that processes digital full texts. The extractor identifies subject headings by string comparison against the labels in the thesaurus, ranks the matches by relevance in the text, and returns the assigned subject groups in ranked order. The underlying assumption is that the sought subject group is returned among the top ranks. The performance of the method is validated in a three-stage procedure. First, based on metadata and the findings of a brief inspection, a gold standard is compiled from documents retrievable in the online catalogue of the DNB. The documents are distributed over 14 of the subject groups, with a batch size of 50 documents each. All documents are processed with the extractor and the categorization results are documented. Finally, the resulting retrieval performance is assessed both for a hard (binary) categorization and for a ranked return of the subject groups.
    Content
    Master thesis Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. Cf. also the presentation at: https://wiki.dnb.de/download/attachments/252121510/DA3%20Workshop-Gabler.pdf?version=1&modificationDate=1671093170000&api=v2.
  8. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.10
    
    Abstract
    With the explosion of possibilities for ubiquitous content production, the information overload problem has reached a level of complexity that can no longer be managed by traditional modelling approaches. Due to their purely syntactical nature, traditional information retrieval approaches did not succeed in treating content itself (i.e. its meaning, not its representation). This leads to very low usefulness of the results of a retrieval process for the user's task at hand. In the last ten years ontologies have emerged from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, such that the retrieval process can be driven by the meaning of the content. However, the very ambiguous nature of the retrieval process, in which a user, due to unfamiliarity with the underlying repository and/or query syntax, merely approximates his information need in a query, implies the necessity of including the user in the retrieval process more actively in order to close the gap between the meaning of the content and the meaning of the user's query (i.e. his information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with the user in order to conceptually interpret the meaning of his query, while the underlying domain ontology drives the conceptualisation process. In that way the retrieval process evolves from a query evaluation process into a highly interactive cooperation between the user and the retrieval system, in which the system tries to anticipate the user's information need and to deliver the relevant content proactively.
    Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure strongly influenced by the user's preferences. This cooperation process is realized as the so-called Librarian Agent Query Refinement Process. In order to clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics automatically emerges from the results. Our evaluation studies have shown that the ability to conceptualize a user's information need in the right manner and to interpret the retrieval results accordingly is a key issue for realizing much more meaningful information retrieval systems.
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  9. Kiren, T.: ¬A clustering based indexing technique of modularized ontologies for information retrieval (2017) 0.04
    
    Abstract
    Indexing plays a vital role in Information Retrieval. With the availability of huge volume of information, it has become necessary to index the information in such a way to make easier for the end users to find the information they want efficiently and accurately. Keyword-based indexing uses words as indexing terms. It is not capable of capturing the implicit relation among terms or the semantics of the words in the document. To eliminate this limitation, ontology-based indexing came into existence, which allows semantic based indexing to solve complex and indirect user queries. Ontologies are used for document indexing which allows semantic based information retrieval. Existing ontologies or the ones constructed from scratch are used presently for indexing. Constructing ontologies from scratch is a labor-intensive task and requires extensive domain knowledge whereas use of an existing ontology may leave some important concepts in documents un-annotated. Using multiple ontologies can overcome the problem of missing out concepts to a great extent, but it is difficult to manage (changes in ontologies over time by their developers) multiple ontologies and ontology heterogeneity also arises due to ontologies constructed by different ontology developers. One possible solution to managing multiple ontologies and build from scratch is to use modular ontologies for indexing.
    Modular ontologies are built by combining modules from multiple relevant ontologies. Ontology heterogeneity also arises during modular ontology construction, because multiple ontologies are being dealt with in this process; the ontologies therefore need to be aligned before they are used to construct a modular ontology. Existing approaches to ontology alignment compare all the concepts of each ontology to be aligned, and are hence not optimized in terms of time or search-space utilization. A new indexing technique based on a modular ontology is proposed, together with an efficient ontology alignment technique that solves the heterogeneity problem during the construction of the modular ontology. The results are satisfactory: precision and recall are improved by 8% and 10% respectively, and the values of Pearson's correlation coefficient for degree of similarity, time, search-space requirement, precision and recall are close to 1, which shows that the results are significant. Further research could apply modular-ontology-based indexing to multimedia and biomedical information retrieval.
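A minimal sketch of the evaluation measures named in the abstract above (precision, recall, and Pearson's correlation coefficient); the document identifiers and sample values below are invented for illustration and are not taken from the thesis:

```python
import math

def precision_recall(retrieved, relevant):
    """Standard IR evaluation: precision = |retrieved & relevant| / |retrieved|,
    recall = |retrieved & relevant| / |relevant|."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    return hits / len(retrieved), hits / len(relevant)

def pearson_r(xs, ys):
    """Pearson's correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented example: 8 retrieved documents against 10 known-relevant ones.
p, r = precision_recall(retrieved=[1, 2, 3, 4, 5, 6, 7, 8],
                        relevant=[1, 2, 3, 4, 5, 6, 11, 12, 13, 14])
```

An r close to 1, as reported in the abstract, indicates a strong linear relationship between the compared measurement series.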
    Content
    Submitted to the Faculty of the Computer Science and Engineering Department of the University of Engineering and Technology Lahore in partial fulfillment of the requirements for the Degree of Doctor of Philosophy in Computer Science (2009 - 009-PhD-CS-04). Cf.: http://prr.hec.gov.pk/jspui/bitstream/123456789/8375/1/Taybah_Kiren_Computer_Science_HSR_2017_UET_Lahore_14.12.2017.pdf.
    Date
    20. 1.2015 18:30:22
  10. Noy, N.F.: Knowledge representation for intelligent information retrieval in experimental sciences (1997) 0.02
    Abstract
    More and more information is available on-line every day. The greater the amount of on-line information, the greater the demand for tools that process and disseminate this information. Processing electronic information in the form of text and answering users' queries about that information intelligently is one of the great challenges in natural language processing and information retrieval. The research presented in this talk is centered on the latter of these two tasks: intelligent information retrieval. In order for information to be retrieved, it first needs to be formalized in a database or knowledge base. The ontology for this formalization, and the assumptions it is based on, are crucial to successful intelligent information retrieval. We have concentrated our effort on developing an ontology for representing knowledge in the domains of the experimental sciences, molecular biology in particular. We show that existing ontological models cannot be readily applied to represent this domain adequately. For example, the fundamental notion of ontology design that every "real" object is defined as an instance of a category seems incompatible with a universe where objects can change their category as a result of experimental procedures. Another important problem is representing complex structures such as DNA, mixtures, populations of molecules, etc., which are very common in molecular biology. We present the extensions that need to be made to an ontology to cover these issues: the representation of transformations that change the structure and/or category of their participants, and the component relations and spatial structures of complex objects. We demonstrate examples of how the proposed representations can be used to improve the quality and completeness of answers to user queries, discuss techniques for evaluating ontologies, and show a prototype of an information retrieval system that we developed.
    Content
    Submitted in partial fulfillment of the requirements for the Degree of Doctor of Philosophy in Computer Science in the College of Computer Science at Northeastern University, Boston, MA. Cf.: http://www.stanford.edu/~natalya/papers/Thesis.pdf.
  11. Stünkel, M.: Neuere Methoden der inhaltlichen Erschließung schöner Literatur in öffentlichen Bibliotheken (1986) 0.02
    Date
    4. 8.2006 21:35:22
  12. García Barrios, V.M.: Informationsaufbereitung und Wissensorganisation in transnationalen Konzernen : Konzeption eines Informationssystems für große und geographisch verteilte Unternehmen mit dem Hyperwave Information System (2002) 0.02
    Abstract
    Transnational corporations have an urgent need for a comprehensive solution for their intranet systems. The specific requirements for a knowledge-based information system are manifold, but the most critical ones apply generally and arise from the strongly networked and geographically distributed structure of such corporations. Various knowledge disciplines, in particular knowledge management, information management, data management and knowledge organization, attempt to meet specific requirements, often in isolation within the individual discipline and not infrequently in an ineffective way. This thesis therefore pursues a holistic approach across the knowledge disciplines in order to do justice to the extensive requirements. In the analysis part of the thesis, the problem is examined from the perspective of the most important knowledge disciplines involved in order to identify existing or established solution approaches. The specific areas in which the disciplines influence intranet solutions are reviewed and set against critical aspects of the requirements (for example 'strong geographical distribution vs. system transparency', 'replication measures vs. system performance' or 'semantic knowledge models vs. needs-oriented knowledge access'). Each discipline offers efficient and effective solutions for individual aspects, but no comprehensive design model uniting the specific solution approaches of the disciplines could be identified in the course of the literature research. For this reason, the design part of the thesis presents a two-part technical design model, consisting of a strategic analysis schema and a functional component schema, which takes the areas of influence of the knowledge disciplines mentioned above into account.
Based on the concrete requirement of an intranet solution for a transnational corporation, presented here in anonymized form, the proposed model is applied, and the technical implementation of a knowledge-based information system on the basis of the Hyperwave Information Server is shown; two of its modules are described in more detail as examples.
    Content
    Also available at: http://www2.iicm.edu/cguetl/education/thesis/vgarcia/index.html.
  13. Schneider, A.: ¬Die Verzeichnung und sachliche Erschließung der Belletristik in Kaysers Bücherlexikon und im Schlagwortkatalog Georg/Ost (1980) 0.02
    Date
    5. 8.2006 13:07:22
  14. Hoffmann, R.: Entwicklung einer benutzerunterstützten automatisierten Klassifikation von Web - Dokumenten : Untersuchung gegenwärtiger Methoden zur automatisierten Dokumentklassifikation und Implementierung eines Prototyps zum verbesserten Information Retrieval für das xFIND System (2002) 0.02
    Abstract
    The unmanageable and constantly growing supply of information on the Internet makes it impossible for people to grasp its content or to search for information in a targeted way. One approach to improving information discovery is to categorize or classify information on the basis of its thematic content. Such thematic classification can be performed both by manual (intellectual) methods and by automated procedures, but to date neither approach on its own has been able to meet the expectations placed on it. This thesis therefore investigates the obvious next step of combining the two methods in a meaningful way. The first part of the thesis, the analysis section, begins by explaining the problem of information overload in our society and shows that categorizing or classifying this information appears particularly useful on the Internet. The principal ways of assigning topics to documents in order to improve knowledge management and knowledge discovery are described; among other things, various classification schemes, topic maps and semantic networks are presented. The focus of the analysis section is a description of automated methods for topic assignment. In addition to an overview of the most common classification algorithms, commercial systems as well as research approaches and freely available modules for automatic classification are presented, including systems that at least partially support the combination of manual and automatic methods mentioned above. The problems that arise in connection with the classification of documents on the Internet are also pointed out.
The findings from the analysis section feed into the development of a module for user-supported automatic document classification within the xFIND system (extended Framework for Information Discovery). This framework, designed at Graz University of Technology, forms the basis for a variety of new ideas for improving information retrieval. The solution approach developed in the design section first uses manually classified documents, servers or server areas already present in the system as the basis for automatic classification. After automatic classification, authors and administrators can adjust the results within a user-support component. Collective user behaviour can exert influence through a voting mechanism, approving or rejecting the classification results, so that the knowledge of domain experts and users ultimately contributes to improving the automatic classification. The design section describes the basic concepts, structure and operation of the developed module and presents a number of suggestions and ideas for the further development of user-supported automatic document classification.
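The user-supported correction step described in the abstract above, automatic classification adjusted by approval/rejection votes, could be sketched roughly as follows; the class labels, vote weighting, and all names below are invented for illustration and are not taken from the thesis:

```python
from collections import defaultdict

class VotingClassifier:
    """Wrap the scores of an automatic classifier and let user votes
    (approve/reject) shift the confidence of its class assignments."""

    def __init__(self, base_scores, vote_weight=0.1):
        # base_scores: {doc_id: {class_label: score in [0, 1]}}
        self.scores = {d: dict(c) for d, c in base_scores.items()}
        self.vote_weight = vote_weight
        self.votes = defaultdict(int)

    def vote(self, doc_id, label, approve):
        """Record one user's approval (+1) or rejection (-1) of a label."""
        self.votes[(doc_id, label)] += 1 if approve else -1

    def adjusted_score(self, doc_id, label):
        """Automatic score plus the accumulated, weighted vote balance."""
        base = self.scores[doc_id].get(label, 0.0)
        return base + self.vote_weight * self.votes[(doc_id, label)]

    def best_label(self, doc_id):
        """Label with the highest vote-adjusted score for a document."""
        return max(self.scores[doc_id],
                   key=lambda l: self.adjusted_score(doc_id, l))
```

In this sketch, repeated rejections gradually demote an automatically assigned class until an alternative label wins, which is one simple way expert and user feedback can feed back into the classifier.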
    Content
    Also available at: http://www2.iicm.edu/cguetl/education/thesis/rhoff
  15. Stanz, G.: Medienarchive: Analyse einer unterschätzten Ressource : Archivierung, Dokumentation, und Informationsvermittlung in Medien bei besonderer Berücksichtigung von Pressearchiven (1994) 0.02
    Date
    22. 2.1997 19:50:29
  16. Hartwieg, U.: ¬Die nationalbibliographische Situation im 18. Jahrhundert : Vorüberlegungen zur Verzeichnung der deutschen Drucke in einem VD18 (1999) 0.02
    Date
    18. 6.1999 9:22:36
  17. Gordon, T.J.; Helmer-Hirschberg, O.: Report on a long-range forecasting study (1964) 0.01
    Abstract
    Description of an experimental trend-predicting exercise covering a time period as far as 50 years into the future. The Delphi technique is used in soliciting the opinions of experts in six areas: scientific breakthroughs, population growth, automation, space progress, probability and prevention of war, and future weapon systems. Possible objections to the approach are also discussed.
    Date
    22. 6.2018 13:24:08
    22. 6.2018 13:54:52
  18. Hannech, A.: Système de recherche d'information étendue basé sur une projection multi-espaces (2018) 0.01
    Abstract
    Since its appearance in the early 1990s, the World Wide Web (WWW, or Web) has provided universal access to knowledge, and the world of information has witnessed a major revolution (the digital revolution). The Web quickly became very popular, and the amount and diversity of the data it contains have made it the largest and most comprehensive database and knowledge base. However, the considerable growth and evolution of these data raise important problems for users, in particular in accessing the documents most relevant to their search queries. To cope with this exponential explosion of data volume and to facilitate access, information retrieval systems (IRSs) offer various models for representing and retrieving web documents. Traditional IRSs index and retrieve these documents with simple keywords that are not semantically linked, which limits the relevance of the results and the ease of exploring them. To overcome these limitations, existing techniques enrich documents by integrating external keywords from different sources. However, such systems still suffer from limitations related to the way these enrichment sources are exploited. When the different sources are used in a way that the system cannot distinguish, the flexibility of the exploration models that can be applied to the returned results is limited. Users then feel lost in these results and find themselves forced to filter them manually to select the relevant information. If they want to go further, they must reformulate their search queries and target them ever more precisely until they reach the documents that best meet their expectations. Thus, even if such systems manage to find more relevant results, the presentation of those results remains problematic.
In order to target retrieval toward more user-specific information needs and to improve the relevance and exploration of its results, advanced IRSs adopt various data-personalization techniques, which assume that a user's current search is directly related to his profile and/or his previous browsing and search experience.
    However, this assumption does not hold in all cases: the user's needs evolve over time and can move away from the previous interests stored in his profile. In other cases, the user's profile may be misused to extract or infer new information needs. This problem is much more pronounced with ambiguous queries: when multiple POIs linked to a search query are identified in the user's profile, the system is unable to select the relevant data from that profile to respond to the request, which has a direct impact on the quality of the results provided to the user. To overcome some of these limitations, this thesis develops techniques aimed mainly at improving the relevance of the results of current IRSs and at facilitating the exploration of large document collections. To this end, we propose a solution based on a new indexing and information retrieval concept and model called multi-space projection. The proposal is based on the exploitation of different categories of semantic and social information, which enrich the universe of document representation and search queries with several dimensions of interpretation. The originality of this representation lies in its ability to distinguish between the different interpretations used to describe and search for documents. This gives better visibility into the returned results and provides greater flexibility in search and exploration, letting the user navigate the view or views of the data that interest him most. In addition, the proposed multidimensional universes for document description and query interpretation help to improve the relevance of the user's results by offering a diversity of search and exploration paths that can meet his varied needs and those of other users.
    This study exploits different aspects of personalized search and aims to solve the problems caused by the evolution of the user's information needs. When the user's profile is used by our system, a technique is proposed to identify the interests in the profile that are most representative of his current needs. This technique combines three influential factors: the contextual, frequency and temporal factors of the data. The ability of users to interact, exchange ideas and opinions, and form social networks on the Web has also led systems to focus on the interactions between users and on their social roles in the system. This social information is discussed and integrated into this work, and its impact and the way it is integrated into the IR process are studied in order to improve the relevance of the results.
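The three-factor interest weighting described above (contextual, frequency and temporal) might be sketched as a simple linear combination; the field names, weights and sample values below are invented assumptions for illustration, not the thesis's actual model:

```python
from dataclasses import dataclass

@dataclass
class Interest:
    topic: str
    context_sim: float   # similarity to the current query context, in [0, 1]
    frequency: float     # normalized frequency of the topic in the profile
    recency: float       # temporal decay: 1.0 = just used, toward 0 as it ages

def score(interest, w_ctx=0.5, w_freq=0.3, w_time=0.2):
    """Combine the three influence factors into a single interest weight."""
    return (w_ctx * interest.context_sim
            + w_freq * interest.frequency
            + w_time * interest.recency)

def most_representative(profile, k=3):
    """Pick the k interests that best represent the user's current needs."""
    return sorted(profile, key=score, reverse=True)[:k]
```

The contextual weight dominates here so that a frequently used but currently irrelevant interest (high frequency, low context similarity) does not crowd out the user's present need, which is exactly the profile-drift problem the abstract describes.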
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
  19. Rautenberg, M.: Konzeption eines Internetangebots für Kinder über Buchautoren und -autorinnen im Rahmen der europäischen virtuellen Kinderbibliothek CHILIAS (1997) 0.01
    Abstract
    Under the project name CHILIAS, a virtual simulation of a children's library on the Internet is being created in the years 1996-1998 in cooperation between various European partners, against the background of the EU funding programme 'Telematics for Libraries'. A concept for an Internet offering for children about authors and children's literature is developed. The concept is made concrete and illustrated by the exemplary design of a web page about the author Astrid Lindgren. A visual implementation of the concept in HTML format is enclosed on diskette.
    Content
    Enclosed: 'Die Autorengalerie', demo version in HTML
    Date
    22. 7.1998 18:00:49
  20. Hoffmann, R.: Mailinglisten für den bibliothekarischen Informationsdienst am Beispiel von RABE (2000) 0.01
    Abstract
    What qualitative improvements can mailing lists offer for library reference services? Following an overview of the tasks and forms of library information services and an introduction to mailing lists for reference work, this thesis describes and analyses the mailing list 'RABE' (Recherche und Auskunft in bibliothekarischen Einrichtungen), founded in July as the German-language counterpart to Stumpers-L for reference librarians. Two empirical studies are drawn on: a survey conducted among the list members in March 1999 and an analysis of the list contributions stored in RABE's WWW archive up to February 1999. Among other things, the institutional and geographical origin of the list members, their usage behaviour (activity profiles), their experiences with RABE and the information sources used to answer reference questions were examined. The thesis concludes with an assessment of RABE as an instrument in library reference services.
    Date
    22. 2.2000 10:25:05
    Footnote
    [Diploma thesis in the Public Librarianship (Öffentliches Bibliothekswesen) degree programme, 1999]
    Series
    Kölner Arbeitspapiere zur Bibliotheks- und Informationswissenschaft; Bd.22

Languages

  • d 386
  • e 43
  • a 1
  • f 1
  • hu 1
  • pt 1

Types

  • el 29
  • m 16
  • r 2
  • a 1
