Search (69 results, page 1 of 4)

  • Filter: type_ss:"x"
  1. Verwer, K.: Freiheit und Verantwortung bei Hans Jonas (2011) 0.23
     Score detail (Lucene ClassicSimilarity): 0.22900586 = coord(2/5) × [coord(1/3) × 0.4294 + 0.4294] for terms _text_:3a and _text_:2f in doc 973; each term weight = queryWeight 0.3820 (idf 8.478 × queryNorm 0.04506) × fieldWeight 1.1240 (tf 1.414 at freq 2 × idf 8.478 × fieldNorm 0.09375).
    
    Content
     Cf.: http://creativechoice.org/doc/HansJonas.pdf.
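     The score details in this list are Lucene "explain" trees for ClassicSimilarity (tf-idf): score = coord(q,d) × Σ over matching terms of queryWeight(t) × fieldWeight(t,d), where queryWeight = idf × queryNorm and fieldWeight = √tf × idf × fieldNorm. (The indexed terms "3a" and "2f" are apparently fragments of URL-encoded links, "%3A" and "%2F", in the records.) As a rough cross-check, the sketch below recomputes the first hit's score from the factors shown above; it is a hand-rolled illustration, not Lucene code, and Lucene's float32 arithmetic makes the last digits differ slightly.

       from math import sqrt

       # Factors from the score detail of hit 1 (doc 973): terms "3a" and
       # "2f", each with freq=2, idf=8.478011, queryNorm=0.04505818,
       # fieldNorm=0.09375.
       def term_weight(freq, idf, query_norm, field_norm):
           query_weight = idf * query_norm               # 0.3820 = idf * queryNorm
           field_weight = sqrt(freq) * idf * field_norm  # 1.1240 = tf * idf * fieldNorm
           return query_weight * field_weight            # 0.4294

       w = term_weight(freq=2.0, idf=8.478011, query_norm=0.04505818,
                       field_norm=0.09375)

       # "3a" carries an extra coord(1/3); the two-term sum is then scaled
       # by coord(2/5) because 2 of the 5 query clauses matched.
       score = (w * (1 / 3) + w) * (2 / 5)
       print(round(score, 8))  # ~0.229, vs. Lucene's 0.22900586
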
  2. Farazi, M.: Faceted lightweight ontologies : a formalization and some experiments (2010) 0.15
     Score detail: 0.15368767 = coord(3/5) × [coord(1/3) × 0.1789 ("3a") + 0.1789 ("2f") + coord(1/2) × 0.0352 ("data")] in doc 4997; "3a"/"2f": tf 1.414, idf 8.478; "data": tf 2.0 at freq 4, idf 3.162; fieldNorm 0.0391 throughout.
    
    Abstract
     While classifications are heavily used to categorize web content, the evolution of the web foresees a more formal structure, the ontology, which can serve this purpose. Ontologies are core artifacts of the Semantic Web that enable machines to use inference rules to conduct automated reasoning on data. Lightweight ontologies bridge the gap between classifications and ontologies. A lightweight ontology (LO) is an ontology representing a backbone taxonomy in which the concept of each child node is more specific than the concept of its parent node. Formal lightweight ontologies can be generated from their informal counterparts. The key applications of formal lightweight ontologies are document classification, semantic search, and data integration. These applications, however, suffer from the following problems: the disambiguation accuracy of the state-of-the-art NLP tools used in generating formal lightweight ontologies from informal ones; the lack of background knowledge needed for the formal lightweight ontologies; and the limitation of ontology reuse. In this dissertation, we propose a novel solution to these problems in formal lightweight ontologies, namely the faceted lightweight ontology (FLO). A FLO is a lightweight ontology in which the terms present in each node label, and their concepts, are available in the background knowledge (BK), which is organized as a set of facets. A facet can be defined as a distinctive property of a group of concepts that helps to differentiate one group from another. Background knowledge can be defined as a subset of a knowledge base, such as WordNet, and often represents a specific domain.
    Content
     PhD dissertation at the International Doctorate School in Information and Communication Technology. Cf.: https://core.ac.uk/download/pdf/150083013.pdf.
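     As a toy illustration of the backbone-taxonomy idea in the abstract above (each child node's concept is more specific than its parent's), here is a minimal sketch; the class and the concept identifiers are invented for illustration and are not taken from the dissertation.

       from dataclasses import dataclass, field
       from typing import Optional

       @dataclass
       class LONode:
           """Node of a lightweight ontology: a label plus a formal concept
           drawn from background knowledge (WordNet-style sense IDs here)."""
           label: str
           concept: str
           parent: Optional["LONode"] = None
           children: list = field(default_factory=list)

           def add_child(self, child: "LONode") -> "LONode":
               child.parent = self
               self.children.append(child)
               return child

           def ancestors(self):
               node = self.parent
               while node:
                   yield node
                   node = node.parent

       root = LONode("publications", "publication#n#1")
       theses = root.add_child(LONode("theses", "thesis#n#1"))
       phd = theses.add_child(LONode("PhD theses", "phd_thesis#n#1"))

       # A document filed under "PhD theses" is implicitly also a thesis
       # and a publication: the subsumption the formalization relies on.
       print([n.label for n in phd.ancestors()])  # ['theses', 'publications']
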
  3. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.15
     Score detail: 0.15059501 = coord(3/5) × [coord(1/3) × 0.1789 ("3a") + 0.1789 ("2f") + coord(1/2) × 0.0249 ("data")] in doc 1000; "3a"/"2f": tf 1.414, idf 8.478; "data": tf 1.414, idf 3.162; fieldNorm 0.0391.
    
    Abstract
     Presented here is the construction of a thematically ordered thesaurus based on the subject headings of the Gemeinsame Normdatei (GND), using the DDC notations they contain. The DDC subject groups of the Deutsche Nationalbibliothek form the top level of the thesaurus. The thesaurus is constructed rule-based, applying Linked Data principles in a SPARQL processor, and serves the automated extraction of metadata from scholarly publications by means of a computational-linguistic extractor operating on digital full texts. The extractor identifies subject headings by comparing character strings against the terms in the thesaurus, ranks the hits by their relevance in the text, and returns the assigned subject groups in ranked order. The underlying assumption is that the sought subject group appears among the top ranks. The performance of the approach is validated in a three-stage procedure. First, a gold standard is compiled from documents retrievable in the DNB online catalogue, drawing on metadata and the findings of a brief inspection; the documents are distributed over 14 of the subject groups, with 50 documents per group. All documents are then processed with the extractor and the categorization results are recorded. Finally, the resulting retrieval performance is assessed both for a hard (binary) categorization and for a ranked return of the subject groups.
    Content
     Master's thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. See also the accompanying presentation at: https://wiki.dnb.de/download/attachments/252121510/DA3%20Workshop-Gabler.pdf.
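     The abstract above describes a rule-based construction over the GND using Linked Data principles in a SPARQL processor. Below is a minimal sketch of what the selection step (subject headings that carry a DDC notation) could look like; the endpoint URL is hypothetical and the class/property IRIs from the GND ontology are assumptions, since the thesis's actual queries are not reproduced here.

       from SPARQLWrapper import SPARQLWrapper, JSON  # pip install sparqlwrapper

       QUERY = """
       PREFIX gnd: <https://d-nb.info/standards/elementset/gnd#>
       SELECT ?heading ?label ?ddc WHERE {
         ?heading a gnd:SubjectHeadingSensoStricto ;
                  gnd:preferredNameForTheSubjectHeading ?label ;
                  gnd:relatedDdcWithDegreeOfDeterminacy3 ?ddc .
       } LIMIT 100
       """

       sparql = SPARQLWrapper("http://example.org/gnd/sparql")  # hypothetical endpoint
       sparql.setQuery(QUERY)
       sparql.setReturnFormat(JSON)
       for row in sparql.query().convert()["results"]["bindings"]:
           print(row["label"]["value"], "->", row["ddc"]["value"])
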
  4. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.11
     Score detail: 0.11247476 = coord(2/5) × [0.2147 ("2f", idf 8.478) + 0.0299 ("data", idf 3.162) + 0.0366 ("22", idf 3.502)] in doc 563; tf 1.414, fieldNorm 0.0469.
    
    Abstract
    In this thesis we propose three new word association measures for multi-word term extraction. We combine these association measures with LocalMaxs algorithm in our extraction model and compare the results of different multi-word term extraction methods. Our approach is language and domain independent and requires no training data. It can be applied to such tasks as text summarization, information retrieval, and document classification. We further explore the potential of using multi-word terms as an effective representation for general web-page summarization. We extract multi-word terms from human written summaries in a large collection of web-pages, and generate the summaries by aligning document words with these multi-word terms. Our system applies machine translation technology to learn the aligning process from a training set and focuses on selecting high quality multi-word terms from human written summaries to generate suitable results for web-page summarization.
    Content
     A thesis presented to the University of Guelph in partial fulfilment of the requirements for the degree of Master of Science in Computer Science. Cf.: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10. 1.2013 19:22:47
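     The LocalMaxs selection that the abstract above builds on keeps an n-gram as a multi-word term when its association score ("glue") is a local maximum relative to its (n-1)-gram parts and the (n+1)-grams containing it. The sketch below is a simplified variant with a generic SCP-style glue, not one of the thesis's three new association measures.

       from collections import Counter

       def ngrams(tokens, n):
           return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

       def glues(tokens, max_n=4):
           """SCP-style glue for every n-gram (n >= 2), from crude
           relative frequencies over a single token stream."""
           counts = Counter()
           for n in range(1, max_n + 1):
               counts.update(ngrams(tokens, n))
           total = len(tokens)
           p = lambda g: counts[g] / total
           glue = {}
           for g in counts:
               if len(g) < 2:
                   continue
               # p(g)^2 over the average probability of its binary splits
               avg = sum(p(g[:i]) * p(g[i:]) for i in range(1, len(g))) / (len(g) - 1)
               glue[g] = p(g) ** 2 / avg
           return glue

       def local_maxs(glue):
           """Keep n-grams whose glue beats their parts and containers."""
           selected = []
           for g, score in glue.items():
               subs = [s for s in (g[:-1], g[1:]) if len(s) > 1]
               supers = [h for h in glue
                         if len(h) == len(g) + 1 and (h[:-1] == g or h[1:] == g)]
               if all(score > glue[s] for s in subs) and \
                  all(score >= glue[h] for h in supers):
                   selected.append(g)
           return selected

       text = ("information retrieval systems rank documents ; "
               "information retrieval models score documents").split()
       print(local_maxs(glues(text)))  # ('information', 'retrieval') is among the hits
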
  5. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.10
     Score detail: 0.100049615 = coord(2/5) × [coord(1/3) × 0.1431 ("3a", tf 1.414) + 0.2024 ("2f", tf 2.0 at freq 4)] in doc 5820; idf 8.478, fieldNorm 0.03125.
    
    Content
     Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  6. Shala, E.: Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.10
     Score detail: 0.09541912 = coord(2/5) × [coord(1/3) × 0.1789 ("3a") + 0.1789 ("2f")] in doc 4388; tf 1.414, idf 8.478, fieldNorm 0.0391.
    
    Footnote
     Cf.: https://www.researchgate.net/publication/271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls.
  7. Piros, A.: Az ETO-jelzetek automatikus interpretálásának és elemzésének kérdései (2018) 0.10
     Score detail: 0.09541912 = coord(2/5) × [coord(1/3) × 0.1789 ("3a") + 0.1789 ("2f")] in doc 855; tf 1.414, idf 8.478, fieldNorm 0.0391.
    
    Content
     See also: New automatic interpreter for complex UDC numbers, at: https://udcc.org/files/AttilaPiros_EC_36-37_2014-2015.pdf.
  8. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.08
     Score detail: 0.07633529 = coord(2/5) × [coord(1/3) × 0.1431 ("3a") + 0.1431 ("2f")] in doc 701; tf 1.414, idf 8.478, fieldNorm 0.03125.
    
    Content
     Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  9. Richter, S.: Die formale Beschreibung von Dokumenten in Archiven und Bibliotheken : Perspektiven des Datenaustauschs (2004) 0.04
     Score detail: 0.042138997 = coord(2/5) × [0.0752 ("readable", idf 6.144) + 0.0302 ("bibliographic", idf 3.893)] in doc 4982; tf 1.414, fieldNorm 0.03125.
    
    Abstract
     Data retrieval and access to information have been made easier in recent years by services on the Internet. While the library sector has gathered decades of experience with data exchange through cooperative cataloguing, cooperative data management between archives has only begun in recent years. This thesis examines to what extent data from archives and libraries can be offered in shared data pools: are the contents of the various data categories similar enough to be merged? Which standards underlie the data? To answer these questions, the various rule sets for archival description and bibliographic description are examined first, followed by the exchange formats based on them. The following works and rule sets are included in the analysis: Papritz: Die archivische Titelaufnahme bei Sachakten; the Ordnungs- und Verzeichnungsgrundsätze für die staatlichen Archive der Deutschen Demokratischen Republik (OVG-DDR); the General International Standard Archival Description (ISAD(G)); the Handbuch für Wirtschaftsarchive; Praktische Archivkunde; the Regeln für die alphabetische Katalogisierung in wissenschaftlichen Bibliotheken (RAK-WB); the Anglo-American Cataloguing Rules (AACR); the General International Standard Bibliographic Description (ISBD(G)); and, as the interface between archives and libraries in the description of personal papers, the Ordnungs- und Verzeichnungsgrundsätze [of the Goethe- und Schiller-Archiv] (OVG-GSA), König: Verwaltung und wissenschaftliche Erschließung von Nachlässen in Literaturarchiven, and the Regeln zur Erschließung von Nachlässen und Autographen (RNA). Of the data exchange formats, Encoded Archival Description (EAD), Maschinelles Austauschformat für Bibliotheken (MAB) and Machine Readable Cataloguing (MARC) are presented. The analysis shows that data from archives and libraries can be made available in a common data pool for cross-domain searching. It must be conceded, however, that an exchange format cannot use identical category numbers for similar description elements, since the contents of those categories differ too strongly. For this reason the MAB format cannot simply be reused for archival elements: either the existing MAB schema would have to be adapted to the needs of archives, or a new exchange format would have to be created, since the international EAD format likewise cannot be mapped onto the German descriptive tradition without changes. Above all, a deeper discussion of binding rule sets and exchange formats is to be recommended, both within the archive and library sectors and beyond them.
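     The abstract's conclusion, that similar description elements sit under different category numbers or element names in each format, is easy to picture as a field crosswalk. The sketch below uses a few commonly cited correspondences for illustration; the thesis's own comparison is far more detailed, and the mappings here are not taken from it.

       # A toy crosswalk between exchange formats: the same description
       # element lives under different codes, so records cannot be merged
       # by field number alone. Mappings are common textbook examples.
       CROSSWALK = {
           "title":   {"MARC": "245 $a", "MAB": "331", "EAD": "<unittitle>"},
           "creator": {"MARC": "100 $a", "MAB": "100", "EAD": "<origination>"},
           "date":    {"MARC": "260 $c", "MAB": "425", "EAD": "<unitdate>"},
       }

       def translate(code, source, target):
           """Find the target-format equivalent of a source-format code."""
           for element, codes in CROSSWALK.items():
               if codes.get(source) == code:
                   return element, codes.get(target)
           return None, None

       print(translate("245 $a", "MARC", "MAB"))  # ('title', '331')
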
  10. Sebastian, Y.: Literature-based discovery by learning heterogeneous bibliographic information networks (2017) 0.03
     Score detail: 0.030975739 = coord(2/5) × [0.0675 ("bibliographic", tf 3.162 at freq 10, idf 3.893) + coord(1/2) × 0.0199 ("data", tf 1.414, idf 3.162)] in doc 535; fieldNorm 0.03125.
    
    Abstract
     Literature-based discovery (LBD) research aims at finding effective computational methods for predicting previously unknown connections between clusters of research papers from disparate research areas. Existing methods encompass two general approaches. The first approach searches for these unknown connections by examining the textual contents of research papers. In addition to the existing textual features, the second approach incorporates structural features of the scientific literature, such as citation structures. These approaches, however, have not considered research papers' latent bibliographic metadata structures as important features that can be used for predicting previously unknown relationships between them. This thesis investigates a new graph-based LBD method that exploits the latent bibliographic metadata connections between pairs of research papers. The heterogeneous bibliographic information network is proposed as an efficient graph-based data structure for modeling the complex relationships between these metadata. In contrast to previous approaches, this method seamlessly combines textual and citation information in the form of path-based metadata features for predicting future co-citation links between research papers from disparate research fields. The results reported in this thesis provide evidence that the method is effective for reconstructing historical literature-based discovery hypotheses. This thesis also investigates the effects of semantic modeling and topic modeling on the performance of the proposed method. For semantic modeling, a general-purpose word sense disambiguation technique is proposed to reduce the lexical ambiguity in the title and abstract of research papers. The experimental results suggest that the reduced lexical ambiguity did not necessarily lead to a better performance of the method. This thesis discusses some of the possible contributing factors to these results. Finally, topic modeling is used for learning the latent topical relations between research papers. The learned topic model is incorporated into the heterogeneous bibliographic information network graph and allows new predictive features to be learned. The results in this thesis suggest that topic modeling improves the performance of the proposed method by increasing the overall accuracy for predicting the future co-citation links between disparate research papers.
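     The heterogeneous bibliographic information network described above can be pictured as a typed graph in which counts of meta-path instances between two papers serve as link-prediction features. The toy sketch below uses invented node and edge types, not the thesis's actual schema.

       import networkx as nx

       # Toy heterogeneous bibliographic network: papers, an author, a term.
       G = nx.Graph()
       G.add_nodes_from(["p1", "p2", "p3"], kind="paper")
       G.add_nodes_from(["a1"], kind="author")
       G.add_nodes_from(["t1"], kind="term")
       G.add_edges_from([("p1", "a1"), ("p2", "a1")], kind="writes")
       G.add_edges_from([("p1", "t1"), ("p2", "t1"), ("p3", "t1")], kind="mentions")
       G.add_edges_from([("p1", "p3")], kind="cites")

       def metapath_count(g, src, dst, intermediate_kinds):
           """Count paths src -> dst whose intermediate node types follow
           `intermediate_kinds`, e.g. ("author",) for paper-author-paper."""
           def walk(node, remaining):
               if not remaining:
                   return 1 if node == dst else 0
               return sum(walk(nbr, remaining[1:])
                          for nbr in g.neighbors(node)
                          if g.nodes[nbr]["kind"] == remaining[0])
           return walk(src, tuple(intermediate_kinds) + ("paper",))

       # Feature vector for the candidate pair (p1, p2): one count per meta-path.
       features = {
           "P-A-P": metapath_count(G, "p1", "p2", ["author"]),
           "P-T-P": metapath_count(G, "p1", "p2", ["term"]),
       }
       print(features)  # {'P-A-P': 1, 'P-T-P': 1}
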
  11. Makewita, S.M.: Investigating the generic information-seeking function of organisational decision-makers : perspectives on improving organisational information systems (2002) 0.02
     Score detail: 0.01723492 = coord(1/5) × [0.0557 ("data", tf 3.162 at freq 10, idf 3.162) + 0.0305 ("22", tf 1.414, idf 3.502)] in doc 642; fieldNorm 0.0391.
    
    Abstract
     The past decade has seen the emergence of a new paradigm in the corporate world where organisations emphasised connectivity as a means of exposing decision-makers to wider resources of information within and outside the organisation. Many organisations followed the initiatives of enhancing infrastructures, manipulating cultural shifts and emphasising managerial commitment for creating pools and networks of knowledge. However, the concept of connectivity is not merely to present people with data but, more importantly, to create environments where people can seek information efficiently. This paradigm has therefore caused a shift in the function of information systems in organisations. They now have to be assessed in relation to how they underpin people's information-seeking activities within the context of their organisational environment. This research project used interpretative research methods to investigate the nature of people's information-seeking activities at two culturally contrasting organisations. Outcomes of this research project provide insights into phenomena associated with people's information-seeking function, and show how they depend on the organisational context that is defined partly by information systems. It suggests that information-seeking is not just searching for data. The inefficiencies inherent in both people and their environments can bring opaqueness into people's data, which they need to avoid or eliminate as part of seeking information. This seems to have made information-seeking a two-tier process consisting of a primary process of searching and interpreting data and an auxiliary process of avoiding and eliminating opaqueness in data. Based on this view, this research suggests that organisational information systems operate naturally as implicit dual-mechanisms to underpin the above two-tier process, and that improvements to information systems should concern maintaining the balance in these dual-mechanisms.
    Date
    22. 7.2022 12:16:58
  12. Ruther, D.: Möglichkeit zur Realisierung des FRBR-Modells im Rahmen des relationalen Datenbankmodells (2015) 0.01
     Score detail: 0.012071946 = coord(1/5) × 0.0604 ("bibliographic") in doc 1747; tf 1.414, idf 3.893, fieldNorm 0.0625.
    
    Abstract
    "Functional Requirements for Bibliographic Records" bezeichnet ein Datenmodell, welches es ermöglicht bibliographische Datensätze hierarchisch darzustellen. Dazu werden Entitäten definiert, welche untereinander in Verbindung stehen und so die katalogisierten Medien beschreiben. In dieser Arbeit wird das FRBR-Modell in Form einer relationalen Datenbank realisiert. Dazu wird das Programm SQL-Server 2014 genutzt, um es später mit dem linearen Datenbanksystem "Midos6" in Hinblick auf Datenmodulation und daraus resultierende Darstellungsmöglichkeiten zu vergleichen.
  13. Höllstin, A.: Bibliotheks- und Informationskompetenz (Bibliographic Instruction und Information Literacy) : Fallstudie über eine amerikanische Universitätsbibliothek basierend auf theoretischen Grundlagen und praktischen Anleitungen (Workbooks) (1997) 0.01
     Score detail: 0.010562953 = coord(1/5) × 0.0528 ("bibliographic") in doc 1485; tf 1.414, idf 3.893, fieldNorm 0.0547.
    
  14. Wille, J.: Automatisches Klassifizieren bibliographischer Beschreibungsdaten : Vorgehensweise und Ergebnisse (2006) 0.01
     Score detail: 0.010562953 = coord(1/5) × 0.0528 ("bibliographic") in doc 6090; tf 1.414, idf 3.893, fieldNorm 0.0547.
    
    Abstract
     This thesis deals with the practical aspects of the automatic classification of bibliographic description data. The focus is on the concrete procedure, based on COBRA ("Classification Of Bibliographic Records, Automatic"), an open-source program developed specifically for this purpose. The framework conditions and parameters for its use in a library environment are clarified. Finally, classification results are evaluated using social-science data from the SOLIS database as an example.
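     For flavor, the sketch below shows the task in its most generic form, assigning a class to a record from its description text with a bag-of-words classifier. It is not COBRA's actual method, and the training titles and classes are made up.

       from sklearn.feature_extraction.text import CountVectorizer
       from sklearn.naive_bayes import MultinomialNB
       from sklearn.pipeline import make_pipeline

       # Made-up bibliographic descriptions and target classes.
       train_titles = [
           "Soziale Ungleichheit und Bildungschancen",
           "Arbeitsmarktpolitik im Wandel",
           "Einführung in die Quantenmechanik",
           "Festkörperphysik für Fortgeschrittene",
       ]
       train_classes = ["sociology", "sociology", "physics", "physics"]

       clf = make_pipeline(CountVectorizer(), MultinomialNB())
       clf.fit(train_titles, train_classes)
       print(clf.predict(["Soziale Herkunft und Bildung"]))  # ['sociology']
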
  15. Decker, B.: Data Mining in Öffentlichen Bibliotheken (2000) 0.01
     Score detail: 0.009855061 = coord(1/5) × coord(1/2) × 0.0986 ("data") in doc 4782; tf 2.0 at freq 4, idf 3.162, fieldNorm 0.1094.
    
    Theme
    Data Mining
  16. Stünkel, M.: Neuere Methoden der inhaltlichen Erschließung schöner Literatur in öffentlichen Bibliotheken (1986) 0.01
     Score detail: 0.009767618 = coord(1/5) × coord(1/2) × 0.0977 ("22") in doc 5815; tf 1.414, idf 3.502, fieldNorm 0.125.
    
    Date
    4. 8.2006 21:35:22
  17. Reinke, U.: Der Austausch terminologischer Daten (1993) 0.01
     Score detail: 0.008904126 = coord(1/5) × coord(1/2) × 0.0890 ("data") in doc 4608; tf 3.162 at freq 10, idf 3.162, fieldNorm 0.0625.
    
    Abstract
     Diplomarbeit at the University of Saarbrücken covering the following topics: data exchange formats; terminology management systems; terminological databases; the terminological record; data elements; data categories; data fields; hardware- and software-related difficulties for the structure of records; a description of approaches to developing an exchange format for terminological data (MATER, MicroMATER, NTRF, SGML); considerations concerning an SGML-like exchange format; perspectives.
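     A minimal sketch of an exchange record for terminological data, in the spirit of the formats surveyed above; the element names are invented for illustration and follow neither the MATER nor the NTRF specification.

       import xml.etree.ElementTree as ET

       # One terminological record with two language sections.
       entry = ET.Element("termEntry", id="42")
       terms = {"de": "Datenaustauschformat", "en": "data exchange format"}
       for lang, term in terms.items():
           lang_set = ET.SubElement(entry, "langSet", lang=lang)
           ET.SubElement(lang_set, "term").text = term
           ET.SubElement(lang_set, "partOfSpeech").text = "noun"

       ET.indent(entry)  # pretty-print (Python 3.9+)
       print(ET.tostring(entry, encoding="unicode"))
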
  18. Menges, T.: Möglichkeiten und Grenzen der Übertragbarkeit eines Buches auf Hypertext am Beispiel einer französischen Grundgrammatik (Klein; Kleineidam) (1997) 0.01
     Score detail: 0.008546666 = coord(1/5) × coord(1/2) × 0.0855 ("22") in doc 1496; tf 1.414, idf 3.502, fieldNorm 0.1094.
    
    Date
    22. 7.1998 18:23:25
  19. Schneider, A.: ¬Die Verzeichnung und sachliche Erschließung der Belletristik in Kaysers Bücherlexikon und im Schlagwortkatalog Georg/Ost (1980) 0.01
     Score detail: 0.008546666 = coord(1/5) × coord(1/2) × 0.0855 ("22") in doc 5309; tf 1.414, idf 3.502, fieldNorm 0.1094.
    
    Date
    5. 8.2006 13:07:22
  20. Sperling, R.: Anlage von Literaturreferenzen für Onlineressourcen auf einer virtuellen Lernplattform (2004) 0.01
     Score detail: 0.008546666 = coord(1/5) × coord(1/2) × 0.0855 ("22") in doc 4635; tf 1.414, idf 3.502, fieldNorm 0.1094.
    
    Date
    26.11.2005 18:39:22

Languages

  • d 45
  • e 20
  • f 1
  • hu 1
  • pt 1
