Search (276 results, page 1 of 14)

  • Active filter: type_ss:"el"
  1. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.15
    0.15452515 = product of:
      0.38631284 = sum of:
        0.09657821 = product of:
          0.28973463 = sum of:
            0.28973463 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.28973463 = score(doc=1826,freq=2.0), product of:
                0.3093153 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.036484417 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.33333334 = coord(1/3)
        0.28973463 = weight(_text_:2f in 1826) [ClassicSimilarity], result of:
          0.28973463 = score(doc=1826,freq=2.0), product of:
            0.3093153 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.036484417 = queryNorm
            0.93669677 = fieldWeight in 1826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
      0.4 = coord(2/5)
    
    Source
    http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
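The indented breakdown under each hit is Lucene's "explain" output for the ClassicSimilarity (TF-IDF) scoring model. As a reading aid, here is a minimal Python sketch, not part of the original result page, that recombines the factors printed above for hit 1; the constants are copied from the explain tree, and the tf/idf formulas are the standard ClassicSimilarity definitions.

```python
import math

# ClassicSimilarity building blocks (Lucene's classic TF-IDF):
#   tf(freq)        = sqrt(freq)
#   idf(docFreq, N) = 1 + ln(N / (docFreq + 1))
#   queryWeight     = idf * queryNorm
#   fieldWeight     = tf * idf * fieldNorm
#   term score      = queryWeight * fieldWeight

def idf(doc_freq: int, max_docs: int) -> float:
    return 1.0 + math.log(max_docs / (doc_freq + 1))

query_norm = 0.036484417          # queryNorm from the tree above
field_norm = 0.078125             # fieldNorm(doc=1826)
idf_val = idf(24, 44218)          # ≈ 8.478011 for both "3a" and "2f"

query_weight = idf_val * query_norm                   # ≈ 0.3093153
field_weight = math.sqrt(2.0) * idf_val * field_norm  # ≈ 0.93669677
term_score = query_weight * field_weight              # ≈ 0.28973463 per term

# The "3a" clause sits one level deeper and is scaled by coord(1/3);
# the two clause scores are then summed and scaled by coord(2/5).
total = (term_score * (1 / 3) + term_score) * (2 / 5)
print(total)  # ≈ 0.15452515 (Lucene computes in 32-bit floats)
```

The same arithmetic, with different freq, fieldNorm, and coord values, reproduces every score breakdown in this listing.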
  2. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.12
    0.123620115 = product of:
      0.3090503 = sum of:
        0.07726257 = product of:
          0.23178771 = sum of:
            0.23178771 = weight(_text_:3a in 230) [ClassicSimilarity], result of:
              0.23178771 = score(doc=230,freq=2.0), product of:
                0.3093153 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.036484417 = queryNorm
                0.7493574 = fieldWeight in 230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0625 = fieldNorm(doc=230)
          0.33333334 = coord(1/3)
        0.23178771 = weight(_text_:2f in 230) [ClassicSimilarity], result of:
          0.23178771 = score(doc=230,freq=2.0), product of:
            0.3093153 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.036484417 = queryNorm
            0.7493574 = fieldWeight in 230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=230)
      0.4 = coord(2/5)
    
    Source
    https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
  3. Shala, E.: Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.08
    0.07726257 = product of:
      0.19315642 = sum of:
        0.048289105 = product of:
          0.14486732 = sum of:
            0.14486732 = weight(_text_:3a in 4388) [ClassicSimilarity], result of:
              0.14486732 = score(doc=4388,freq=2.0), product of:
                0.3093153 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.036484417 = queryNorm
                0.46834838 = fieldWeight in 4388, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4388)
          0.33333334 = coord(1/3)
        0.14486732 = weight(_text_:2f in 4388) [ClassicSimilarity], result of:
          0.14486732 = score(doc=4388,freq=2.0), product of:
            0.3093153 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.036484417 = queryNorm
            0.46834838 = fieldWeight in 4388, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4388)
      0.4 = coord(2/5)
    
    Footnote
    See: https://www.researchgate.net/publication/271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls.
  4. Open Knowledge Foundation: Prinzipien zu offenen bibliographischen Daten (2011) 0.03
    0.027474387 = product of:
      0.13737193 = sum of:
        0.13737193 = sum of:
          0.10241869 = weight(_text_:etc in 4399) [ClassicSimilarity], result of:
            0.10241869 = score(doc=4399,freq=6.0), product of:
              0.19761753 = queryWeight, product of:
                5.4164915 = idf(docFreq=533, maxDocs=44218)
                0.036484417 = queryNorm
              0.5182672 = fieldWeight in 4399, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                5.4164915 = idf(docFreq=533, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4399)
          0.03495324 = weight(_text_:22 in 4399) [ClassicSimilarity], result of:
            0.03495324 = score(doc=4399,freq=4.0), product of:
              0.12776221 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.036484417 = queryNorm
              0.27358043 = fieldWeight in 4399, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4399)
      0.2 = coord(1/5)
    
    Content
    "Bibliographische Daten Um den Geltungsbereich der Prinzipien festzulegen, wird in diesem ersten Teil der zugrundeliegende Begriff bibliographischer Daten erläutert. Kerndaten Bibliographische Daten bestehen aus bibliographischen Beschreibungen. Eine bibliographische Beschreibung beschreibt eine bibliographische Ressource (Artikel, Monographie etc. - ob gedruckt oder elektronisch) zum Zwecke 1. der Identifikation der beschriebenen Ressource, d.h. des Zeigens auf eine bestimmte Ressource in der Gesamtheit aller bibliographischer Ressourcen und 2. der Lokalisierung der beschriebenen Ressource, d.h. eines Hinweises, wo die beschriebene Ressource aufzufinden ist. Traditionellerweise erfüllte eine Beschreibung beide Zwecke gleichzeitig, indem sie Information lieferte über: Autor(en) und Herausgeber, Titel, Verlag, Veröffentlichungsdatum und -ort, Identifizierung des übergeordneten Werks (z.B. einer Zeitschrift), Seitenangaben. Im Web findet Identifikation statt mittels Uniform Resource Identifiers (URIs) wie z.B. URNs oder DOIs. Lokalisierung wird ermöglicht durch HTTP-URIs, die auch als Uniform Resource Locators (URLs) bezeichnet werden. Alle URIs für bibliographische Ressourcen fallen folglich unter den engen Begriff bibliographischer Daten. Sekundäre Daten Eine bibliographische Beschreibung kann andere Informationen enthalten, die unter den Begriff bibliographischer Daten fallen, beispielsweise Nicht-Web-Identifikatoren (ISBN, LCCN, OCLC etc.), Angaben zum Urheberrechtsstatus, administrative Daten und mehr; diese Daten können von Bibliotheken, Verlagen, Wissenschaftlern, Online-Communities für Buchliebhaber, sozialen Literaturverwaltungssystemen und Anderen produziert sein. Darüber hinaus produzieren Bibliotheken und verwandte Institutionen kontrollierte Vokabulare zum Zwecke der bibliographischen Beschreibung wie z. B. Personen- und Schlagwortnormdateien, Klassifikationen etc., die ebenfalls unter den Begriff bibliographischer Daten fallen."
    Date
    22. 3.2011 18:22:29
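The principles above separate identification (which resource is meant: URNs, DOIs, ISBNs) from localization (where a copy can be fetched: HTTP URIs). A hypothetical toy record, sketched in Python with invented field names and values, makes the distinction concrete:

```python
from dataclasses import dataclass, field

@dataclass
class BibliographicDescription:
    title: str
    authors: list[str]
    identifiers: list[str] = field(default_factory=list)  # URNs, DOIs, ISBNs, ...
    locators: list[str] = field(default_factory=list)     # HTTP URIs (URLs)

record = BibliographicDescription(
    title="An Example Monograph",
    authors=["Doe, J."],
    identifiers=["urn:isbn:9780000000000", "doi:10.1000/example"],  # which resource
    locators=["https://example.org/catalog/12345"],                 # where a copy is
)
print(record.identifiers, record.locators)
```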
  5. Understanding metadata (2004) 0.03
    0.026831081 = product of:
      0.13415541 = sum of:
        0.13415541 = sum of:
          0.094610326 = weight(_text_:etc in 2686) [ClassicSimilarity], result of:
            0.094610326 = score(doc=2686,freq=2.0), product of:
              0.19761753 = queryWeight, product of:
                5.4164915 = idf(docFreq=533, maxDocs=44218)
                0.036484417 = queryNorm
              0.47875473 = fieldWeight in 2686, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.4164915 = idf(docFreq=533, maxDocs=44218)
                0.0625 = fieldNorm(doc=2686)
          0.039545078 = weight(_text_:22 in 2686) [ClassicSimilarity], result of:
            0.039545078 = score(doc=2686,freq=2.0), product of:
              0.12776221 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.036484417 = queryNorm
              0.30952093 = fieldWeight in 2686, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=2686)
      0.2 = coord(1/5)
    
    Abstract
    Metadata (structured information about an object or collection of objects) is increasingly important to libraries, archives, and museums. And although librarians are familiar with a number of issues that apply to creating and using metadata (e.g., authority control, controlled vocabularies, etc.), the world of metadata is nonetheless different from library cataloging, with its own set of challenges. Therefore, whether you are new to these concepts or quite experienced with classic cataloging, this short (20-page) introductory paper on metadata can be helpful.
    Date
    10. 9.2004 10:22:40
  6. Schaat, S.: Von der automatisierten Manipulation zur Manipulation der Automatisierung (2019) 0.03
    0.026831081 = product of:
      0.13415541 = sum of:
        0.13415541 = sum of:
          0.094610326 = weight(_text_:etc in 4996) [ClassicSimilarity], result of:
            0.094610326 = score(doc=4996,freq=2.0), product of:
              0.19761753 = queryWeight, product of:
                5.4164915 = idf(docFreq=533, maxDocs=44218)
                0.036484417 = queryNorm
              0.47875473 = fieldWeight in 4996, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.4164915 = idf(docFreq=533, maxDocs=44218)
                0.0625 = fieldNorm(doc=4996)
          0.039545078 = weight(_text_:22 in 4996) [ClassicSimilarity], result of:
            0.039545078 = score(doc=4996,freq=2.0), product of:
              0.12776221 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.036484417 = queryNorm
              0.30952093 = fieldWeight in 4996, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=4996)
      0.2 = coord(1/5)
    
    Content
    "Wir kennen das bereits von Google, Facebook und Amazon: Unser Internet-Verhalten wird automatisch erfasst, damit uns angepasste Inhalte präsentiert werden können. Ob uns diese Inhalte gefallen oder nicht, melden wir direkt oder indirekt zurück (Kauf, Klick etc.). Durch diese Feedbackschleife lernen solche Systeme immer besser, was sie uns präsentieren müssen, um unsere Bedürfnisse anzusprechen, und wissen implizit dadurch auch immer besser, wie sie unsere Bedürfniserfüllung - zur Konsumtion - manipulieren können."
    Date
    19. 2.2019 17:22:00
  7. Foerster, H. von; Müller, A.; Müller, K.H.: Rück- und Vorschauen : Heinz von Foerster im Gespräch mit Albert Müller und Karl H. Müller (2001) 0.02
    0.024243647 = product of:
      0.060609117 = sum of:
        0.01030084 = product of:
          0.02060168 = sum of:
            0.02060168 = weight(_text_:problems in 5988) [ClassicSimilarity], result of:
              0.02060168 = score(doc=5988,freq=2.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.13680777 = fieldWeight in 5988, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=5988)
          0.5 = coord(1/2)
        0.050308276 = sum of:
          0.03547887 = weight(_text_:etc in 5988) [ClassicSimilarity], result of:
            0.03547887 = score(doc=5988,freq=2.0), product of:
              0.19761753 = queryWeight, product of:
                5.4164915 = idf(docFreq=533, maxDocs=44218)
                0.036484417 = queryNorm
              0.17953302 = fieldWeight in 5988, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.4164915 = idf(docFreq=533, maxDocs=44218)
                0.0234375 = fieldNorm(doc=5988)
          0.014829405 = weight(_text_:22 in 5988) [ClassicSimilarity], result of:
            0.014829405 = score(doc=5988,freq=2.0), product of:
              0.12776221 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.036484417 = queryNorm
              0.116070345 = fieldWeight in 5988, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0234375 = fieldNorm(doc=5988)
      0.4 = coord(2/5)
    
    Content
    A few steps further back. Librarians often approached me asking how one should organize a library. We look into a library, they said, as if it were a memory. "That is nice, but do you know how memory works?" "No, but many people say memory works like a large library. You only have to reach in and find the right book." "That is all very lovely, but you know, the people who look for a book look for it only because they have a problem and hope to find in the book the answer to that problem. The book is merely an intermediary between a question and an answer that may perhaps be found in the book. But the book is not the answer." "Aha, how do you picture that?" We should view the problem in such a way that the contents of the books, the semantic structure (if one wants to use this expression again) of those books, sit in a system, so that I can enter that semantic structure with my question, and the semantic structure of the system tells me: then you must read Karl Müller's works on symbols, and then you will know what you are looking for. I would not have known in advance who this Karl Müller is, or that he had written about symbols, etc., but the system can deliver that to me.
    So the person interested in finding such answers does not need to go in indirectly via the Karl Müller he finds on some catalog card; instead, by directly addressing the semantic structure of his problem, he can connect with the semantic structure of the system, which then helps him onward into those areas where he may perhaps find answers to his problems. These were the kinds of ideas we occupied ourselves with, and Paul Weston wrote outstanding papers on this; he saw through the whole matter. The project proposal, which I still have today, for this enormous giant project - it came to several million dollars - was not understood at all. "We don't need that, we have the books, we have the catalog cards." There were difficulties, and my friends rightly reproach me: Heinz, you did not present our case properly, so the people who would have been able to support us financially did not understand what you were talking about. Despite my intensive efforts, in many cases it proved impossible to achieve conviction or understanding. My feeling at the time was that understanding was simply blocked, because certain directions of understanding had already frozen solid. To achieve anything, one would have needed much more time and much more conversation to push an understanding through.
    Date
    10. 9.2006 17:22:54
  8. Guidi, F.; Sacerdoti Coen, C.: A survey on retrieval of mathematical knowledge (2015) 0.02
    0.023620725 = product of:
      0.05905181 = sum of:
        0.034336135 = product of:
          0.06867227 = sum of:
            0.06867227 = weight(_text_:problems in 5865) [ClassicSimilarity], result of:
              0.06867227 = score(doc=5865,freq=2.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.4560259 = fieldWeight in 5865, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5865)
          0.5 = coord(1/2)
        0.024715675 = product of:
          0.04943135 = sum of:
            0.04943135 = weight(_text_:22 in 5865) [ClassicSimilarity], result of:
              0.04943135 = score(doc=5865,freq=2.0), product of:
                0.12776221 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036484417 = queryNorm
                0.38690117 = fieldWeight in 5865, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5865)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    We present a short survey of the literature on indexing and retrieval of mathematical knowledge, with pointers to 72 papers and tentative taxonomies of both retrieval problems and recurring techniques.
    Date
    22. 2.2017 12:51:57
  9. Knorz, G.; Rein, B.: Semantische Suche in einer Hochschulontologie : Ontologie-basiertes Information-Filtering und -Retrieval mit relationalen Datenbanken (2005) 0.02
    0.023477197 = product of:
      0.11738598 = sum of:
        0.11738598 = sum of:
          0.082784034 = weight(_text_:etc in 4324) [ClassicSimilarity], result of:
            0.082784034 = score(doc=4324,freq=2.0), product of:
              0.19761753 = queryWeight, product of:
                5.4164915 = idf(docFreq=533, maxDocs=44218)
                0.036484417 = queryNorm
              0.41891038 = fieldWeight in 4324, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.4164915 = idf(docFreq=533, maxDocs=44218)
                0.0546875 = fieldNorm(doc=4324)
          0.034601945 = weight(_text_:22 in 4324) [ClassicSimilarity], result of:
            0.034601945 = score(doc=4324,freq=2.0), product of:
              0.12776221 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.036484417 = queryNorm
              0.2708308 = fieldWeight in 4324, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=4324)
      0.2 = coord(1/5)
    
    Abstract
    Ontologies are used to provide, through semantic grounding, a fundamentally better basis for document retrieval in particular than the current state of the art offers. We present an ontology developed and deployed at the FH Darmstadt that is meant to cover the subject area of higher education broadly and, at the same time, to describe it semantically in a differentiated way. The problem of semantic search is that it should be as easy for information seekers to use as common search engines, while simultaneously delivering high-quality results on the basis of the elaborate information model. We describe the capabilities provided by the software K-Infinity and the concept with which these capabilities are employed for a semantic search for documents and other information units (people, events, projects, etc.).
    Date
    11. 2.2011 18:22:25
  10. Miller, S.: Introduction to ontology concepts and terminology : DC-2013 Tutorial, September 2, 2013. (2013) 0.02
    0.023067001 = product of:
      0.115335 = sum of:
        0.115335 = product of:
          0.23067 = sum of:
            0.23067 = weight(_text_:exercises in 1075) [ClassicSimilarity], result of:
              0.23067 = score(doc=1075,freq=4.0), product of:
                0.25947425 = queryWeight, product of:
                  7.11192 = idf(docFreq=97, maxDocs=44218)
                  0.036484417 = queryNorm
                0.88899 = fieldWeight in 1075, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  7.11192 = idf(docFreq=97, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1075)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Content
    Tutorial topics and outline:
    1. Tutorial Background Overview: The Semantic Web, Linked Data, and the Resource Description Framework
    2. Ontology Basics and RDFS Tutorial: semantic modeling, domain ontologies, and RDF Vocabulary Description Language (RDFS) concepts and terminology; examples (domain ontologies, models, and schemas); exercises
    3. OWL Overview Tutorial: Web Ontology Language (OWL), selected concepts and terminology; exercises
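To make the RDFS part of the outline concrete, here is a minimal sketch using the rdflib library (an assumption; the tutorial itself does not prescribe a toolkit) with an invented ex: vocabulary:

```python
from rdflib import Graph, Namespace, RDF, RDFS, Literal

EX = Namespace("http://example.org/ex#")
g = Graph()
g.bind("ex", EX)

# A tiny domain ontology: Tutorial is a subclass of Event,
# and hasTopic is a property whose domain is Event.
g.add((EX.Event, RDF.type, RDFS.Class))
g.add((EX.Tutorial, RDF.type, RDFS.Class))
g.add((EX.Tutorial, RDFS.subClassOf, EX.Event))
g.add((EX.hasTopic, RDF.type, RDF.Property))
g.add((EX.hasTopic, RDFS.domain, EX.Event))

# An instance described with the schema.
g.add((EX.dc2013, RDF.type, EX.Tutorial))
g.add((EX.dc2013, EX.hasTopic, Literal("Ontology concepts and terminology")))

print(g.serialize(format="turtle"))  # rdflib 6+ returns a str
```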
  11. Herwijnen, E. van: SGML tutorial (1993) 0.02
    0.020388542 = product of:
      0.1019427 = sum of:
        0.1019427 = product of:
          0.2038854 = sum of:
            0.2038854 = weight(_text_:exercises in 8747) [ClassicSimilarity], result of:
              0.2038854 = score(doc=8747,freq=2.0), product of:
                0.25947425 = queryWeight, product of:
                  7.11192 = idf(docFreq=97, maxDocs=44218)
                  0.036484417 = queryNorm
                0.78576356 = fieldWeight in 8747, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.11192 = idf(docFreq=97, maxDocs=44218)
                  0.078125 = fieldNorm(doc=8747)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    Contains extensive beginner and advanced interactive tutorials and exercises to teach SGML, and uses DynaText software to manage, browse, and search the text, thus demonstrating the features of one of the most widely known programs available for SGML marked-up text.
  12. Dobratz, S.; Neuroth, H.: nestor: Network of Expertise in long-term STOrage of digital Resources : a digital preservation initiative for Germany (2004) 0.02
    0.019986972 = product of:
      0.04996743 = sum of:
        0.01030084 = product of:
          0.02060168 = sum of:
            0.02060168 = weight(_text_:problems in 1195) [ClassicSimilarity], result of:
              0.02060168 = score(doc=1195,freq=2.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.13680777 = fieldWeight in 1195, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1195)
          0.5 = coord(1/2)
        0.03966659 = product of:
          0.07933318 = sum of:
            0.07933318 = weight(_text_:etc in 1195) [ClassicSimilarity], result of:
              0.07933318 = score(doc=1195,freq=10.0), product of:
                0.19761753 = queryWeight, product of:
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.036484417 = queryNorm
                0.40144807 = fieldWeight in 1195, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1195)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Sponsored by the German Ministry of Education and Research with funding of 800,000 euros, the German Network of Expertise in long-term storage of digital resources (nestor) began in June 2003 as a cooperative effort of 6 partners representing different players within the field of long-term preservation. The partners include: * The German National Library (Die Deutsche Bibliothek) as the lead institution for the project * The State and University Library of Lower Saxony Göttingen (Staats- und Universitätsbibliothek Göttingen) * The Computer and Media Service and the University Library of Humboldt-University Berlin (Humboldt-Universität zu Berlin) * The Bavarian State Library in Munich (Bayerische Staatsbibliothek) * The Institute for Museum Information in Berlin (Institut für Museumskunde) * General Directorate of the Bavarian State Archives (GDAB) As in other countries, long-term preservation of digital resources has become an important issue in Germany in recent years. Nevertheless, coming to agreement with institutions throughout the country to cooperate on tasks for a long-term preservation effort has taken a great deal of effort. Although there had been considerable attention paid to the preservation of physical media like CD-ROMs, technologies available for the long-term preservation of digital publications like e-books, digital dissertations, websites, etc., are still lacking. Considering the importance of the task within the federal structure of Germany, with the responsibility of each federal state for its science and culture activities, it is obvious that the approach to a successful solution of these issues in Germany must be a cooperative approach. Since 2000, there have been discussions about strategies and techniques for long-term archiving of digital information, particularly within the distributed structure of Germany's library and archival institutions. A key part of all the previous activities was focusing on using existing standards and analyzing the context in which those standards would be applied. One such activity, the Digital Library Forum Planning Project, was done on behalf of the German Ministry of Education and Research in 2002, where the vision of a digital library in 2010 that can meet the changing and increasing needs of users was developed and described in detail, including the infrastructure required and how the digital library would work technically, what it would contain and how it would be organized. The outcome was a strategic plan for certain selected specialist areas, where, amongst other topics, a future call for action for long-term preservation was defined, described and explained against the background of practical experience.
    As a follow-up, in 2002 the nestor long-term archiving working group provided an initial spark towards planning and organising coordinated activities concerning the long-term preservation and long-term availability of digital documents in Germany. This resulted in a workshop, held 29 - 30 October 2002, where major tasks were discussed. Influenced by the demands and progress of the nestor network, the participants reached agreement to start work on application-oriented projects and to address the following topics:
    * Overlapping problems
      o Collection and preservation of digital objects (selection criteria, preservation policy)
      o Definition of criteria for trusted repositories
      o Creation of models of cooperation, etc.
    * Digital objects production process
      o Analysis of potential conflicts between production and long-term preservation
      o Documentation of existing document models and recommendations for standards models to be used for long-term preservation
      o Identification systems for digital objects, etc.
    * Transfer of digital objects
      o Object data and metadata
      o Transfer protocols and interoperability
      o Handling of different document types, e.g. dynamic publications, etc.
    * Long-term preservation of digital objects
      o Design and prototype implementation of depot systems for digital objects (OAIS was chosen to be the best functional model.)
      o Authenticity
      o Functional requirements on user interfaces of a depot system
      o Identification systems for digital objects, etc.
    At the end of the workshop, participants decided to establish a permanent distributed infrastructure for long-term preservation and long-term accessibility of digital resources in Germany comparable, e.g., to the Digital Preservation Coalition in the UK. The initial phase, nestor, is now being set up by the above-mentioned 3-year funding project.
  13. Chan, L.M.; Zeng, M.L.: Metadata interoperability and standardization - a study of methodology, part I : achieving interoperability at the schema level (2006) 0.02
    0.018693518 = product of:
      0.046733793 = sum of:
        0.017168067 = product of:
          0.034336135 = sum of:
            0.034336135 = weight(_text_:problems in 1176) [ClassicSimilarity], result of:
              0.034336135 = score(doc=1176,freq=2.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.22801295 = fieldWeight in 1176, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1176)
          0.5 = coord(1/2)
        0.029565725 = product of:
          0.05913145 = sum of:
            0.05913145 = weight(_text_:etc in 1176) [ClassicSimilarity], result of:
              0.05913145 = score(doc=1176,freq=2.0), product of:
                0.19761753 = queryWeight, product of:
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.036484417 = queryNorm
                0.2992217 = fieldWeight in 1176, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1176)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The rapid growth of Internet resources and digital collections has been accompanied by a proliferation of metadata schemas, each of which has been designed based on the requirements of particular user communities, intended users, types of materials, subject domains, project needs, etc. Problems arise when building large digital libraries or repositories with metadata records that were prepared according to diverse schemas. This article (published in two parts) contains an analysis of the methods that have been used to achieve or improve interoperability among metadata schemas and applications, for the purposes of facilitating conversion and exchange of metadata and enabling cross-domain metadata harvesting and federated searches. From a methodological point of view, implementing interoperability may be considered at different levels of operation: schema level, record level, and repository level. Part I of the article intends to explain possible situations in which metadata schemas may be created or implemented, whether in individual projects or in integrated repositories. It also discusses approaches used at the schema level. Part II of the article will discuss metadata interoperability efforts at the record and repository levels.
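A classic schema-level device for the conversion and exchange the abstract mentions is the crosswalk: a static mapping from one schema's elements to another's, applied record by record. The sketch below is illustrative only (field names invented, not taken from the article or any concrete standard mapping):

```python
# Source-schema field -> target-schema field (hypothetical crosswalk)
CROSSWALK = {
    "creator": "author",
    "date": "publication_year",
    "identifier": "doi",
}

def convert(record: dict) -> dict:
    """Rename fields per the crosswalk; unmapped fields are dropped."""
    return {target: record[source]
            for source, target in CROSSWALK.items()
            if source in record}

print(convert({"creator": "Chan, L.M.", "date": "2006", "format": "text"}))
# {'author': 'Chan, L.M.', 'publication_year': '2006'}
```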
  14. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2012) 0.02
    0.016629452 = product of:
      0.04157363 = sum of:
        0.02060168 = product of:
          0.04120336 = sum of:
            0.04120336 = weight(_text_:problems in 1967) [ClassicSimilarity], result of:
              0.04120336 = score(doc=1967,freq=2.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.27361554 = fieldWeight in 1967, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1967)
          0.5 = coord(1/2)
        0.020971946 = product of:
          0.041943893 = sum of:
            0.041943893 = weight(_text_:22 in 1967) [ClassicSimilarity], result of:
              0.041943893 = score(doc=1967,freq=4.0), product of:
                0.12776221 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036484417 = queryNorm
                0.32829654 = fieldWeight in 1967, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1967)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This paper reports on the second part of an initiative of the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine if classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The paper discusses the relationships between entities (same type or different types) in the context of classification systems that involve multiple translations and/or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the DDC (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) Chinese Library Classification. The use cases of conceptual models in practice are also discussed.
  15. Bryan, K.; Leise, T.: The $25,000,000,000 eigenvector : the linear algebra behind Google 0.01
    0.014271979 = product of:
      0.071359895 = sum of:
        0.071359895 = product of:
          0.14271979 = sum of:
            0.14271979 = weight(_text_:exercises in 1353) [ClassicSimilarity], result of:
              0.14271979 = score(doc=1353,freq=2.0), product of:
                0.25947425 = queryWeight, product of:
                  7.11192 = idf(docFreq=97, maxDocs=44218)
                  0.036484417 = queryNorm
                0.5500345 = fieldWeight in 1353, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.11192 = idf(docFreq=97, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1353)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    Google's success derives in large part from its PageRank algorithm, which ranks the importance of webpages according to an eigenvector of a weighted link matrix. Analysis of the PageRank formula provides a wonderful applied topic for a linear algebra course. Instructors may assign this article as a project to more advanced students, or spend one or two lectures presenting the material with assigned homework from the exercises. This material also complements the discussion of Markov chains in matrix algebra. Maple and Mathematica files supporting this material can be found at www.rose-hulman.edu/~bryan.
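The abstract's one-line description (the ranking is an eigenvector of a weighted link matrix) can be sketched directly with power iteration. The 4-page link graph and the damping factor 0.85 below are illustrative assumptions, not taken from the article:

```python
import numpy as np

links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}  # page -> pages it links to
n, d = 4, 0.85

# Column-stochastic link matrix: A[j, i] = 1/outdegree(i) if i links to j.
A = np.zeros((n, n))
for i, outs in links.items():
    for j in outs:
        A[j, i] = 1.0 / len(outs)

M = d * A + (1 - d) / n * np.ones((n, n))  # the "Google matrix"
v = np.full(n, 1.0 / n)                    # start from the uniform vector
for _ in range(100):                       # power iteration
    v = M @ v                              # converges to the dominant eigenvector
print(v / v.sum())                         # PageRank scores, summing to 1
```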
  16. Zhang, A.: Multimedia file formats on the Internet : a beginner's guide for PC users (1995) 0.01
    0.014191548 = product of:
      0.07095774 = sum of:
        0.07095774 = product of:
          0.14191549 = sum of:
            0.14191549 = weight(_text_:etc in 3212) [ClassicSimilarity], result of:
              0.14191549 = score(doc=3212,freq=2.0), product of:
                0.19761753 = queryWeight, product of:
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.036484417 = queryNorm
                0.7181321 = fieldWeight in 3212, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3212)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    An overview of the various file formats used on the internet, and of the ways the files can be used (including notes on software, etc.)
  17. Goldberga, A.: Synergy towards shared standards for ALM : Latvian scenario (2008) 0.01
    0.014172435 = product of:
      0.035431087 = sum of:
        0.02060168 = product of:
          0.04120336 = sum of:
            0.04120336 = weight(_text_:problems in 2322) [ClassicSimilarity], result of:
              0.04120336 = score(doc=2322,freq=2.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.27361554 = fieldWeight in 2322, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2322)
          0.5 = coord(1/2)
        0.014829405 = product of:
          0.02965881 = sum of:
            0.02965881 = weight(_text_:22 in 2322) [ClassicSimilarity], result of:
              0.02965881 = score(doc=2322,freq=2.0), product of:
                0.12776221 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036484417 = queryNorm
                0.23214069 = fieldWeight in 2322, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2322)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The report reflects the Latvian scenario of co-operation on standardization among memory institutions. Differences and problems, as well as benefits and possible solutions, tasks, and activities of the Standardization Technical Committee for Archives, Libraries and Museums Work (MABSTK) are analysed. A map of standards as a vision for ALM collaboration in standardization and the "Digitizer's Handbook" (translated into English), prepared by the Competence Centre for Digitization of the National Library of Latvia (NLL), are presented. A shortcut to building the National Digital Library Letonica and its digital architecture (with a pilot project on the Latvian composer Jazeps Vitols and the digital collection of ex-president of Latvia Vaira Vike-Freiberga) reflects the practical co-operation between different players.
    Date
    26.12.2011 13:33:22
  18. Bittner, T.; Donnelly, M.; Winter, S.: Ontology and semantic interoperability (2006) 0.01
    0.014172435 = product of:
      0.035431087 = sum of:
        0.02060168 = product of:
          0.04120336 = sum of:
            0.04120336 = weight(_text_:problems in 4820) [ClassicSimilarity], result of:
              0.04120336 = score(doc=4820,freq=2.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.27361554 = fieldWeight in 4820, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4820)
          0.5 = coord(1/2)
        0.014829405 = product of:
          0.02965881 = sum of:
            0.02965881 = weight(_text_:22 in 4820) [ClassicSimilarity], result of:
              0.02965881 = score(doc=4820,freq=2.0), product of:
                0.12776221 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036484417 = queryNorm
                0.23214069 = fieldWeight in 4820, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4820)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    One of the major problems facing systems for Computer Aided Design (CAD), Architecture Engineering and Construction (AEC) and Geographic Information Systems (GIS) applications today is the lack of interoperability among the various systems. When integrating software applications, substantial difficulties can arise in translating information from one application to the other. In this paper, we focus on semantic difficulties that arise in software integration. Applications may use different terminologies to describe the same domain. Even when applications use the same terminology, they often associate different semantics with the terms. This obstructs information exchange among applications. To circumvent this obstacle, we need some way of explicitly specifying the semantics for each terminology in an unambiguous fashion. Ontologies can provide such specification. It will be the task of this paper to explain what ontologies are and how they can be used to facilitate interoperability between software systems used in computer aided design, architecture engineering and construction, and geographic information processing.
    Date
    3.12.2016 18:39:22
  19. Atran, S.; Medin, D.L.; Ross, N.: Evolution and devolution of knowledge : a tale of two biologies (2004) 0.01
    0.014172435 = product of:
      0.035431087 = sum of:
        0.02060168 = product of:
          0.04120336 = sum of:
            0.04120336 = weight(_text_:problems in 479) [ClassicSimilarity], result of:
              0.04120336 = score(doc=479,freq=2.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.27361554 = fieldWeight in 479, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.046875 = fieldNorm(doc=479)
          0.5 = coord(1/2)
        0.014829405 = product of:
          0.02965881 = sum of:
            0.02965881 = weight(_text_:22 in 479) [ClassicSimilarity], result of:
              0.02965881 = score(doc=479,freq=2.0), product of:
                0.12776221 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036484417 = queryNorm
                0.23214069 = fieldWeight in 479, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=479)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Anthropological inquiry suggests that all societies classify animals and plants in similar ways. Paradoxically, in the same cultures that have seen large advances in biological science, citizenry's practical knowledge of nature has dramatically diminished. Here we describe historical, cross-cultural and developmental research on how people ordinarily conceptualize organic nature (folkbiology), concentrating on cognitive consequences associated with knowledge devolution. We show that results on psychological studies of categorization and reasoning from "standard populations" fail to generalize to humanity at large. Usual populations (Euro-American college students) have impoverished experience with nature, which yields misleading results about knowledge acquisition and the ontogenetic relationship between folkbiology and folkpsychology. We also show that groups living in the same habitat can manifest strikingly distinct behaviors, cognitions and social relations relative to it. This has novel implications for environmental decision making and management, including commons problems.
    Date
    23. 1.2022 10:22:18
  20. Internet Adressen : die 'Gelben Seiten' für das Internet (1996) 0.01
    0.011826291 = product of:
      0.05913145 = sum of:
        0.05913145 = product of:
          0.1182629 = sum of:
            0.1182629 = weight(_text_:etc in 4469) [ClassicSimilarity], result of:
              0.1182629 = score(doc=4469,freq=2.0), product of:
                0.19761753 = queryWeight, product of:
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.036484417 = queryNorm
                0.5984434 = fieldWeight in 4469, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4469)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    The 'phone book' of the internet: all the important pages of the WWW for quick look-up, perfectly sorted and clearly listed from A to Z. An inexhaustible resource for your research in professional databases, university libraries, etc.

Languages

  • e 167
  • d 98
  • el 2
  • a 1
  • i 1
  • nl 1

Types

  • a 131
  • i 13
  • r 7
  • x 7
  • m 6
  • s 5
  • p 3
  • b 2
  • n 1