Search (1916 results, page 1 of 96)

  • Filter: year_i:[2000 TO 2010}
  1. Zheng, R.; Li, J.; Chen, H.; Huang, Z.: A framework for authorship identification of online messages : writing-style features and classification techniques (2006) 0.14
    0.13584822 = sum of:
      0.020753428 = product of:
        0.08301371 = sum of:
          0.08301371 = weight(_text_:authors in 5276) [ClassicSimilarity], result of:
            0.08301371 = score(doc=5276,freq=4.0), product of:
              0.23308155 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.051127672 = queryNorm
              0.35615736 = fieldWeight in 5276, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5276)
        0.25 = coord(1/4)
      0.115094796 = sum of:
        0.08045933 = weight(_text_:z in 5276) [ClassicSimilarity], result of:
          0.08045933 = score(doc=5276,freq=2.0), product of:
            0.2728844 = queryWeight, product of:
              5.337313 = idf(docFreq=577, maxDocs=44218)
              0.051127672 = queryNorm
            0.29484767 = fieldWeight in 5276, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.337313 = idf(docFreq=577, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5276)
        0.03463547 = weight(_text_:22 in 5276) [ClassicSimilarity], result of:
          0.03463547 = score(doc=5276,freq=2.0), product of:
            0.1790404 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051127672 = queryNorm
            0.19345059 = fieldWeight in 5276, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5276)
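    The explain tree above is ordinary Lucene ClassicSimilarity (tf-idf) output: each leaf weight is queryWeight x fieldWeight, where queryWeight = idf x queryNorm and fieldWeight = sqrt(freq) x idf x fieldNorm, and coord() scales sums by the fraction of query clauses that matched. A minimal Python sketch reproducing the numbers for weight(_text_:authors in 5276); the idf formula is Lucene's classic 1 + ln(maxDocs / (docFreq + 1)):

      import math

      # Constants copied from the explain tree for weight(_text_:authors in 5276).
      query_norm = 0.051127672
      doc_freq, max_docs = 1258, 44218
      freq, field_norm = 4.0, 0.0390625

      idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # 4.558814
      tf = math.sqrt(freq)                              # 2.0 = tf(freq=4.0)
      query_weight = idf * query_norm                   # 0.23308155 = queryWeight
      field_weight = tf * idf * field_norm              # 0.35615736 = fieldWeight
      print(query_weight * field_weight)                # 0.08301371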
    
    Abstract
    With the rapid proliferation of Internet technologies and applications, misuse of online messages for inappropriate or illegal purposes has become a major concern for society. The anonymous nature of online-message distribution makes identity tracing a critical problem. We developed a framework for authorship identification of online messages to address the identity-tracing problem. In this framework, four types of writing-style features (lexical, syntactic, structural, and content-specific features) are extracted and inductive learning algorithms are used to build feature-based classification models to identify authorship of online messages. To examine this framework, we conducted experiments on English and Chinese online-newsgroup messages. We compared the discriminating power of the four types of features and of three classification techniques: decision trees, backpropagation neural networks, and support vector machines. The experimental results showed that the proposed approach was able to identify authors of online messages with satisfactory accuracy of 70 to 95%. All four types of message features contributed to discriminating authors of online messages. Support vector machines outperformed the other two classification techniques in our experiments. The high performance we achieved for both the English and Chinese datasets showed the potential of this approach in a multiple-language context.
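    As a rough illustration of the pipeline the abstract describes - style features extracted from messages, then a learned classifier - here is a hedged sketch assuming scikit-learn; the toy data and character-n-gram features are stand-ins, not the authors' exact lexical/syntactic/structural/content-specific feature set:

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import LinearSVC

      # Toy messages with known authors (the paper used English and Chinese
      # newsgroup postings).
      messages = [
          "Selling my bike, contact me ASAP!!",
          "ASAP please, great deal, email me!!",
          "The seminar on retrieval starts at noon.",
          "Noon seminar today; retrieval methods discussed.",
      ]
      authors = ["A", "A", "B", "B"]

      # Character n-grams as a crude proxy for writing-style features; a linear
      # SVM plays the role of the best-performing classifier in the experiments.
      model = make_pipeline(
          TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
          LinearSVC(),
      )
      model.fit(messages, authors)
      print(model.predict(["Great deal on a bike, email me ASAP!!"]))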
    Date
    22. 7.2006 16:14:37
  2. Ackermann, E.: Piaget's constructivism, Papert's constructionism : what's the difference? (2001) 0.13
    0.13006514 = product of:
      0.2601303 = sum of:
        0.2601303 = product of:
          0.5202606 = sum of:
            0.20301075 = weight(_text_:3a in 692) [ClassicSimilarity], result of:
              0.20301075 = score(doc=692,freq=2.0), product of:
                0.43346098 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.051127672 = queryNorm
                0.46834838 = fieldWeight in 692, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=692)
            0.3172498 = weight(_text_:2c in 692) [ClassicSimilarity], result of:
              0.3172498 = score(doc=692,freq=2.0), product of:
                0.5418651 = queryWeight, product of:
                  10.598275 = idf(docFreq=2, maxDocs=44218)
                  0.051127672 = queryNorm
                0.5854775 = fieldWeight in 692, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  10.598275 = idf(docFreq=2, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=692)
          0.5 = coord(2/4)
      0.5 = coord(1/2)
    
    Content
    Cf.: https://www.semanticscholar.org/paper/Piaget-%E2%80%99-s-Constructivism-%2C-Papert-%E2%80%99-s-%3A-What-%E2%80%99-s-Ackermann/89cbcc1e740a4591443ff4765a6ae8df0fdf5554. Further pointers to related contributions are given there. Also published in: Learning Group Publication 5(2001) no.3, S.438.
  3. Gödert, W.; Hubrich, J.; Boteram, F.: Thematische Recherche und Interoperabilität : Wege zur Optimierung des Zugriffs auf heterogen erschlossene Dokumente (2009) 0.10
    0.096630186 = sum of:
      0.07931245 = product of:
        0.3172498 = sum of:
          0.3172498 = weight(_text_:2c in 193) [ClassicSimilarity], result of:
            0.3172498 = score(doc=193,freq=2.0), product of:
              0.5418651 = queryWeight, product of:
                10.598275 = idf(docFreq=2, maxDocs=44218)
                0.051127672 = queryNorm
              0.5854775 = fieldWeight in 193, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                10.598275 = idf(docFreq=2, maxDocs=44218)
                0.0390625 = fieldNorm(doc=193)
        0.25 = coord(1/4)
      0.017317735 = product of:
        0.03463547 = sum of:
          0.03463547 = weight(_text_:22 in 193) [ClassicSimilarity], result of:
            0.03463547 = score(doc=193,freq=2.0), product of:
              0.1790404 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051127672 = queryNorm
              0.19345059 = fieldWeight in 193, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=193)
        0.5 = coord(1/2)
    
    Source
    https://opus4.kobv.de/opus4-bib-info/frontdoor/index/index/searchtype/authorsearch/author/%22Hubrich%2C+Jessica%22/docId/703/start/0/rows/20
  4. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.08
    0.0816845 = sum of:
      0.06090322 = product of:
        0.24361289 = sum of:
          0.24361289 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.24361289 = score(doc=562,freq=2.0), product of:
              0.43346098 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.051127672 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.25 = coord(1/4)
      0.02078128 = product of:
        0.04156256 = sum of:
          0.04156256 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.04156256 = score(doc=562,freq=2.0), product of:
              0.1790404 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051127672 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Content
    Vgl.: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
    Date
    8. 1.2013 10:22:32
  5. Sireteanu, R.: "Dumpfes Sehen" verändert das Gehirn (2000) 0.08
    0.08056636 = product of:
      0.16113272 = sum of:
        0.16113272 = sum of:
          0.11264306 = weight(_text_:z in 5573) [ClassicSimilarity], result of:
            0.11264306 = score(doc=5573,freq=2.0), product of:
              0.2728844 = queryWeight, product of:
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.051127672 = queryNorm
              0.41278675 = fieldWeight in 5573, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5573)
          0.048489656 = weight(_text_:22 in 5573) [ClassicSimilarity], result of:
            0.048489656 = score(doc=5573,freq=2.0), product of:
              0.1790404 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051127672 = queryNorm
              0.2708308 = fieldWeight in 5573, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5573)
      0.5 = coord(1/2)
    
    Abstract
    One of the most fascinating aspects of human visual perception is the ability to fuse the images of the two eyes into a single, spatial percept - one in which objects stand out plastically from their background. This seemingly effortless feat requires flawless visual input in early childhood. Impairments of vision in babies and small children, e.g. through unequal refractive power of the two eyes (anisometropia), opaque ocular media due to drooping eyelids (ptosis), or congenital clouding of the lens (cataract), can lead to an irreversible loss of sight in one or both eyes. Specialists call this condition amblyopia, which translates roughly as "dull vision".
    Source
    Max Planck Forschung. 2000, H.1, S.22-25
  6. Khurshid, Z.: The impact of information technology on job requirements and qualifications for catalogers (2003) 0.08
    0.08056636 = product of:
      0.16113272 = sum of:
        0.16113272 = sum of:
          0.11264306 = weight(_text_:z in 2323) [ClassicSimilarity], result of:
            0.11264306 = score(doc=2323,freq=2.0), product of:
              0.2728844 = queryWeight, product of:
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.051127672 = queryNorm
              0.41278675 = fieldWeight in 2323, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2323)
          0.048489656 = weight(_text_:22 in 2323) [ClassicSimilarity], result of:
            0.048489656 = score(doc=2323,freq=2.0), product of:
              0.1790404 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051127672 = queryNorm
              0.2708308 = fieldWeight in 2323, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2323)
      0.5 = coord(1/2)
    
    Source
    Information technology and libraries. 22(2003), March, S.18-21
  7. Kanaeva, Z.: Ranking: Google und CiteSeer (2005) 0.08
    0.08056636 = product of:
      0.16113272 = sum of:
        0.16113272 = sum of:
          0.11264306 = weight(_text_:z in 3276) [ClassicSimilarity], result of:
            0.11264306 = score(doc=3276,freq=2.0), product of:
              0.2728844 = queryWeight, product of:
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.051127672 = queryNorm
              0.41278675 = fieldWeight in 3276, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3276)
          0.048489656 = weight(_text_:22 in 3276) [ClassicSimilarity], result of:
            0.048489656 = score(doc=3276,freq=2.0), product of:
              0.1790404 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051127672 = queryNorm
              0.2708308 = fieldWeight in 3276, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3276)
      0.5 = coord(1/2)
    
    Date
    20. 3.2005 16:23:22
  8. Wagner-Döbler, R.: Umberto Ecos Betrachtung einer benützerfeindlichen Bibliothek : 25 Jahre danach (2006) 0.08
    0.08056636 = product of:
      0.16113272 = sum of:
        0.16113272 = sum of:
          0.11264306 = weight(_text_:z in 29) [ClassicSimilarity], result of:
            0.11264306 = score(doc=29,freq=2.0), product of:
              0.2728844 = queryWeight, product of:
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.051127672 = queryNorm
              0.41278675 = fieldWeight in 29, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.0546875 = fieldNorm(doc=29)
          0.048489656 = weight(_text_:22 in 29) [ClassicSimilarity], result of:
            0.048489656 = score(doc=29,freq=2.0), product of:
              0.1790404 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051127672 = queryNorm
              0.2708308 = fieldWeight in 29, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=29)
      0.5 = coord(1/2)
    
    Abstract
    Some 25 years ago, on the occasion of the anniversary of an Italian municipal library, Umberto Eco sketched the negative utopia of a user-hostile library; this contribution recalls its most striking characteristics. Much of what was then perceived partly as a desideratum, partly almost as a utopia, has since been achieved, indeed has become a matter of course. If Eco's perspective is applied to some aspects of the user-friendliness of today's digital libraries and information resources, e.g. navigability and accessibility, desiderata and room for utopias can still be identified here too, notwithstanding what has been achieved.
    Date
    27.10.2006 14:22:06
  9. Jiang, T.: Architektur und Anwendungen des kollaborativen Lernsystems K3 (2008) 0.08
    0.08056636 = product of:
      0.16113272 = sum of:
        0.16113272 = sum of:
          0.11264306 = weight(_text_:z in 1391) [ClassicSimilarity], result of:
            0.11264306 = score(doc=1391,freq=2.0), product of:
              0.2728844 = queryWeight, product of:
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.051127672 = queryNorm
              0.41278675 = fieldWeight in 1391, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1391)
          0.048489656 = weight(_text_:22 in 1391) [ClassicSimilarity], result of:
            0.048489656 = score(doc=1391,freq=2.0), product of:
              0.1790404 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051127672 = queryNorm
              0.2708308 = fieldWeight in 1391, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1391)
      0.5 = coord(1/2)
    
    Abstract
    The K3 architecture for the technical development and implementation of network-based knowledge management in teaching is presented. The current K3 system consists of three central components: K3Forum (discourse), K3Vis (visualization), and K3Wiki (collaborative text production, e.g. for summaries). K3 uses open-source software under the LGPL license. This guarantees free use, manageable development costs, and sustainability, and secures independence from commercial software vendors. Thanks to the component-based development concept, K3 can be continuously extended in a flexible and robust way without compromising the stability of existing functionality. The article documents the main components and functions of K3 by example, so that future developers can easily gain an overview of the K3 system. The requirements for transferring the system to environments outside Konstanz are described.
    Date
    10. 2.2008 14:22:00
  10. Verlic, Z.; Repinc, U.: Informacijsko vedenje z iskalno strategijo (2000) 0.08
    0.07965067 = product of:
      0.15930134 = sum of:
        0.15930134 = product of:
          0.31860268 = sum of:
            0.31860268 = weight(_text_:z in 7863) [ClassicSimilarity], result of:
              0.31860268 = score(doc=7863,freq=4.0), product of:
                0.2728844 = queryWeight, product of:
                  5.337313 = idf(docFreq=577, maxDocs=44218)
                  0.051127672 = queryNorm
                1.1675372 = fieldWeight in 7863, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.337313 = idf(docFreq=577, maxDocs=44218)
                  0.109375 = fieldNorm(doc=7863)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  11. Reiner, U.: Automatische DDC-Klassifizierung bibliografischer Titeldatensätze der Deutschen Nationalbibliografie (2009) 0.08
    0.07822166 = product of:
      0.15644331 = sum of:
        0.15644331 = sum of:
          0.12873493 = weight(_text_:z in 3284) [ClassicSimilarity], result of:
            0.12873493 = score(doc=3284,freq=8.0), product of:
              0.2728844 = queryWeight, product of:
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.051127672 = queryNorm
              0.47175628 = fieldWeight in 3284, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.03125 = fieldNorm(doc=3284)
          0.027708376 = weight(_text_:22 in 3284) [ClassicSimilarity], result of:
            0.027708376 = score(doc=3284,freq=2.0), product of:
              0.1790404 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051127672 = queryNorm
              0.15476047 = fieldWeight in 3284, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=3284)
      0.5 = coord(1/2)
    
    Abstract
    Classifying objects (e.g. fauna, flora, texts) is a process based on human intelligence. In computer science - especially in the field of artificial intelligence (AI) - one question under study is to what extent processes that require human intelligence can be automated. It has turned out that solving everyday problems poses a greater challenge than solving special-purpose problems such as building a chess computer; thus "Rybka" has been the reigning computer chess world champion since June 2007. To what extent everyday problems can be solved with AI methods is - in the general case - still an open question. In solving everyday problems, the processing of natural language, e.g. understanding it, plays an essential role. Realizing "common sense" as a machine (in the Cyc knowledge base, in the form of facts and rules) has been Lenat's goal since 1984. Regarding the AI showcase project "Cyc" there are Cyc optimists and Cyc pessimists. Understanding natural language (e.g. work titles, abstracts, prefaces, tables of contents) is also necessary for the intellectual classification of bibliographic title records or online publications, in order to classify these text objects correctly. Since 2007, the Deutsche Nationalbibliothek has classified nearly all publications intellectually with the Dewey Decimal Classification (DDC).
    Date
    22. 1.2010 14:41:24
  12. Thissen, F.: Screen-Design-Manual : Communicating Effectively Through Multimedia (2003) 0.07
    0.074211076 = product of:
      0.14842215 = sum of:
        0.14842215 = sum of:
          0.113786675 = weight(_text_:z in 1397) [ClassicSimilarity], result of:
            0.113786675 = score(doc=1397,freq=4.0), product of:
              0.2728844 = queryWeight, product of:
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.051127672 = queryNorm
              0.41697758 = fieldWeight in 1397, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1397)
          0.03463547 = weight(_text_:22 in 1397) [ClassicSimilarity], result of:
            0.03463547 = score(doc=1397,freq=2.0), product of:
              0.1790404 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051127672 = queryNorm
              0.19345059 = fieldWeight in 1397, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1397)
      0.5 = coord(1/2)
    
    Classification
    ST 253 Informatik / Monographien / Software und -entwicklung / Web-Programmierwerkzeuge (A-Z)
    Date
    22. 3.2008 14:29:25
    RVK
    ST 253 Informatik / Monographien / Software und -entwicklung / Web-Programmierwerkzeuge (A-Z)
  13. Neuroth, H.; Lepschy, P.: Das EU-Projekt Renardus (2001) 0.07
    0.06905688 = product of:
      0.13811377 = sum of:
        0.13811377 = sum of:
          0.0965512 = weight(_text_:z in 5589) [ClassicSimilarity], result of:
            0.0965512 = score(doc=5589,freq=2.0), product of:
              0.2728844 = queryWeight, product of:
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.051127672 = queryNorm
              0.35381722 = fieldWeight in 5589, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.046875 = fieldNorm(doc=5589)
          0.04156256 = weight(_text_:22 in 5589) [ClassicSimilarity], result of:
            0.04156256 = score(doc=5589,freq=2.0), product of:
              0.1790404 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051127672 = queryNorm
              0.23214069 = fieldWeight in 5589, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=5589)
      0.5 = coord(1/2)
    
    Abstract
    The full project name of Renardus is "Academic Subject Gateway Service Europe". Renardus is funded by the European Union in the Fifth Framework Programme under the key action "Information Society Technologies" in the second thematic programme "Promoting a User-friendly Information Society". The project runs from January 2000 to June 2002. A total of twelve partners (principal and assistant contractors) from Finland, Denmark, Sweden, Great Britain, the Netherlands, France, and Germany participate in the project. The European Union supports the project with 1.7 million euros; the total costs, including the partners' own contributions, amount to 2.3 million euros. The goal of Renardus is to provide access, through a single interface, to distributed collections of high-quality Internet resources in Europe. This interface is realized through the Renardus broker, which enables cross-searching and cross-browsing across distributed quality-controlled subject gateways. A further goal of Renardus is to evaluate possibilities of metadata sharing and to test or implement them in small experiments between, e.g., subject gateways and a national library.
    Date
    22. 6.2002 19:32:15
  14. Staatsbibliothek zu Berlin: Datenbank "Gesamtkatalog der Wiegendrucke" online (2003) 0.07
    0.06905688 = product of:
      0.13811377 = sum of:
        0.13811377 = sum of:
          0.0965512 = weight(_text_:z in 1933) [ClassicSimilarity], result of:
            0.0965512 = score(doc=1933,freq=2.0), product of:
              0.2728844 = queryWeight, product of:
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.051127672 = queryNorm
              0.35381722 = fieldWeight in 1933, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.046875 = fieldNorm(doc=1933)
          0.04156256 = weight(_text_:22 in 1933) [ClassicSimilarity], result of:
            0.04156256 = score(doc=1933,freq=2.0), product of:
              0.1790404 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051127672 = queryNorm
              0.23214069 = fieldWeight in 1933, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=1933)
      0.5 = coord(1/2)
    
    Abstract
    The freely accessible database "Gesamtkatalog der Wiegendrucke" was officially made available to the professional community on the Internet on 20 August 2003. The starting point for this database is the printed version of the "Gesamtkatalog der Wiegendrucke", published in individual volumes by the Hiersemann Verlag since 1925, which records in alphabetical order all printed works of the 15th century worldwide. The Gesamtkatalog der Wiegendrucke has been edited at the Staatsbibliothek zu Berlin for almost 100 years. Ten volumes covering the alphabet sections "A-H" have appeared so far. This material, together with the editorial team's extensive collection, which now also covers the alphabet section "I-Z", has been processed electronically in recent years with the support of the Deutsche Forschungsgemeinschaft (DFG). The database contains, among other things, information on the extent, the number of lines, and the printing types, and in part also the ownership records of incunabula. A directory of all libraries holding incunabula makes it possible to trace the sometimes fascinating journeys of incunabula collections. The search engine is not a typical Web application: both server and client run on an application server at the Staatsbibliothek zu Berlin.
    Date
    21. 8.2004 18:42:22
  15. Keßler, M.: KIM - Kompetenzzentrum Interoperable Metadaten : Gemeinsamer Workshop der Deutschen Nationalbibliothek und des Arbeitskreises Elektronisches Publizieren (AKEP) (2007) 0.07
    0.06905688 = product of:
      0.13811377 = sum of:
        0.13811377 = sum of:
          0.0965512 = weight(_text_:z in 2406) [ClassicSimilarity], result of:
            0.0965512 = score(doc=2406,freq=2.0), product of:
              0.2728844 = queryWeight, product of:
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.051127672 = queryNorm
              0.35381722 = fieldWeight in 2406, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.046875 = fieldNorm(doc=2406)
          0.04156256 = weight(_text_:22 in 2406) [ClassicSimilarity], result of:
            0.04156256 = score(doc=2406,freq=2.0), product of:
              0.1790404 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051127672 = queryNorm
              0.23214069 = fieldWeight in 2406, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2406)
      0.5 = coord(1/2)
    
    Abstract
    The Kompetenzzentrum Interoperable Metadaten (KIM) is an information and communication platform for metadata users and developers, aimed at improving the interoperability of metadata in the German-speaking world. KIM supports and promotes the development of metadata standards, the interoperable design of formats, and thus the optimal use of metadata in digital information environments, by means of teaching materials, training, and consulting. The competence centre is being established within the KIM project funded by the Deutsche Forschungsgemeinschaft (DFG), led by the Niedersächsische Staats- und Universitätsbibliothek Göttingen (SUB) in cooperation with the Deutsche Nationalbibliothek (DNB). Project partners are the Hochschule für Technik und Wirtschaft HTW Chur and the Eidgenössische Technische Hochschule (ETH) Zürich in Switzerland, and the Universität Wien in Austria. The task of the competence centre is to improve the interoperability of metadata. Interoperability is the ability of heterogeneous systems to work together: data from independent systems can be exchanged or merged, for example to enable cross-system searching or browsing. Data are stored in part in very different database systems. Interoperability arises when the systems implement comprehensive interfaces that allow a largely lossless mapping of the internal data representation.
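    To make the closing point concrete - a largely lossless mapping between internal data representations as the basis of interoperability - a toy crosswalk in Python; the internal field names and the Dublin Core target elements are illustrative assumptions, not KIM's actual formats:

      # Hypothetical internal record of one system (invented field names).
      record = {"titel": "Interoperable Metadaten", "verfasser": "Keßler, M.", "jahr": "2007"}

      # Crosswalk into a shared schema (Dublin Core elements, for illustration).
      crosswalk = {"titel": "dc:title", "verfasser": "dc:creator", "jahr": "dc:date"}

      shared = {crosswalk[field]: value for field, value in record.items()}
      print(shared)  # {'dc:title': 'Interoperable Metadaten', 'dc:creator': 'Keßler, M.', 'dc:date': '2007'}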
    Source
    Dialog mit Bibliotheken. 20(2008) H.1, S.22-24
  16. Genereux, C.: Building connections : a review of the serials literature 2004 through 2005 (2007) 0.07
    0.06905688 = product of:
      0.13811377 = sum of:
        0.13811377 = sum of:
          0.0965512 = weight(_text_:z in 2548) [ClassicSimilarity], result of:
            0.0965512 = score(doc=2548,freq=2.0), product of:
              0.2728844 = queryWeight, product of:
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.051127672 = queryNorm
              0.35381722 = fieldWeight in 2548, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.046875 = fieldNorm(doc=2548)
          0.04156256 = weight(_text_:22 in 2548) [ClassicSimilarity], result of:
            0.04156256 = score(doc=2548,freq=2.0), product of:
              0.1790404 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051127672 = queryNorm
              0.23214069 = fieldWeight in 2548, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2548)
      0.5 = coord(1/2)
    
    Abstract
    This review of 2004 and 2005 serials literature covers the themes of cost, management, and access. Interwoven through the serials literature of these two years are the importance of collaboration, communication, and linkages between scholars, publishers, subscription agents and other intermediaries, and librarians. The emphasis in the literature is on electronic serials and their impact on publishing, libraries, and vendors. In response to the crisis of escalating journal prices and libraries' dissatisfaction with the Big Deal licensing agreements, Open Access journals and publishing models were promoted. Libraries subscribed to or licensed increasing numbers of electronic serials. As a result, libraries sought ways to better manage licensing and subscription data (not handled by traditional integrated library systems) by implementing electronic resources management systems. In order to provide users with better, faster, and more current information on and access to electronic serials, libraries implemented tools and services to provide A-Z title lists, title by title coverage data, MARC records, and OpenURL link resolvers.
    Date
    10. 9.2000 17:38:22
  17. He, Z.-L.: International collaboration does not have greater epistemic authority (2009) 0.07
    0.06905688 = product of:
      0.13811377 = sum of:
        0.13811377 = sum of:
          0.0965512 = weight(_text_:z in 3122) [ClassicSimilarity], result of:
            0.0965512 = score(doc=3122,freq=2.0), product of:
              0.2728844 = queryWeight, product of:
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.051127672 = queryNorm
              0.35381722 = fieldWeight in 3122, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.046875 = fieldNorm(doc=3122)
          0.04156256 = weight(_text_:22 in 3122) [ClassicSimilarity], result of:
            0.04156256 = score(doc=3122,freq=2.0), product of:
              0.1790404 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051127672 = queryNorm
              0.23214069 = fieldWeight in 3122, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=3122)
      0.5 = coord(1/2)
    
    Date
    26. 9.2009 11:22:05
  18. Auer, S.; Bizer, C.; Kobilarov, G.; Lehmann, J.; Cyganiak, R.; Ives, Z.: DBpedia: a nucleus for a Web of open data (2007) 0.07
    0.06588547 = sum of:
      0.017609866 = product of:
        0.070439465 = sum of:
          0.070439465 = weight(_text_:authors in 4260) [ClassicSimilarity], result of:
            0.070439465 = score(doc=4260,freq=2.0), product of:
              0.23308155 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.051127672 = queryNorm
              0.30220953 = fieldWeight in 4260, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.046875 = fieldNorm(doc=4260)
        0.25 = coord(1/4)
      0.0482756 = product of:
        0.0965512 = sum of:
          0.0965512 = weight(_text_:z in 4260) [ClassicSimilarity], result of:
            0.0965512 = score(doc=4260,freq=2.0), product of:
              0.2728844 = queryWeight, product of:
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.051127672 = queryNorm
              0.35381722 = fieldWeight in 4260, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.046875 = fieldNorm(doc=4260)
        0.5 = coord(1/2)
    
    Abstract
    DBpedia is a community effort to extract structured information from Wikipedia and to make this information available on the Web. DBpedia allows you to ask sophisticated queries against datasets derived from Wikipedia and to link other datasets on the Web to Wikipedia data. We describe the extraction of the DBpedia datasets, and how the resulting information is published on the Web for human and machine consumption. We describe some emerging applications from the DBpedia community and show how website authors can facilitate DBpedia content within their sites. Finally, we present the current status of interlinking DBpedia with other open datasets on the Web and outline how DBpedia could serve as a nucleus for an emerging Web of open data.
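    As a small, hedged example of the kind of query the abstract mentions, here is a sketch assuming the Python requests package and DBpedia's public SPARQL endpoint (endpoint availability is not guaranteed; the response follows the standard SPARQL JSON results format):

      import requests

      # Ask DBpedia for the English abstract of the resource describing DBpedia itself.
      query = """
      PREFIX dbo: <http://dbpedia.org/ontology/>
      SELECT ?abstract WHERE {
        <http://dbpedia.org/resource/DBpedia> dbo:abstract ?abstract .
        FILTER (lang(?abstract) = "en")
      }
      """
      resp = requests.get(
          "https://dbpedia.org/sparql",
          params={"query": query, "format": "application/sparql-results+json"},
          timeout=30,
      )
      for binding in resp.json()["results"]["bindings"]:
          print(binding["abstract"]["value"][:200])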
  19. Doszkocs, T.E.; Zamora, A.: Dictionary services and spelling aids for Web searching (2004) 0.06
    0.06472064 = product of:
      0.12944128 = sum of:
        0.12944128 = sum of:
          0.08045933 = weight(_text_:z in 2541) [ClassicSimilarity], result of:
            0.08045933 = score(doc=2541,freq=2.0), product of:
              0.2728844 = queryWeight, product of:
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.051127672 = queryNorm
              0.29484767 = fieldWeight in 2541, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2541)
          0.04898195 = weight(_text_:22 in 2541) [ClassicSimilarity], result of:
            0.04898195 = score(doc=2541,freq=4.0), product of:
              0.1790404 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051127672 = queryNorm
              0.27358043 = fieldWeight in 2541, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2541)
      0.5 = coord(1/2)
    
    Abstract
    The Specialized Information Services Division (SIS) of the National Library of Medicine (NLM) provides Web access to more than a dozen scientific databases on toxicology and the environment on TOXNET. Search queries on TOXNET often include misspelled or variant English words, medical and scientific jargon and chemical names. Following the example of search engines like Google and ClinicalTrials.gov, we set out to develop a spelling "suggestion" system for increased recall and precision in TOXNET searching. This paper describes development of dictionary technology that can be used in a variety of applications such as orthographic verification, writing aid, natural language processing, and information storage and retrieval. The design of the technology allows building complex applications using the components developed in the earlier phases of the work in a modular fashion without extensive rewriting of computer code. Since many of the potential applications envisioned for this work have on-line or web-based interfaces, the dictionaries and other computer components must have fast response, and must be adaptable to open-ended database vocabularies, including chemical nomenclature. The dictionary vocabulary for this work was derived from SIS and other databases and specialized resources, such as NLM's Unified Medical Language Systems (UMLS). The resulting technology, A-Z Dictionary (AZdict), has three major constituents: 1) the vocabulary list, 2) the word attributes that define part of speech and morphological relationships between words in the list, and 3) a set of programs that implements the retrieval of words and their attributes, and determines similarity between words (ChemSpell). These three components can be used in various applications such as spelling verification, spelling aid, part-of-speech tagging, paraphrasing, and many other natural language processing functions.
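    The internals of AZdict and ChemSpell are not given in the abstract; as a generic stand-in, Python's difflib shows the basic shape of a similarity-based spelling suggester over a fixed vocabulary (toy word list invented here):

      from difflib import get_close_matches

      # Toy vocabulary; the real system drew on NLM databases, UMLS, and
      # chemical nomenclature.
      vocabulary = ["acetaminophen", "benzene", "toluene", "formaldehyde", "toxicology"]

      def suggest(word, n=3):
          """Return up to n vocabulary words most similar to the input."""
          return get_close_matches(word.lower(), vocabulary, n=n, cutoff=0.6)

      print(suggest("acetominophen"))  # ['acetaminophen']
      print(suggest("toxicolgy"))      # ['toxicology']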
    Date
    14. 8.2004 17:22:56
    Source
    Online. 28(2004) no.3, S.22-29
  20. Chen, Z.; Fu, B.: On the complexity of Rocchio's similarity-based relevance feedback algorithm (2007) 0.06
    0.06098309 = sum of:
      0.020753428 = product of:
        0.08301371 = sum of:
          0.08301371 = weight(_text_:authors in 578) [ClassicSimilarity], result of:
            0.08301371 = score(doc=578,freq=4.0), product of:
              0.23308155 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.051127672 = queryNorm
              0.35615736 = fieldWeight in 578, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0390625 = fieldNorm(doc=578)
        0.25 = coord(1/4)
      0.040229663 = product of:
        0.08045933 = sum of:
          0.08045933 = weight(_text_:z in 578) [ClassicSimilarity], result of:
            0.08045933 = score(doc=578,freq=2.0), product of:
              0.2728844 = queryWeight, product of:
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.051127672 = queryNorm
              0.29484767 = fieldWeight in 578, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.0390625 = fieldNorm(doc=578)
        0.5 = coord(1/2)
    
    Abstract
    Rocchio's similarity-based relevance feedback algorithm, one of the most important query reformation methods in information retrieval, is essentially an adaptive algorithm that learns from examples to search for documents represented by a linear classifier. Despite its popularity in various applications, there is little rigorous analysis of its learning complexity in the literature. In this article, the authors prove for the first time that the learning complexity of Rocchio's algorithm is O(d + d^2(log d + log n)) over the discretized vector space {0, ..., n-1}^d when the inner-product similarity measure is used. The upper bound on the learning complexity for searching for documents represented by a monotone linear classifier (q, θ) over {0, ..., n-1}^d can be improved to at most 1 + 2k(n-1)(log d + log(n-1)), where k is the number of nonzero components in q. Several lower bounds on the learning complexity are also obtained for Rocchio's algorithm. For example, the authors prove that Rocchio's algorithm has a lower bound Ω((d choose 2) log n) on its learning complexity over the Boolean vector space {0,1}^d.
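    The article analyzes the learning complexity of Rocchio's algorithm rather than presenting code; for orientation, the textbook form of the update step (standard alpha/beta/gamma weights; a sketch, not the authors' formulation) looks like this:

      import numpy as np

      def rocchio(q, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
          """One round of Rocchio relevance feedback on vector-space representations."""
          q_new = alpha * np.asarray(q, dtype=float)
          if len(relevant):
              q_new += beta * np.mean(relevant, axis=0)      # pull toward relevant docs
          if len(nonrelevant):
              q_new -= gamma * np.mean(nonrelevant, axis=0)  # push away from nonrelevant
          return q_new

      q = [1.0, 0.0, 0.5]                       # initial query vector
      rel = [[1.0, 1.0, 0.0], [0.8, 0.9, 0.1]]  # judged relevant
      nonrel = [[0.0, 0.1, 1.0]]                # judged nonrelevant
      print(rocchio(q, rel, nonrel))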

Types

  • a 1578
  • m 248
  • s 91
  • el 90
  • b 26
  • x 17
  • i 13
  • r 5
  • n 3