Search (150 results, page 1 of 8)

  • type_ss:"el"
  • year_i:[2000 TO 2010}
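  The two filter chips above are Lucene/Solr-style field filters; the closing curly brace in year_i:[2000 TO 2010} marks a half-open range (2010 itself is excluded). A minimal sketch of how such a filtered request could be issued against a Solr endpoint; the host, core name and free-text query below are assumptions, only the two filters and the field names type_ss and year_i come from this page.
    import json
    import urllib.parse
    import urllib.request

    # Hypothetical Solr endpoint; the real host and core of this catalogue are not shown here.
    base = "http://localhost:8983/solr/catalogue/select"
    params = {
        "q": "semantic interoperability problem",   # placeholder free-text query
        "fq": ['type_ss:"el"',                      # facet filter: electronic documents
               "year_i:[2000 TO 2010}"],            # half-open range: 2000 <= year < 2010
        "rows": 20,                                 # 20 hits per page, as displayed
        "debugQuery": "true",                       # returns the per-document explain trees
        "wt": "json",
    }
    url = base + "?" + urllib.parse.urlencode(params, doseq=True)
    with urllib.request.urlopen(url) as resp:
        print(json.load(resp)["response"]["numFound"])
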
  1. Heflin, J.; Hendler, J.: Semantic interoperability on the Web (2000) 0.01
    0.0074971514 = product of:
      0.029988606 = sum of:
        0.02024465 = product of:
          0.06073395 = sum of:
            0.06073395 = weight(_text_:problem in 759) [ClassicSimilarity], result of:
              0.06073395 = score(doc=759,freq=4.0), product of:
                0.13082431 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.030822188 = queryNorm
                0.46424055 = fieldWeight in 759, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=759)
          0.33333334 = coord(1/3)
        0.009743956 = product of:
          0.029231867 = sum of:
            0.029231867 = weight(_text_:22 in 759) [ClassicSimilarity], result of:
              0.029231867 = score(doc=759,freq=2.0), product of:
                0.10793405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.030822188 = queryNorm
                0.2708308 = fieldWeight in 759, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=759)
          0.33333334 = coord(1/3)
      0.25 = coord(2/8)
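    The indented breakdown above is Lucene's ClassicSimilarity explain output: each matched query term contributes score = queryWeight x fieldWeight, with queryWeight = idf x queryNorm and fieldWeight = sqrt(termFreq) x idf x fieldNorm, and the partial sums are then scaled by the coord() factors. A minimal sketch that re-derives the displayed numbers for this first hit (doc 759), using only the figures shown above:
      import math

      def term_score(freq, idf, query_norm, field_norm):
          # ClassicSimilarity leaf score: queryWeight * fieldWeight
          query_weight = idf * query_norm                    # 4.244485 * 0.030822188 = 0.13082431
          field_weight = math.sqrt(freq) * idf * field_norm  # tf(freq) = sqrt(freq)
          return query_weight * field_weight

      query_norm = 0.030822188
      problem = term_score(4.0, 4.244485, query_norm, 0.0546875)      # ~0.06073395
      twenty_two = term_score(2.0, 3.5018296, query_norm, 0.0546875)  # ~0.029231867

      # each clause is scaled by coord(1/3); the outer sum by coord(2/8)
      total = (problem * (1 / 3) + twenty_two * (1 / 3)) * (2 / 8)
      print(total)  # ~0.0074971514, shown rounded as 0.01 next to the title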
    
    Abstract
    XML will have a profound impact on the way data is exchanged on the Internet. An important feature of this language is the separation of content from presentation, which makes it easier to select and/or reformat the data. However, due to the likelihood of numerous industry and domain specific DTDs, those who wish to integrate information will still be faced with the problem of semantic interoperability. In this paper we discuss why this problem is not solved by XML, and then discuss why the Resource Description Framework is only a partial solution. We then present the SHOE language, which we feel has many of the features necessary to enable a semantic web, and describe an existing set of tools that make it easy to use the language.
    Date
    11. 5.2013 19:22:18
  2. Nicholson, D.: High-Level Thesaurus (HILT) project : interoperability and cross-searching distributed services (200?) 0.01
    0.006899295 = product of:
      0.02759718 = sum of:
        0.016360147 = product of:
          0.04908044 = sum of:
            0.04908044 = weight(_text_:problem in 5966) [ClassicSimilarity], result of:
              0.04908044 = score(doc=5966,freq=2.0), product of:
                0.13082431 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.030822188 = queryNorm
                0.375163 = fieldWeight in 5966, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5966)
          0.33333334 = coord(1/3)
        0.011237033 = product of:
          0.033711098 = sum of:
            0.033711098 = weight(_text_:29 in 5966) [ClassicSimilarity], result of:
              0.033711098 = score(doc=5966,freq=2.0), product of:
                0.108422816 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.030822188 = queryNorm
                0.31092256 = fieldWeight in 5966, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5966)
          0.33333334 = coord(1/3)
      0.25 = coord(2/8)
    
    Abstract
    My presentation is about HILT, the High-Level Thesaurus Project, which is looking, very roughly speaking, at how we might deal with interoperability problems relating to cross-searching distributed services by subject. The aims of HILT are to study and report on the problem of cross-searching and browsing by subject across a range of communities, services, and service or resource types in the UK, given the wide range of subject schemes and associated practices in place.
    Date
    13. 4.2008 12:29:16
  3. Faro, S.; Francesconi, E.; Marinai, E.; Sandrucci, V.: Report on execution and results of the interoperability tests (2008) 0.01
    0.006874024 = product of:
      0.027496096 = sum of:
        0.016360147 = product of:
          0.04908044 = sum of:
            0.04908044 = weight(_text_:problem in 7411) [ClassicSimilarity], result of:
              0.04908044 = score(doc=7411,freq=2.0), product of:
                0.13082431 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.030822188 = queryNorm
                0.375163 = fieldWeight in 7411, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.0625 = fieldNorm(doc=7411)
          0.33333334 = coord(1/3)
        0.01113595 = product of:
          0.03340785 = sum of:
            0.03340785 = weight(_text_:22 in 7411) [ClassicSimilarity], result of:
              0.03340785 = score(doc=7411,freq=2.0), product of:
                0.10793405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.030822188 = queryNorm
                0.30952093 = fieldWeight in 7411, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=7411)
          0.33333334 = coord(1/3)
      0.25 = coord(2/8)
    
    Abstract
    - Formal characterization given to the thesaurus mapping problem
    - Interoperability workflow
      - Thesauri SKOS Core transformation
      - Thesaurus mapping algorithms implementation
    - The "gold standard" data set and the THALEN application
    - Thesaurus interoperability assessment measures
    - Experimental results
    Date
    7.11.2008 10:40:22
  4. Faro, S.; Francesconi, E.; Sandrucci, V.: Thesauri KOS analysis and selected thesaurus mapping methodology on the project case-study (2007) 0.01
    0.006874024 = product of:
      0.027496096 = sum of:
        0.016360147 = product of:
          0.04908044 = sum of:
            0.04908044 = weight(_text_:problem in 2227) [ClassicSimilarity], result of:
              0.04908044 = score(doc=2227,freq=2.0), product of:
                0.13082431 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.030822188 = queryNorm
                0.375163 = fieldWeight in 2227, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2227)
          0.33333334 = coord(1/3)
        0.01113595 = product of:
          0.03340785 = sum of:
            0.03340785 = weight(_text_:22 in 2227) [ClassicSimilarity], result of:
              0.03340785 = score(doc=2227,freq=2.0), product of:
                0.10793405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.030822188 = queryNorm
                0.30952093 = fieldWeight in 2227, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2227)
          0.33333334 = coord(1/3)
      0.25 = coord(2/8)
    
    Abstract
    - Introduction to the thesaurus interoperability problem
    - Analysis of the thesauri for the project case study
    - Overview of Schema/Ontology Mapping methodologies
    - The proposed approach for thesaurus mapping
    - Standards for implementing the proposed methodology
    Date
    7.11.2008 10:40:22
  5. Knorz, G.; Rein, B.: Semantische Suche in einer Hochschulontologie : Ontologie-basiertes Information-Filtering und -Retrieval mit relationalen Datenbanken (2005) 0.01
    0.006014771 = product of:
      0.024059083 = sum of:
        0.014315128 = product of:
          0.042945385 = sum of:
            0.042945385 = weight(_text_:problem in 4324) [ClassicSimilarity], result of:
              0.042945385 = score(doc=4324,freq=2.0), product of:
                0.13082431 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.030822188 = queryNorm
                0.3282676 = fieldWeight in 4324, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4324)
          0.33333334 = coord(1/3)
        0.009743956 = product of:
          0.029231867 = sum of:
            0.029231867 = weight(_text_:22 in 4324) [ClassicSimilarity], result of:
              0.029231867 = score(doc=4324,freq=2.0), product of:
                0.10793405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.030822188 = queryNorm
                0.2708308 = fieldWeight in 4324, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4324)
          0.33333334 = coord(1/3)
      0.25 = coord(2/8)
    
    Abstract
    Ontologies are used to provide, through semantic grounding, a fundamentally better basis in particular for document retrieval than the current state of the art offers. The paper presents an ontology developed and deployed at FH Darmstadt that is meant to cover the university domain broadly while at the same time describing it semantically in a differentiated way. The difficulty of semantic search is that it must be as easy for information seekers to use as common search engines, while at the same time delivering high-quality results on the basis of the elaborate information model. The paper describes the capabilities provided by the software K-Infinity and the concept by which these capabilities are employed for a semantic search for documents and other information units (persons, events, projects, etc.).
    Date
    11. 2.2011 18:22:25
  6. Koenderink, N.J.J.P.; Assem, M. van; Hulzebos, J.L.; Broekstra, J.; Top, J.L.: ROC: a method for proto-ontology construction by domain experts (2008) 0.01
    0.0053709024 = product of:
      0.02148361 = sum of:
        0.014460463 = product of:
          0.04338139 = sum of:
            0.04338139 = weight(_text_:problem in 4647) [ClassicSimilarity], result of:
              0.04338139 = score(doc=4647,freq=4.0), product of:
                0.13082431 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.030822188 = queryNorm
                0.33160037 = fieldWeight in 4647, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4647)
          0.33333334 = coord(1/3)
        0.007023146 = product of:
          0.021069437 = sum of:
            0.021069437 = weight(_text_:29 in 4647) [ClassicSimilarity], result of:
              0.021069437 = score(doc=4647,freq=2.0), product of:
                0.108422816 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.030822188 = queryNorm
                0.19432661 = fieldWeight in 4647, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4647)
          0.33333334 = coord(1/3)
      0.25 = coord(2/8)
    
    Abstract
    Ontology construction is a labour-intensive and costly process. Even though many formal and semi-formal vocabularies are available, creating an ontology for a specific application is hindered in a number of ways. Firstly, eliciting concepts is a time-consuming and strenuous process. Secondly, it is difficult to keep focus. Thirdly, technical modelling constructs are hard to understand for the uninitiated. We propose ROC as a method to cope with these problems. ROC builds on well-known approaches for ontology construction, but reuses existing sources to generate a repository of proposed associations. ROC assists in efficiently putting forward all relevant concepts and relations by providing a large set of potential candidate associations. Furthermore, rather than using intermediate representations of formal constructs, we confront the domain expert with 'natural-language-like' statements generated from RDF-based triples. Moreover, we strictly separate the roles of problem owner, domain expert and knowledge engineer, each having his own responsibilities and skills. The domain expert and problem owner keep focus by monitoring a well-defined application purpose. We have implemented an initial set of tools to support ROC. This paper describes the ROC method and two application cases in which we evaluate the overall approach.
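    The abstract above mentions confronting the domain expert with 'natural-language-like' statements generated from RDF-based triples; the paper's actual templates are not reproduced on this page, so the following is only an illustrative sketch (the rdflib dependency, the namespace and the example triple are assumptions):
      from rdflib import Graph, Namespace  # rdflib is assumed; any RDF library would do

      EX = Namespace("http://example.org/food#")  # hypothetical domain namespace
      g = Graph()
      g.add((EX.Tomato, EX.isIngredientOf, EX.PastaSauce))

      def verbalize(s, p, o):
          """Render one triple as a rough natural-language-like statement for expert review."""
          label = lambda term: str(term).split("#")[-1]
          # naive camel-case splitting of the predicate: isIngredientOf -> "is ingredient of"
          pred = "".join(c if c.islower() else " " + c.lower() for c in label(p)).strip()
          return f"{label(s)} {pred} {label(o)}."

      for s, p, o in g:
          print(verbalize(s, p, o))  # "Tomato is ingredient of PastaSauce."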
    Date
    29. 7.2011 14:44:56
  7. Palm, F.: QVIZ : Query and context based visualization of time-spatial cultural dynamics (2007) 0.00
    0.0041949344 = product of:
      0.033559475 = sum of:
        0.033559475 = product of:
          0.05033921 = sum of:
            0.025283325 = weight(_text_:29 in 1289) [ClassicSimilarity], result of:
              0.025283325 = score(doc=1289,freq=2.0), product of:
                0.108422816 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.030822188 = queryNorm
                0.23319192 = fieldWeight in 1289, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1289)
            0.025055885 = weight(_text_:22 in 1289) [ClassicSimilarity], result of:
              0.025055885 = score(doc=1289,freq=2.0), product of:
                0.10793405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.030822188 = queryNorm
                0.23214069 = fieldWeight in 1289, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1289)
          0.6666667 = coord(2/3)
      0.125 = coord(1/8)
    
    Content
    Lecture given at the workshop "Extending the multilingual capacity of The European Library in the EDL project", Stockholm, Swedish National Library, 22-23 November 2007.
    Date
    20. 1.2008 17:28:29
  8. Foerster, H. von; Müller, A.; Müller, K.H.: Rück- und Vorschauen : Heinz von Foerster im Gespräch mit Albert Müller und Karl H. Müller (2001) 0.00
    0.004111523 = product of:
      0.016446091 = sum of:
        0.012270111 = product of:
          0.03681033 = sum of:
            0.03681033 = weight(_text_:problem in 5988) [ClassicSimilarity], result of:
              0.03681033 = score(doc=5988,freq=8.0), product of:
                0.13082431 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.030822188 = queryNorm
                0.28137225 = fieldWeight in 5988, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=5988)
          0.33333334 = coord(1/3)
        0.004175981 = product of:
          0.012527943 = sum of:
            0.012527943 = weight(_text_:22 in 5988) [ClassicSimilarity], result of:
              0.012527943 = score(doc=5988,freq=2.0), product of:
                0.10793405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.030822188 = queryNorm
                0.116070345 = fieldWeight in 5988, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=5988)
          0.33333334 = coord(1/3)
      0.25 = coord(2/8)
    
    Content
    Therein: ... "An idea we have been working on is now slowly, really coming out. How does one compute within a semantic structure? We saw it this way: every word, every concept looks like a many-sided element that stretches out its connectivities in all directions and links up with other such many-sided elements. And the operations consist of finding new connections, which are controlled grammatically and come out as language, but are conceptually connected, so that they are linked through an internal semantic structure. That is, for us every concept is a many-sided computer that puts itself in contact with other computers. At the time nobody understood this; perhaps I also did not present it well. But today it turns up everywhere, semantic computation, with a host of parallel machines that all work simultaneously and establish their connections. Our problem even then was: could one do anything to be able to speak with a machine in natural language.
    A few steps further back: librarians have often asked me how one should build a library. We look into a library, they said, as if it were a memory. "That is nice, but do you know how memory works?" "No, but many people say memory works like a large library. You only have to reach in and find the right book." "That is all very beautiful and very kind, but you know, the people who look for a book look for it only because they have a problem and hope to find the answer to that problem in the book. The book is only an intermediate carrier between a question and an answer that may perhaps be found in the book. But the book is not the answer." "Aha, how do you picture that?" We should see the problem this way: the contents of the books, the semantic structure, if one wants to use that expression again, of these books sits in a system, so that I can enter this semantic structure with my question, and the semantic structure of this system tells me: then you must read Karl Müller's works on symbols, and then you will know what you are looking for. I would not have known in advance at all who this Karl Müller is, or that he has written about symbols, etc., but the system can deliver that to me.
    Date
    10. 9.2006 17:22:54
  9. Kottmann, N.; Studer, T.: Improving semantic query answering (2006) 0.00
    0.0040900367 = product of:
      0.032720294 = sum of:
        0.032720294 = product of:
          0.09816088 = sum of:
            0.09816088 = weight(_text_:problem in 3979) [ClassicSimilarity], result of:
              0.09816088 = score(doc=3979,freq=8.0), product of:
                0.13082431 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.030822188 = queryNorm
                0.750326 = fieldWeight in 3979, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3979)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Abstract
    The retrieval problem is one of the main reasoning tasks for knowledge base systems. Given a knowledge base K and a concept C, the retrieval problem consists of finding all individuals a for which K logically entails C(a). We present an approach to answer retrieval queries over (a restriction of) OWL ontologies. Our solution is based on reducing the retrieval problem to a problem of evaluating an SQL query over a database constructed from the original knowledge base. We provide complete answers to retrieval problems. Still, our system performs very well as is shown by a standard benchmark.
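    The abstract states that retrieval (finding all individuals a with K |= C(a)) is reduced to evaluating an SQL query over a database constructed from the knowledge base, but gives no schema. A toy sketch under an assumed relational encoding of the assertions, ignoring TBox reasoning, for a concept such as "Person with some hasChild that is a Student":
      import sqlite3

      con = sqlite3.connect(":memory:")
      con.executescript("""
          -- hypothetical relational encoding of the assertional part of the knowledge base
          CREATE TABLE class_assertion(individual TEXT, concept TEXT);
          CREATE TABLE role_assertion(subject TEXT, role TEXT, object TEXT);
          INSERT INTO class_assertion VALUES ('anna','Person'), ('ben','Person'), ('carl','Student');
          INSERT INTO role_assertion  VALUES ('anna','hasChild','carl');
      """)

      # retrieval for the concept, rewritten as a join over the assertion tables
      rows = con.execute("""
          SELECT DISTINCT c.individual
          FROM class_assertion c
          JOIN role_assertion r  ON r.subject = c.individual AND r.role = 'hasChild'
          JOIN class_assertion s ON s.individual = r.object  AND s.concept = 'Student'
          WHERE c.concept = 'Person'
      """).fetchall()
      print(rows)  # [('anna',)]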
  10. Tunkelang, D.: Dynamic category sets : an approach for faceted search (2006) 0.00
    0.003578782 = product of:
      0.028630257 = sum of:
        0.028630257 = product of:
          0.08589077 = sum of:
            0.08589077 = weight(_text_:problem in 3082) [ClassicSimilarity], result of:
              0.08589077 = score(doc=3082,freq=8.0), product of:
                0.13082431 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.030822188 = queryNorm
                0.6565352 = fieldWeight in 3082, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3082)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Abstract
    In this paper, we present Dynamic Category Sets, a novel approach that addresses the vocabulary problem for faceted data. In their paper on the vocabulary problem, Furnas et al. note that "the keywords that are assigned by indexers are often at odds with those tried by searchers." Faceted search systems exhibit an interesting aspect of this problem: users do not necessarily understand an information space in terms of the same facets as the indexers who designed it. Our approach addresses this problem by employing a data-driven approach to discover sets of values across multiple facets that best match the query. When there are multiple candidates, we offer a clarification dialog that allows the user to disambiguate them.
  11. Bradford, R.B.: Relationship discovery in large text collections using Latent Semantic Indexing (2006) 0.00
    0.003437012 = product of:
      0.013748048 = sum of:
        0.008180073 = product of:
          0.02454022 = sum of:
            0.02454022 = weight(_text_:problem in 1163) [ClassicSimilarity], result of:
              0.02454022 = score(doc=1163,freq=2.0), product of:
                0.13082431 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.030822188 = queryNorm
                0.1875815 = fieldWeight in 1163, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1163)
          0.33333334 = coord(1/3)
        0.005567975 = product of:
          0.016703924 = sum of:
            0.016703924 = weight(_text_:22 in 1163) [ClassicSimilarity], result of:
              0.016703924 = score(doc=1163,freq=2.0), product of:
                0.10793405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.030822188 = queryNorm
                0.15476047 = fieldWeight in 1163, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1163)
          0.33333334 = coord(1/3)
      0.25 = coord(2/8)
    
    Abstract
    This paper addresses the problem of information discovery in large collections of text. For users, one of the key problems in working with such collections is determining where to focus their attention. In selecting documents for examination, users must be able to formulate reasonably precise queries. Queries that are too broad will greatly reduce the efficiency of information discovery efforts by overwhelming the users with peripheral information. In order to formulate efficient queries, a mechanism is needed to automatically alert users regarding potentially interesting information contained within the collection. This paper presents the results of an experiment designed to test one approach to generation of such alerts. The technique of latent semantic indexing (LSI) is used to identify relationships among entities of interest. Entity extraction software is used to pre-process the text of the collection so that the LSI space contains representation vectors for named entities in addition to those for individual terms. In the LSI space, the cosine of the angle between the representation vectors for two entities captures important information regarding the degree of association of those two entities. For appropriate choices of entities, determining the entity pairs with the highest mutual cosine values yields valuable information regarding the contents of the text collection. The test database used for the experiment consists of 150,000 news articles. The proposed approach for alert generation is tested using a counterterrorism analysis example. The approach is shown to have significant potential for aiding users in rapidly focusing on information of potential importance in large text collections. The approach also has value in identifying possible use of aliases.
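    The method rests on cosine similarity between entity representation vectors in an LSI space built from the term-document matrix. A small sketch of that idea using scikit-learn's TruncatedSVD as a stand-in for LSI; the corpus, the entity tokens and the library choice are illustrative assumptions, not the paper's setup:
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.decomposition import TruncatedSVD
      from sklearn.metrics.pairwise import cosine_similarity

      docs = [
          "ACME_Corp opened an office in Springfield",
          "John_Doe was appointed director of ACME_Corp",
          "John_Doe visited Springfield last week",
          "unrelated report about weather patterns",
      ]

      # treat pre-extracted entities as single tokens (underscores kept by the token pattern)
      vec = TfidfVectorizer(token_pattern=r"[A-Za-z_]+")
      X = vec.fit_transform(docs)                    # documents x terms
      svd = TruncatedSVD(n_components=3, random_state=0).fit(X)
      term_vectors = svd.components_.T               # one reduced-space vector per term

      vocab = vec.vocabulary_
      a = term_vectors[vocab["acme_corp"]].reshape(1, -1)
      b = term_vectors[vocab["john_doe"]].reshape(1, -1)
      print(cosine_similarity(a, b))  # larger values suggest a stronger association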
    Source
    Proceedings of the Fourth Workshop on Link Analysis, Counterterrorism, and Security, SIAM Data Mining Conference, Bethesda, MD, 20-22 April, 2006. [http://www.siam.org/meetings/sdm06/workproceed/Link%20Analysis/15.pdf]
  12. Panzer, M.: Designing identifiers for the DDC (2007) 0.00
    0.0033879164 = product of:
      0.027103331 = sum of:
        0.027103331 = product of:
          0.040654995 = sum of:
            0.012641663 = weight(_text_:29 in 1752) [ClassicSimilarity], result of:
              0.012641663 = score(doc=1752,freq=2.0), product of:
                0.108422816 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.030822188 = queryNorm
                0.11659596 = fieldWeight in 1752, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1752)
            0.028013334 = weight(_text_:22 in 1752) [ClassicSimilarity], result of:
              0.028013334 = score(doc=1752,freq=10.0), product of:
                0.10793405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.030822188 = queryNorm
                0.2595412 = fieldWeight in 1752, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1752)
          0.6666667 = coord(2/3)
      0.125 = coord(1/8)
    
    Content
    Some examples of identifiers for concepts follow:
    - <http://dewey.info/concept/338.4/en/edn/22/> This identifier is used to retrieve or identify the 338.4 concept in the English-language version of Edition 22.
    - <http://dewey.info/concept/338.4/de/edn/22/> This identifier is used to retrieve or identify the 338.4 concept in the German-language version of Edition 22.
    - <http://dewey.info/concept/333.7-333.9/> This identifier is used to retrieve or identify the 333.7-333.9 concept across all editions and language versions.
    - <http://dewey.info/concept/333.7-333.9/about.skos> This identifier is used to retrieve a SKOS representation of the 333.7-333.9 concept (using the "resource" element).
    There are several open issues at this preliminary stage of development:
    - Use cases: URIs need to represent the range of statements or questions that could be submitted to a Dewey web service. Therefore, it seems that some general questions have to be answered first: What information does an agent have when coming to a Dewey web service? What kind of questions will such an agent ask?
    - Placement of the {locale} component: It is still an open question whether the {locale} component should be placed after the {version} component instead (<http://dewey.info/concept/338.4/edn/22/en>) to emphasize that the most important instantiation of a Dewey class is its edition, not its language version. From a services point of view, however, it could make more sense to keep the current arrangement, because users are more likely to come to the service with a present understanding of the language version they are seeking without knowing the specifics of a certain edition in which they are trying to find topics.
    - Identification of other Dewey entities: The goal is to create a locator that does not answer all, but a lot of the questions that could be asked about the DDC. Which entities are missing but should be surfaced for services or user agents? How will those services or agents interact with them? Should some entities be rendered in a different way than presented? For example, (how) should the DDC Summaries be retrievable? Would it be necessary to make the DDC Manual accessible through this identifier structure?
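    The excerpt above follows a URI pattern of roughly the form http://dewey.info/concept/{notation}/{locale}/edn/{edition}/. A small helper that assembles identifiers following that quoted pattern; this is a sketch of the pattern only and says nothing about what the dewey.info service actually resolves today:
      def dewey_concept_uri(notation, locale=None, edition=None):
          """Build an identifier following the quoted pattern, e.g.
          http://dewey.info/concept/338.4/en/edn/22/ (locale and edition are optional)."""
          parts = ["http://dewey.info/concept", notation]
          if locale:
              parts.append(locale)             # language version, e.g. 'en' or 'de'
          if edition:
              parts += ["edn", str(edition)]   # edition, e.g. 22
          return "/".join(parts) + "/"

      print(dewey_concept_uri("338.4", "en", 22))  # http://dewey.info/concept/338.4/en/edn/22/
      print(dewey_concept_uri("333.7-333.9"))      # http://dewey.info/concept/333.7-333.9/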
    Date
    21. 3.2008 19:29:28
  13. Wake, S.; Nicholson, D.: HILT: High-Level Thesaurus Project : building consensus for interoperable subject access across communities (2001) 0.00
    0.0030675277 = product of:
      0.024540221 = sum of:
        0.024540221 = product of:
          0.07362066 = sum of:
            0.07362066 = weight(_text_:problem in 1224) [ClassicSimilarity], result of:
              0.07362066 = score(doc=1224,freq=18.0), product of:
                0.13082431 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.030822188 = queryNorm
                0.5627445 = fieldWeight in 1224, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1224)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Abstract
    This article provides an overview of the work carried out by the HILT Project <http://hilt.cdlr.strath.ac.uk> in making recommendations towards interoperable subject access, or cross-searching and browsing distributed services amongst the archives, libraries, museums and electronic services sectors. The article details consensus achieved at the 19 June 2001 HILT Workshop and discusses the HILT Stakeholder Survey. In 1999 Péter Jascó wrote that "savvy searchers" are asking for direction. Three years later the scenario he describes, that of searchers cross-searching databases where the subject vocabulary used in each case is different, still rings true. Jascó states that, in many cases, databases do not offer the necessary aids required to use the "preferred terms of the subject-controlled vocabulary". The databases to which Jascó refers are Dialog and DataStar. However, the situation he describes applies as well to the area that HILT is researching: that of cross-searching and browsing by subject across databases and catalogues in archives, libraries, museums and online information services. So how does a user access information on a particular subject when it is indexed across a multitude of services under different, but quite often similar, subject terms? Also, if experienced searchers are having problems, what about novice searchers? As information professionals, it is our role to investigate such problems and recommend solutions. Although there is no hard empirical evidence one way or another, HILT participants agree that the problem for users attempting to search across databases is real. There is a strong likelihood that users are disadvantaged by the use of different subject terminology combined with a multitude of different practices taking place within the archive, library, museums and online communities. Arguably, failure to address this problem of interoperability undermines the value of cross-searching and browsing facilities, and wastes public money because relevant resources are 'hidden' from searchers. HILT is charged with analysing this broad problem through qualitative methods, with the main aim of presenting a set of recommendations on how to make it easier to cross-search and browse distributed services. Because this is a very large problem composed of many strands, HILT recognizes that any proposed solutions must address a host of issues. Recommended solutions must be affordable, sustainable, politically acceptable, useful, future-proof and international in scope. It also became clear to the HILT team that progress toward finding solutions to the interoperability problem could only be achieved through direct dialogue with other parties keen to solve this problem, and that the problem was as much about consensus building as it was about finding a solution. This article describes how HILT approached the cross-searching problem; how it investigated the nature of the problem, detailing results from the HILT Stakeholder Survey; and how it achieved consensus through the recent HILT Workshop.
  14. Facet analytical theory for managing knowledge structure in the humanities : FATKS (2003) 0.00
    0.0028092582 = product of:
      0.022474065 = sum of:
        0.022474065 = product of:
          0.067422196 = sum of:
            0.067422196 = weight(_text_:29 in 2526) [ClassicSimilarity], result of:
              0.067422196 = score(doc=2526,freq=2.0), product of:
                0.108422816 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.030822188 = queryNorm
                0.6218451 = fieldWeight in 2526, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.125 = fieldNorm(doc=2526)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Date
    29. 8.2004 9:17:18
  15. ws: ¬Das Große Wissen.de Lexikon 2004 (2003) 0.00
    0.0027966227 = product of:
      0.022372982 = sum of:
        0.022372982 = product of:
          0.03355947 = sum of:
            0.016855549 = weight(_text_:29 in 1079) [ClassicSimilarity], result of:
              0.016855549 = score(doc=1079,freq=2.0), product of:
                0.108422816 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.030822188 = queryNorm
                0.15546128 = fieldWeight in 1079, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1079)
            0.016703924 = weight(_text_:22 in 1079) [ClassicSimilarity], result of:
              0.016703924 = score(doc=1079,freq=2.0), product of:
                0.10793405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.030822188 = queryNorm
                0.15476047 = fieldWeight in 1079, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1079)
          0.6666667 = coord(2/3)
      0.125 = coord(1/8)
    
    Date
    20. 3.2004 12:58:22
    Footnote
    Reviewed under the title "Die Welt ist eine Scheibe" in: CD-Info. 2004, H.1, S.29 (ws): "With its 117,000 entries the encyclopedia corresponds in scope to a printed encyclopedia of roughly 24 volumes, and it combines up-to-date content with a wealth of multimedia elements such as audio documents, images and videos. Thanks to sophisticated search functions, an online update service and supplementary links to the Internet, the encyclopedia is suitable both for looking things up and for browsing. Besides the encyclopedia, the DVD also contains a dictionary of foreign words, a four-language dictionary (E, F, I, E) and an up-to-date world atlas. The clearly arranged user interface offers the user several points of entry: "Wissen A - Z" provides a keyword and full-text search, "Timeline" presents the history of humankind from the ancient Egyptians to the fall of Baghdad on a time line, "Themenreisen" presents special subject areas, such as "Rise and Fall of the Soviet Union", compactly with all associated encyclopedia entries and Internet links, and in the "Mediengalerie" the more than 16,000 included media elements are laid out clearly, sorted by subject area or media type."
  16. Mitchell, J.S.: DDC 22 : an introduction (2003) 0.00
    0.0027235185 = product of:
      0.021788148 = sum of:
        0.021788148 = product of:
          0.06536444 = sum of:
            0.06536444 = weight(_text_:22 in 1936) [ClassicSimilarity], result of:
              0.06536444 = score(doc=1936,freq=10.0), product of:
                0.10793405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.030822188 = queryNorm
                0.6055961 = fieldWeight in 1936, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1936)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Abstract
    Dewey Decimal Classification and Relative Index, Edition 22 (DDC 22) will be issued simultaneously in print and web versions in July 2003. The new edition is the first full print update to the Dewey Decimal Classification system in seven years-it includes several significant updates and many new numbers and topics. DDC 22 also features some fundamental structural changes that have been introduced with the goals of promoting classifier efficiency and improving the DDC for use in a variety of applications in the web environment. Most importantly, the content of the new edition has been shaped by the needs and recommendations of Dewey users around the world. The worldwide user community has an important role in shaping the future of the DDC.
    Object
    DDC-22
  17. Frerichs, S.: Grundlagen des erkenntnistheoretischen Konstruktivismus : eine allgemein verständliche Einführung für Laien (2000) 0.00
    0.002483057 = product of:
      0.019864457 = sum of:
        0.019864457 = product of:
          0.05959337 = sum of:
            0.05959337 = weight(_text_:29 in 4395) [ClassicSimilarity], result of:
              0.05959337 = score(doc=4395,freq=4.0), product of:
                0.108422816 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.030822188 = queryNorm
                0.5496386 = fieldWeight in 4395, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4395)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Date
    9. 8.2018 11:29:29
  18. Van der Veer Martens, B.: Do citation systems represent theories of truth? (2001) 0.00
    0.0024607205 = product of:
      0.019685764 = sum of:
        0.019685764 = product of:
          0.059057288 = sum of:
            0.059057288 = weight(_text_:22 in 3925) [ClassicSimilarity], result of:
              0.059057288 = score(doc=3925,freq=4.0), product of:
                0.10793405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.030822188 = queryNorm
                0.54716086 = fieldWeight in 3925, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3925)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Date
    22. 7.2006 15:22:28
  19. Hinkelmann, K.: Ontopia Omnigator : ein Werkzeug zur Einführung in Topic Maps (20xx) 0.00
    0.0024581011 = product of:
      0.01966481 = sum of:
        0.01966481 = product of:
          0.058994424 = sum of:
            0.058994424 = weight(_text_:29 in 3162) [ClassicSimilarity], result of:
              0.058994424 = score(doc=3162,freq=2.0), product of:
                0.108422816 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.030822188 = queryNorm
                0.5441145 = fieldWeight in 3162, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3162)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Date
    4. 9.2011 12:29:09
  20. Remler, A.: Lässt sich wissenschaftliche Leistung messen? : Wer zitiert wird, liegt vorne - in den USA berechnet man Forschungsleistung nach einem Zitat-Index (2000) 0.00
    0.0024581011 = product of:
      0.01966481 = sum of:
        0.01966481 = product of:
          0.058994424 = sum of:
            0.058994424 = weight(_text_:29 in 5392) [ClassicSimilarity], result of:
              0.058994424 = score(doc=5392,freq=2.0), product of:
                0.108422816 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.030822188 = queryNorm
                0.5441145 = fieldWeight in 5392, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.109375 = fieldNorm(doc=5392)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Date
    30.10.2000 17:47:29

Languages

  • e 110
  • d 38
  • el 2

Types

  • a 43
  • i 8
  • m 2
  • n 1
  • p 1
  • r 1
  • s 1
  • x 1