Search (10 results, page 1 of 1)

  • type_ss:"r"
  • year_i:[2000 TO 2010}
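
The two facets above are standard Solr filter queries; note the half-open range `[2000 TO 2010}`, where the closing `}` makes 2010 exclusive. A minimal sketch of how such a request might be assembled; the endpoint-free `q` value and the `debugQuery` flag (which produces explain trees like those below) are assumptions, not taken from this page:

```python
from urllib.parse import urlencode

# Only the two fq values come from the facets shown above; q is an assumed
# free-text query. type_ss is a multi-valued string field, year_i an integer field.
params = {
    "q": "ontology",                                  # assumed free-text query
    "fq": ['type_ss:"r"', "year_i:[2000 TO 2010}"],   # '}' makes 2010 exclusive
    "debugQuery": "true",                             # asks Solr for score explanations
}
query_string = urlencode(params, doseq=True)
print(query_string)
```

The resulting query string can be appended to any Solr `select` handler URL; `doseq=True` emits one `fq=` pair per filter, which is how Solr expects multiple filter queries.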
  1. Colomb, R.M.: Quality of ontologies in interoperating information systems (2002) 0.02
    0.021307886 = product of:
      0.04261577 = sum of:
        0.04261577 = product of:
          0.08523154 = sum of:
            0.08523154 = weight(_text_:systems in 7858) [ClassicSimilarity], result of:
              0.08523154 = score(doc=7858,freq=10.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.5314657 = fieldWeight in 7858, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=7858)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The focus of this paper is on the quality of ontologies as they relate to interoperating information systems. Quality is not a property of something but a judgment, so it must be relative to some purpose, and generally involves recognition of design tradeoffs. Ontologies used for information-systems interoperability have much in common with classification systems in information science, knowledge-based systems, and programming languages, and inherit quality characteristics from each of these older areas. Factors peculiar to the new field lead to some additional characteristics relevant to quality, some of which are more profitably considered quality aspects not of the ontology as such, but of the environment through which the ontology is made available to its users. Suggestions are presented as to how to use these factors in producing quality ontologies.
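The explain tree shown for this result can be recomputed by hand. A minimal sketch, assuming Lucene's ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1))) and the coord(1/2) factor applied at both levels of the tree:

```python
import math

def classic_score(freq, doc_freq, max_docs, query_norm, field_norm, coord=0.25):
    """Recompute one Lucene ClassicSimilarity (TF-IDF) explain score."""
    tf = math.sqrt(freq)                              # tf(freq) = sqrt(freq)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # idf(docFreq, maxDocs)
    query_weight = idf * query_norm                   # queryWeight = idf * queryNorm
    field_weight = tf * idf * field_norm              # fieldWeight = tf * idf * fieldNorm
    return query_weight * field_weight * coord        # coord(1/2) * coord(1/2) = 0.25

# Values copied from the explain tree of result 1, weight(_text_:systems in 7858):
score = classic_score(freq=10.0, doc_freq=5561, max_docs=44218,
                      query_norm=0.052184064, field_norm=0.0546875)
print(score)  # ≈ 0.021307886, matching the displayed score
```

The same function reproduces every explain tree on this page: only `freq`, `doc_freq`, `field_norm`, and (for the `_text_:22` term) the idf inputs change between results.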
  2. Carey, K.; Stringer, R.: ¬The power of nine : a preliminary investigation into navigation strategies for the new library with special reference to disabled people (2000) 0.02
    0.021210661 = product of:
      0.042421322 = sum of:
        0.042421322 = product of:
          0.084842645 = sum of:
            0.084842645 = weight(_text_:22 in 234) [ClassicSimilarity], result of:
              0.084842645 = score(doc=234,freq=2.0), product of:
                0.1827397 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052184064 = queryNorm
                0.46428138 = fieldWeight in 234, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=234)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Pages
    22 p.
  3. Hodge, G.: Systems of knowledge organization for digital libraries : beyond traditional authority files (2000) 0.02
    0.0182639 = product of:
      0.0365278 = sum of:
        0.0365278 = product of:
          0.0730556 = sum of:
            0.0730556 = weight(_text_:systems in 4723) [ClassicSimilarity], result of:
              0.0730556 = score(doc=4723,freq=10.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.45554203 = fieldWeight in 4723, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4723)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Access to digital materials continues to be an issue of great significance in the development of digital libraries. The proliferation of information in the networked digital environment poses challenges as well as opportunities. The author reports on a wide array of activities in the field. While this publication is not intended to be exhaustive, the reader will find, in a single work, an overview of systems of knowledge organization and pertinent examples of their application to digital materials.
    Content
    (1) Knowledge organization systems: an overview; (2) Linking digital library resources to related resources; (3) Making resources accessible to other communities; (4) Planning and implementing knowledge organization systems in digital libraries; (5) The future of knowledge organization systems on the Web
  4. Hellweg, H.; Krause, J.; Mandl, T.; Marx, J.; Müller, M.N.O.; Mutschke, P.; Strötgen, R.: Treatment of semantic heterogeneity in information retrieval (2001) 0.01
    0.010890487 = product of:
      0.021780973 = sum of:
        0.021780973 = product of:
          0.043561947 = sum of:
            0.043561947 = weight(_text_:systems in 6560) [ClassicSimilarity], result of:
              0.043561947 = score(doc=6560,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.2716328 = fieldWeight in 6560, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6560)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Nowadays, users of information services are faced with highly decentralised, heterogeneous document sources with differing forms of content analysis. Semantic heterogeneity occurs, for example, when resources that use different systems for content description are searched through a single query system. This report describes several approaches to handling semantic heterogeneity used in projects of the German Social Science Information Centre.
  5. Euzenat, J.; Bach, T.Le; Barrasa, J.; Bouquet, P.; Bo, J.De; Dieng, R.; Ehrig, M.; Hauswirth, M.; Jarrar, M.; Lara, R.; Maynard, D.; Napoli, A.; Stamou, G.; Stuckenschmidt, H.; Shvaiko, P.; Tessaris, S.; Acker, S. Van; Zaihrayeu, I.: State of the art on ontology alignment (2004) 0.01
    0.010890487 = product of:
      0.021780973 = sum of:
        0.021780973 = product of:
          0.043561947 = sum of:
            0.043561947 = weight(_text_:systems in 172) [ClassicSimilarity], result of:
              0.043561947 = score(doc=172,freq=8.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.2716328 = fieldWeight in 172, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.03125 = fieldNorm(doc=172)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In this document we provide an overall view of the state of the art in ontology alignment. It is organised as a description of the need for ontology alignment, a presentation of the techniques currently in use for ontology alignment, and a presentation of existing systems. The state of the art is not restricted to any discipline and considers, as a form of ontology alignment, work such as schema matching in the database area. Some heterogeneity problems on the semantic web can be solved by aligning heterogeneous ontologies. This is illustrated through a number of use cases of ontology alignment. Aligning ontologies consists of providing the corresponding entities in these ontologies. This process is precisely defined in deliverable D2.2.1. The current deliverable presents the many techniques currently used for implementing this process. These techniques are classified along the many features that can be found in ontologies (labels, structures, instances, semantics). They draw on many different disciplines such as statistics, machine learning, and data analysis. The alignment itself is obtained by combining these techniques towards a particular goal (obtaining an alignment with particular features, optimising some criterion). Several combination techniques are also presented. Finally, these techniques have been tried out in various systems for ontology alignment or schema matching. Several such systems are presented briefly in the last section and characterised by the techniques they rely on. The conclusion is that many techniques are available for achieving ontology alignment and many systems have been developed based on them. However, these implementations actually provide few comparisons and little integration. This deliverable serves as a basis for considering further action along these two lines. It provides a first inventory of what should be evaluated and suggests what evaluation criteria can be used.
  6. Binder, G.; Stahl, M.; Faulborn, L.: Vergleichsuntersuchung MESSENGER-FULCRUM (2000) 0.01
    0.009529176 = product of:
      0.019058352 = sum of:
        0.019058352 = product of:
          0.038116705 = sum of:
            0.038116705 = weight(_text_:systems in 4885) [ClassicSimilarity], result of:
              0.038116705 = score(doc=4885,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.23767869 = fieldWeight in 4885, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4885)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In a user test conducted as part of the GIRT project, the performance of two retrieval languages for database searching was evaluated. The results are presented in this report. The FULCRUM system is based on automatic indexing and returns a result list ranked by statistical relevance. The standard free-text search of the MESSENGER system was supplemented by descriptors assigned intellectually by the IZ. The results show that, within FULCRUM, the test subjects preferred Boolean exact-match retrieval over the vector-space model (best-match approach). The hybrid of intellectual and automatic indexing implemented in MESSENGER proved superior to the purely quantitative-statistical approach in terms of recall.
  7. Hildebrand, M.; Ossenbruggen, J. van; Hardman, L.: ¬An analysis of search-based user interaction on the Semantic Web (2007) 0.01
    0.009529176 = product of:
      0.019058352 = sum of:
        0.019058352 = product of:
          0.038116705 = sum of:
            0.038116705 = weight(_text_:systems in 59) [ClassicSimilarity], result of:
              0.038116705 = score(doc=59,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.23767869 = fieldWeight in 59, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=59)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Many Semantic Web applications provide access to their resources through text-based search queries, using explicit semantics to improve the search results. This paper provides an analysis of the current state of the art in semantic search, based on 35 existing systems. We identify different types of semantic search features that are used during query construction, the core search process, the presentation of the search results and user feedback on query and results. For each of these, we consider the functionality that the system provides and how this is made available through the user interface.
  8. Puzicha, J.: Informationen finden! : Intelligente Suchmaschinentechnologie & automatische Kategorisierung (2007) 0.01
    0.008167865 = product of:
      0.01633573 = sum of:
        0.01633573 = product of:
          0.03267146 = sum of:
            0.03267146 = weight(_text_:systems in 2817) [ClassicSimilarity], result of:
              0.03267146 = score(doc=2817,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.2037246 = fieldWeight in 2817, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2817)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    As explained in this text, the effectiveness of search and classification systems is determined by: (1) the task at hand, (2) the accuracy of the system, (3) the degree of automation to be achieved, and (4) the ease of integration into existing systems. These criteria assume that every system, regardless of its technology, can meet basic product requirements with respect to functionality, scalability, and input method. These product properties are described in more detail in the Recommind product literature. Beyond these capabilities, however, the preceding discussion should have revealed some clear trends. It is not surprising that recent developments in machine learning and other areas of computer science provide a theoretical starting point for the development of search-engine and classification technology. In particular, recent advances in statistical methods (PLSA) and other mathematical tools (SVMs) have achieved breakthrough-level result quality. Added to this are the flexibility gained through the self-training and category recognition of PLSA systems, as well as a new generation of previously unattained productivity improvements.
  9. Sykes, J.: Making solid business decisions through intelligent indexing taxonomies : a white paper prepared for Factiva, Factiva, a Dow Jones and Reuters Company (2003) 0.01
    0.007700737 = product of:
      0.015401474 = sum of:
        0.015401474 = product of:
          0.030802948 = sum of:
            0.030802948 = weight(_text_:systems in 721) [ClassicSimilarity], result of:
              0.030802948 = score(doc=721,freq=4.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.19207339 = fieldWeight in 721, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.03125 = fieldNorm(doc=721)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In 2000, Factiva published "The Value of Indexing," a white paper emphasizing the strategic importance of accurate categorization, based on a robust taxonomy, for later retrieval of documents stored in commercial or in-house content repositories. Since that time, there has been resounding agreement between persons who use Web-based systems and those who design these systems that search engines alone are not the answer for effective information retrieval. High-quality categorization is crucial if users are to be able to find the right answers in repositories of articles and documents that are expanding at phenomenal rates. Companies continue to invest in technologies that will help them organize and integrate their content. A March 2002 article in EContent suggests a typical taxonomy implementation usually costs around $100,000. The article also cites a Merrill Lynch study that predicts the market for search and categorization products, now at about $600 million, will more than double by 2005. Classification activities are not new. In the third century B.C., Callimachus of Cyrene managed the ancient Library of Alexandria. To help scholars find items in the collection, he created an index of all the scrolls organized according to a subject taxonomy. Factiva's parent companies, Dow Jones and Reuters, each have more than 20 years of experience with developing taxonomies and painstaking manual categorization processes and also have a solid history with automated categorization techniques. This experience and expertise put Factiva at the leading edge of developing and applying categorization technology today. This paper will update readers about enhancements made to the Factiva Intelligent Indexing™ taxonomy. It examines the value these enhancements bring to Factiva's news and business information service, and the value brought to clients who license the Factiva taxonomy as a fundamental component of their own Enterprise Knowledge Architecture. There is a behind-the-scenes look at how Factiva classifies a huge stream of incoming articles published in a variety of formats and languages. The paper concludes with an overview of new Factiva services and solutions that are designed specifically to help clients improve productivity and make solid business decisions by precisely finding information in their own ever-expanding content repositories.
  10. Bredemeier, W.; Stock, M.; Stock, W.G.: ¬Die Branche elektronischer Geschäftsinformationen in Deutschland 2000/2001 (2001) 0.01
    0.0068065543 = product of:
      0.013613109 = sum of:
        0.013613109 = product of:
          0.027226217 = sum of:
            0.027226217 = weight(_text_:systems in 621) [ClassicSimilarity], result of:
              0.027226217 = score(doc=621,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.1697705 = fieldWeight in 621, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=621)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    The German market for electronic information services in 2000 - results of a revenue survey - by Willi Bredemeier: - A sound methodology that accounts for the specifics of the EIS market and current developments - partial comparability of the data from 1989 onwards - Extensive quantitative market transparency, since the reader can fully trace how the market and sub-market figures are aggregated from company-level data - 93 partly detailed tables, mostly on individual information providers, with particular attention to the fiscal years 2000 and 1999, divided into the areas overall market for electronic information services, Datev, real-time financial information, news agencies, credit information, company and product information, other business information, legal information, scientific-technical-medical information - intellectual property, consumer services, neighbouring markets - analysis of current market trends. The quality of professional company information on the World Wide Web - by Mechtild Stock and Wolfgang G. Stock: - Continuation of the quality debate and development of a system of quality criteria for information offerings, applied to company information on the Internet - A "quality panel" for the areas of credit information, short company dossiers, product information, and address information, covering the providers Bürgel, Creditreform, Dun & Bradstreet Deutschland, ABC online, ALLECO, Hoppenstedt Firmendatenbank, Who is Who in Multimedia, Kompass Deutschland, Sachon Industriedaten, Wer liefert was?, AZ Bertelsmann, Schober.com - Highly differentiated tests that help customers choose between offerings and give providers pointers towards qualitative improvements - Detailed information on the industry and product classification systems in use - Rankings of the company information providers overall as well as by database, retrieval system, and website, with detailed information on all quality dimensions