Search (435 results, page 22 of 22)

  • Active filter: type_ss:"x"
  1. Kirk, J.: Theorising information use : managers and their work (2002) 0.00
    5.8168895E-4 = product of:
      0.0023267558 = sum of:
        0.0023267558 = product of:
          0.0069802674 = sum of:
            0.0069802674 = weight(_text_:a in 560) [ClassicSimilarity], result of:
              0.0069802674 = score(doc=560,freq=4.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.12611452 = fieldWeight in 560, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=560)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
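The score breakdown above is Lucene's ClassicSimilarity explain output. As a rough sanity check, the leaf score can be recomputed from the factors it reports; the sketch below takes queryNorm as given and assumes ClassicSimilarity's standard tf and idf formulas (the engine version is not stated on this page):

```python
import math

# Recompute the explain tree's leaf for hit 1 (doc 560) from its factors.
freq, doc_freq, max_docs = 4.0, 37942, 44218
query_norm, field_norm = 0.04800207, 0.0546875

tf = math.sqrt(freq)                           # 2.0 = tf(freq=4.0)
idf = 1 + math.log(max_docs / (doc_freq + 1))  # ~1.153047 = idf(...)
query_weight = idf * query_norm                # ~0.055348642 = queryWeight
field_weight = tf * idf * field_norm           # ~0.12611452 = fieldWeight
score = query_weight * field_weight / 3 / 4    # coord(1/3) * coord(1/4)
print(score)   # ~5.8169e-4, the 5.8168895E-4 total shown above
```

The tiny residual difference from the printed total comes from Lucene's single-precision floats.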
    
    Abstract
    The focus of this thesis is information use. Although a key concept in information behaviour, information use has received little attention from information science researchers. Studies of other key concepts such as information need and information seeking are dominant in information behaviour research. Information use is an area of interest to information professionals who rely on research outcomes to shape their practice. There are few empirical studies of how people actually use information that might guide and refine the development of information systems, products and services.
    Content
    A thesis submitted to the University of Technology, Sydney in fulfilment of the requirements for the degree of Doctor of Philosophy. - See: http://epress.lib.uts.edu.au/dspace/bitstream/2100/309/2/02whole.pdf.
  2. Garfield, E.: An algorithm for translating chemical names to molecular formulas (1961) 0.00
    Abstract
    This dissertation discusses, explains, and demonstrates a new algorithm for translating chemical nomenclature into molecular formulas. In order to place the study in its proper context and perspective, the historical development of nomenclature is first discussed, as well as other related aspects of the chemical information problem. The relationship of nomenclature to modern linguistic studies is then introduced. The relevance of structural linguistic procedures to the study of chemical nomenclature is shown. The methods of the linguist are illustrated by examples from chemical discourse. The algorithm is then explained, first for the human translator and then for use by a computer. Flow diagrams for the computer syntactic analysis, dictionary look-up routine, and formula calculation routine are included. The sampling procedure for testing the algorithm is explained and, finally, conclusions are drawn with respect to the general validity of the method and the direction that might be taken for future research. A summary of modern chemical nomenclature practice is appended, primarily for use by the reader who is not familiar with chemical nomenclature.
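The dictionary look-up and formula calculation routines described above can be illustrated with a toy sketch; the morpheme dictionaries and example names below are invented for this illustration and are far simpler than the dissertation's algorithm:

```python
# Toy name-to-formula translator in the spirit of the dissertation's
# dictionary look-up and formula calculation routines (illustrative only).
MULTIPLIERS = {"di": 2, "tri": 3, "tetra": 4}
SUBSTITUENTS = {"chloro": "Cl", "bromo": "Br", "fluoro": "F"}
PARENTS = {"methane": {"C": 1, "H": 4}, "ethane": {"C": 2, "H": 6}}

def to_formula(name):
    counts = {}          # substituent atoms collected while scanning
    rest = name
    while True:
        mult = 1
        for prefix, n in MULTIPLIERS.items():
            if rest.startswith(prefix):
                mult, rest = n, rest[len(prefix):]
                break
        for sub, atom in SUBSTITUENTS.items():
            if rest.startswith(sub):
                counts[atom] = counts.get(atom, 0) + mult
                rest = rest[len(sub):]
                break
        else:
            break        # no substituent prefix left; rest is the parent
    atoms = dict(PARENTS[rest])
    atoms["H"] -= sum(counts.values())   # each substituent replaces one H
    atoms.update(counts)
    order = ["C", "H"] + sorted(a for a in atoms if a not in ("C", "H"))
    return "".join(a + (str(atoms[a]) if atoms[a] > 1 else "")
                   for a in order if atoms.get(a, 0) > 0)

print(to_formula("dichloromethane"))   # CH2Cl2
```

A real translator must of course handle locants, ring systems, and the many irregularities of chemical discourse that the dissertation analyses linguistically.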
  3. Mönnich, M.W.: Personalcomputer in Bibliotheken : Arbeitsplätze für Benutzer (1991) 0.00
    Classification
    Bib A 589 Kleincomputer
    SBB
    Bib A 589 Kleincomputer
  4. Cieloch, A.: Erarbeitung der Grundlage für die Implementierung der Bilddatenbank IPS im Unternehmens- und Pressearchiv der OTTO Group (2004) 0.00
  5. Schmidtke, R.: Auswahl und Implementierung eines Onlinesystems zur Digitalisierung und Umstrukturierung des firmeninternen Branchenarchivs einer Unternehmensberatung für Mergers & Akquisitions (2004) 0.00
    Abstract
    For the industry archive of the management consultancy »Angermann M&A International GmbH«, existing tasks, structures and requirement profiles are analysed with a view to future digitisation and restructuring. Using the catalogue of criteria obtained in this way, several archive systems are compared, evaluated and a selection is made; the »most promising« (?) program is examined more closely.
  6. Czechowski, M.: Konzept zur Realisierung eines digitalen Firmenarchivs am Beispiel Deutsche Lufthansa AG (2006) 0.00
    Abstract
    This diploma thesis shows how a digital, web-based business archive can be realised despite limited resources. The problems of long-term preservation are also addressed, and the new ISO standard PDF/A is presented as a new approach to solving them. In addition, an information retrieval system is presented that carries out the sometimes labour-intensive subject indexing on its own and offers both laypersons and specialists an adequate means of searching. The concept is aimed exclusively at private business archives, since legal aspects (which play a role, for example, in audit-proof archiving) are not taken into account.
  7. Noy, N.F.: Knowledge representation for intelligent information retrieval in experimental sciences (1997) 0.00
    Abstract
    More and more information is available on-line every day. The greater the amount of on-line information, the greater the demand for tools that process and disseminate this information. Processing electronic information in the form of text and answering users' queries about that information intelligently is one of the great challenges in natural language processing and information retrieval. The research presented in this talk is centered on the latter of these two tasks: intelligent information retrieval. In order for information to be retrieved, it first needs to be formalized in a database or knowledge base. The ontology for this formalization and assumptions it is based on are crucial to successful intelligent information retrieval. We have concentrated our effort on developing an ontology for representing knowledge in the domains of experimental sciences, molecular biology in particular. We show that existing ontological models cannot be readily applied to represent this domain adequately. For example, the fundamental notion of ontology design that every "real" object is defined as an instance of a category seems incompatible with the universe where objects can change their category as a result of experimental procedures. Another important problem is representing complex structures such as DNA, mixtures, populations of molecules, etc., that are very common in molecular biology. We present extensions that need to be made to an ontology to cover these issues: the representation of transformations that change the structure and/or category of their participants, and the component relations and spatial structures of complex objects. We demonstrate examples of how the proposed representations can be used to improve the quality and completeness of answers to user queries; discuss techniques for evaluating ontologies and show a prototype of an Information Retrieval System that we developed.
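The proposed extension for transformations that change an object's category can be sketched as follows; the class layout, procedure names and sample data are invented for this illustration, not taken from the talk:

```python
from dataclasses import dataclass, field

# Sketch of an instance whose category can change as the result of an
# experimental procedure, with the transformation and the resulting
# part structure (component relations) recorded explicitly.
@dataclass
class Entity:
    name: str
    category: str
    parts: list = field(default_factory=list)     # component relations
    history: list = field(default_factory=list)   # (procedure, old category)

def transform(entity, procedure, new_category, new_parts=()):
    """Apply a transformation that may change the entity's category."""
    entity.history.append((procedure, entity.category))
    entity.category = new_category
    entity.parts.extend(new_parts)
    return entity

sample = Entity("sample-1", "DNA molecule")
transform(sample, "restriction digest", "mixture",
          [Entity("frag-1", "DNA fragment"), Entity("frag-2", "DNA fragment")])
print(sample.category)   # mixture
```

Unlike a fixed instance-of-category assignment, the record of transformations lets queries reason about what the sample was before and after the procedure.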
  8. Martins, S. de Castro: Modelo conceitual de ecossistema semântico de informações corporativas para aplicação em objetos multimídia (2019) 0.00
    Abstract
    Information management in corporate environments is a growing problem as companies' information assets grow, along with the need to use them in their operations. Several management models have been practiced on the most diverse fronts, practices that integrate so-called Enterprise Content Management. This study proposes a conceptual model of a semantic corporate information ecosystem, based on the Universal Document Model proposed by Dagobert Soergel. It focuses on unstructured information objects, especially multimedia, increasingly used in corporate environments, adding semantics and expanding their retrieval potential in the composition and reuse of dynamic documents on demand. The proposed model considers stable elements in the organizational environment, such as actors, processes, business metadata and information objects, as well as some basic infrastructures of the corporate information environment. The main objective is to establish a conceptual model that adds semantic intelligence to information assets, leveraging pre-existing infrastructure in organizations, integrating and relating objects to other objects, actors and business processes. The approach methodology considered the state of the art of Information Organization, Representation and Retrieval, Organizational Content Management and Semantic Web technologies, in the scientific literature, as bases for the establishment of an integrative conceptual model. Therefore, the research will be qualitative and exploratory. The predicted steps of the model are: Environment, Data Type and Source Definition, Data Distillation, Metadata Enrichment, and Storage. As a result, in theoretical terms, the extended model allows heterogeneous and unstructured data to be processed according to the established cut-outs and through the processes listed above, allowing value creation in the composition of dynamic information objects, with semantic aggregations to metadata.
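The predicted steps named in the abstract can be sketched as a small pipeline; the step bodies, field names and the sample media object below are invented placeholders, not the thesis's actual procedures:

```python
# Minimal sketch of the model's predicted steps (Environment, Data Type
# and Source Definition, Data Distillation, Metadata Enrichment, Storage)
# applied to an unstructured multimedia object; all values are invented.
def define_source(obj):
    obj["media_type"] = obj.get("media_type", "video")   # data type step
    return obj

def distill(obj):
    obj["segments"] = ["intro", "demo"]   # hypothetical content distillation
    return obj

def enrich(obj):
    # attach business metadata relating the object to actors and processes
    obj["metadata"] = {"process": "product-launch", "actor": "marketing"}
    return obj

def store(obj, repository):
    repository[obj["id"]] = obj
    return repository

repo = {}
media = {"id": "clip-42", "source": "intranet"}
for step in (define_source, distill, enrich):
    media = step(media)
store(media, repo)
print(repo["clip-42"]["metadata"]["process"])   # product-launch
```

The point of the sketch is the ordering: semantics are added to the object before storage, so later composition of dynamic documents can query the enriched metadata rather than the raw media.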
  9. Sebastian, Y.: Literature-based discovery by learning heterogeneous bibliographic information networks (2017) 0.00
    Abstract
    Literature-based discovery (LBD) research aims at finding effective computational methods for predicting previously unknown connections between clusters of research papers from disparate research areas. Existing methods encompass two general approaches. The first approach searches for these unknown connections by examining the textual contents of research papers. In addition to the existing textual features, the second approach incorporates structural features of scientific literatures, such as citation structures. These approaches, however, have not considered research papers' latent bibliographic metadata structures as important features that can be used for predicting previously unknown relationships between them. This thesis investigates a new graph-based LBD method that exploits the latent bibliographic metadata connections between pairs of research papers. The heterogeneous bibliographic information network is proposed as an efficient graph-based data structure for modeling the complex relationships between these metadata. In contrast to previous approaches, this method seamlessly combines textual and citation information in the form of path-based metadata features for predicting future co-citation links between research papers from disparate research fields. The results reported in this thesis provide evidence that the method is effective for reconstructing the historical literature-based discovery hypotheses. This thesis also investigates the effects of semantic modeling and topic modeling on the performance of the proposed method. For semantic modeling, a general-purpose word sense disambiguation technique is proposed to reduce the lexical ambiguity in the title and abstract of research papers. The experimental results suggest that the reduced lexical ambiguity did not necessarily lead to a better performance of the method. This thesis discusses some of the possible contributing factors to these results. Finally, topic modeling is used for learning the latent topical relations between research papers. The learned topic model is incorporated into the heterogeneous bibliographic information network graph and allows new predictive features to be learned. The results in this thesis suggest that topic modeling improves the performance of the proposed method by increasing the overall accuracy for predicting the future co-citation links between disparate research papers.
    Footnote
    A thesis submitted in fulfilment of the requirements for the degree of Doctor of Philosophy, Monash University, Faculty of Information Technology.
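The path-based metadata features described in the abstract above can be sketched over a toy heterogeneous bibliographic information network; the node names and the three meta-path types below are illustrative assumptions, not the thesis's feature set:

```python
# Toy heterogeneous bibliographic information network: papers linked to
# authors, venues and topics (all names invented for this sketch).
papers = {
    "p1": {"authors": {"a1", "a2"}, "venue": "v1", "topics": {"t1"}},
    "p2": {"authors": {"a2"},       "venue": "v1", "topics": {"t1", "t2"}},
    "p3": {"authors": {"a3"},       "venue": "v2", "topics": {"t2"}},
}

def metapath_features(p, q):
    """Counts of P-A-P, P-V-P and P-T-P meta-paths between papers p and q."""
    return {
        "P-A-P": len(papers[p]["authors"] & papers[q]["authors"]),
        "P-V-P": int(papers[p]["venue"] == papers[q]["venue"]),
        "P-T-P": len(papers[p]["topics"] & papers[q]["topics"]),
    }

# Feature vectors like these would feed a classifier predicting future
# co-citation links between papers from disparate fields.
print(metapath_features("p1", "p2"))
```

Each feature is simply a count of length-two paths through a typed intermediate node, which is what makes the representation heterogeneous rather than a plain citation graph.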
  10. Hannech, A.: Système de recherche d'information étendue basé sur une projection multi-espaces (2018) 0.00
    Abstract
    Depuis son apparition au début des années 90, le World Wide Web (WWW ou Web) a offert un accès universel aux connaissances et le monde de l'information a été principalement témoin d'une grande révolution (la révolution numérique). Il est devenu rapidement très populaire, ce qui a fait de lui la plus grande et vaste base de données et de connaissances existantes grâce à la quantité et la diversité des données qu'il contient. Cependant, l'augmentation et l'évolution considérables de ces données soulèvent d'importants problèmes pour les utilisateurs notamment pour l'accès aux documents les plus pertinents à leurs requêtes de recherche. Afin de faire face à cette explosion exponentielle du volume de données et faciliter leur accès par les utilisateurs, différents modèles sont proposés par les systèmes de recherche d'information (SRIs) pour la représentation et la recherche des documents web. Les SRIs traditionnels utilisent, pour indexer et récupérer ces documents, des mots-clés simples qui ne sont pas sémantiquement liés. Cela engendre des limites en termes de la pertinence et de la facilité d'exploration des résultats. Pour surmonter ces limites, les techniques existantes enrichissent les documents en intégrant des mots-clés externes provenant de différentes sources. Cependant, ces systèmes souffrent encore de limitations qui sont liées aux techniques d'exploitation de ces sources d'enrichissement. Lorsque les différentes sources sont utilisées de telle sorte qu'elles ne peuvent être distinguées par le système, cela limite la flexibilité des modèles d'exploration qui peuvent être appliqués aux résultats de recherche retournés par ce système. Les utilisateurs se sentent alors perdus devant ces résultats, et se retrouvent dans l'obligation de les filtrer manuellement pour sélectionner l'information pertinente.
S'ils veulent aller plus loin, ils doivent reformuler et cibler encore plus leurs requêtes de recherche jusqu'à parvenir aux documents qui répondent le mieux à leurs attentes. De cette façon, même si les systèmes parviennent à retrouver davantage des résultats pertinents, leur présentation reste problématique. Afin de cibler la recherche à des besoins d'information plus spécifiques de l'utilisateur et améliorer la pertinence et l'exploration de ses résultats de recherche, les SRIs avancés adoptent différentes techniques de personnalisation de données qui supposent que la recherche actuelle d'un utilisateur est directement liée à son profil et/ou à ses expériences de navigation/recherche antérieures. Cependant, cette hypothèse ne tient pas dans tous les cas, les besoins de l'utilisateur évoluent au fil du temps et peuvent s'éloigner de ses intérêts antérieurs stockés dans son profil.
    Dans d'autres cas, le profil de l'utilisateur peut être mal exploité pour extraire ou inférer ses nouveaux besoins en information. Ce problème est beaucoup plus accentué avec les requêtes ambigües. Lorsque plusieurs centres d'intérêt auxquels est liée une requête ambiguë sont identifiés dans le profil de l'utilisateur, le système se voit incapable de sélectionner les données pertinentes depuis ce profil pour répondre à la requête. Ceci a un impact direct sur la qualité des résultats fournis à cet utilisateur. Afin de remédier à quelques-unes de ces limitations, nous nous sommes intéressés dans ce cadre de cette thèse de recherche au développement de techniques destinées principalement à l'amélioration de la pertinence des résultats des SRIs actuels et à faciliter l'exploration de grandes collections de documents. Pour ce faire, nous proposons une solution basée sur un nouveau concept d'indexation et de recherche d'information appelé la projection multi-espaces. Cette proposition repose sur l'exploitation de différentes catégories d'information sémantiques et sociales qui permettent d'enrichir l'univers de représentation des documents et des requêtes de recherche en plusieurs dimensions d'interprétations. L'originalité de cette représentation est de pouvoir distinguer entre les différentes interprétations utilisées pour la description et la recherche des documents. Ceci donne une meilleure visibilité sur les résultats retournés et aide à apporter une meilleure flexibilité de recherche et d'exploration, en donnant à l'utilisateur la possibilité de naviguer une ou plusieurs vues de données qui l'intéressent le plus. En outre, les univers multidimensionnels de représentation proposés pour la description des documents et l'interprétation des requêtes de recherche aident à améliorer la pertinence des résultats de l'utilisateur en offrant une diversité de recherche/exploration qui aide à répondre à ses différents besoins et à ceux des autres différents utilisateurs.
Cette étude exploite différents aspects liés à la recherche personnalisée et vise à résoudre les problèmes engendrés par l'évolution des besoins en information de l'utilisateur. Ainsi, lorsque le profil de cet utilisateur est utilisé par notre système, une technique est proposée et employée pour identifier les intérêts les plus représentatifs de ses besoins actuels dans son profil. Cette technique se base sur la combinaison de trois facteurs influents, notamment le facteur contextuel, fréquentiel et temporel des données. La capacité des utilisateurs à interagir, à échanger des idées et d'opinions, et à former des réseaux sociaux sur le Web, a amené les systèmes à s'intéresser aux types d'interactions de ces utilisateurs, au niveau d'interaction entre eux ainsi qu'à leurs rôles sociaux dans le système. Ces informations sociales sont abordées et intégrées dans ce travail de recherche. L'impact et la manière de leur intégration dans le processus de RI sont étudiés pour améliorer la pertinence des résultats.
    Since its appearance in the early 1990s, the World Wide Web (WWW or Web) has provided universal access to knowledge, and the world of information has witnessed a great revolution (the digital revolution). It quickly became very popular, making it the largest and most comprehensive database and knowledge base thanks to the amount and diversity of data it contains. However, the considerable increase and evolution of these data raise important problems for users, in particular for accessing the documents most relevant to their search queries. In order to cope with this exponential explosion of data volume and to facilitate access by users, various models are offered by information retrieval systems (IRSs) for the representation and retrieval of web documents. Traditional IRSs index and retrieve these documents using simple keywords that are not semantically linked. This creates limitations in terms of the relevance and ease of exploration of results. To overcome these limitations, existing techniques enrich documents by integrating external keywords from different sources. However, these systems still suffer from limitations that are related to the techniques for exploiting these sources of enrichment. When the different sources are used in such a way that they cannot be distinguished by the system, this limits the flexibility of the exploration models that can be applied to the results returned by this system. Users then feel lost among these results and find themselves forced to filter them manually to select the relevant information. If they want to go further, they must reformulate and further narrow their search queries until they reach the documents that best meet their expectations. In this way, even if the systems manage to find more relevant results, their presentation remains problematic.
    In order to target retrieval to more specific information needs of the user and to improve the relevance and exploration of search results, advanced IRSs adopt various data personalization techniques, which assume that a user's current search is directly related to his profile and/or his previous browsing and search experiences.
    However, this assumption does not hold in all cases: the needs of the user evolve over time and can move away from the previous interests stored in his profile. In other cases, the user's profile may be poorly exploited when extracting or inferring his new information needs. This problem is much more accentuated with ambiguous queries. When several interests linked to an ambiguous query are identified in the user's profile, the system is unable to select the relevant data from that profile to answer the query. This has a direct impact on the quality of the results provided to this user. In order to overcome some of these limitations, this research thesis is concerned with the development of techniques aimed mainly at improving the relevance of the results of current IRSs and at facilitating the exploration of large collections of documents. To do this, we propose a solution based on a new concept and model of indexing and information retrieval called multi-space projection. This proposal is based on the exploitation of different categories of semantic and social information, which enrich the universe of representation of documents and search queries with several dimensions of interpretation. The originality of this representation is the ability to distinguish between the different interpretations used for the description and retrieval of documents. This gives better visibility of the returned results and helps to provide greater flexibility of search and exploration, giving the user the ability to navigate one or more views of the data that interest him most. In addition, the proposed multidimensional representation universes for describing documents and interpreting search queries help to improve the relevance of the user's results by offering a diversity of search and exploration that helps to meet his different needs and those of other users.
    This study exploits different aspects of personalized search and aims to solve the problems caused by the evolution of the user's information needs. Thus, when the profile of this user is used by our system, a technique is proposed and employed to identify the interests in his profile that are most representative of his current needs. This technique is based on the combination of three influential factors: the contextual, frequency and temporal factors of the data. The ability of users to interact, to exchange ideas and opinions, and to form social networks on the Web has led systems to take an interest in the types of interactions of these users, in the level of interaction between them, and in their social roles in the system. This social information is addressed and integrated into this research work. Its impact and the manner of its integration into the IR process are studied in order to improve the relevance of the results.
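The profile technique described above combines contextual, frequency and temporal factors; a minimal sketch, assuming a simple weighted sum and an exponential time decay (neither is necessarily the thesis's actual formula):

```python
# Sketch of scoring a profile interest by combining contextual, frequency
# and temporal factors; the weights and half-life are invented assumptions.
def interest_score(context_overlap, freq, last_used_days,
                   w=(0.5, 0.3, 0.2), half_life_days=30.0):
    recency = 0.5 ** (last_used_days / half_life_days)  # temporal decay
    return w[0] * context_overlap + w[1] * freq + w[2] * recency

# An interest matching the current query context outranks a stale,
# merely frequent one, which is the point of combining the three factors.
fresh = interest_score(context_overlap=0.8, freq=0.4, last_used_days=2)
stale = interest_score(context_overlap=0.1, freq=0.9, last_used_days=300)
print(fresh > stale)   # True
```

With frequency alone the stale interest would win; weighting in context and recency is what keeps the profile aligned with the user's current needs.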
  11. Höllstin, A.: Bibliotheks- und Informationskompetenz (Bibliographic Instruction und Information Literacy) : Fallstudie über eine amerikanische Universitätsbibliothek basierend auf theoretischen Grundlagen und praktischen Anleitungen (Workbooks) (1997) 0.00
  12. Seidlmayer, E.: An ontology of digital objects in philosophy : an approach for practical use in research (2018) 0.00
    Abstract
    The digitalization of research enables new scientific insights and methods, especially in the humanities. Nonetheless, electronic book editions, encyclopedias, mobile applications and web sites presenting research projects are not in broad use in academic philosophy. This contradicts the large number of helpful tools that facilitate research and also give rise to new scientific subjects and approaches. A possible solution to this dilemma is the systematization and promotion of these tools in order to improve their accessibility and fully exploit the potential of digitalization for philosophy.
  13. Mao, M.: Ontology mapping : towards semantic interoperability in distributed and heterogeneous environments (2008) 0.00
    3.3239368E-4 = product of:
      0.0013295747 = sum of:
        0.0013295747 = product of:
          0.003988724 = sum of:
            0.003988724 = weight(_text_:a in 4659) [ClassicSimilarity], result of:
              0.003988724 = score(doc=4659,freq=4.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.072065435 = fieldWeight in 4659, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4659)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    This dissertation studies ontology mapping: the problem of finding semantic correspondences between similar elements of different ontologies. In the dissertation, elements denote classes or properties of ontologies. The goal of this research is to use ontology mapping to make heterogeneous information more accessible. The World Wide Web (WWW) is now widely used as a universal medium for information exchange. Semantic interoperability among different information systems on the WWW is limited by information heterogeneity and the non-semantic nature of HTML and URLs. Ontologies have been suggested as a way to solve the problem of information heterogeneity by providing formal, explicit definitions of data and the ability to reason over related concepts. Given that no universal ontology exists for the WWW, work has focused on finding semantic correspondences between similar elements of different ontologies, i.e., ontology mapping. Ontology mapping can be done either by hand or with automated tools. Manual mapping becomes impractical as the size and complexity of ontologies increase. Fully and semi-automated mapping approaches have been examined in several research studies. Previous approaches include analyzing the linguistic information of elements in ontologies, treating ontologies as structural graphs, applying heuristic rules and machine learning techniques, and using probabilistic and reasoning methods. In this dissertation, two generic ontology mapping approaches are proposed. One is the PRIOR+ approach, which utilizes both information retrieval and artificial intelligence techniques in the context of ontology mapping. The other is the non-instance learning based approach, which experimentally explores machine learning algorithms to solve the ontology mapping problem without requiring any instances. The results of PRIOR+ on different tests at the OAEI ontology matching campaign 2007 are encouraging. The non-instance learning based approach has shown potential for solving the ontology mapping problem on the OAEI benchmark tests.
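    The linguistic component named above (analyzing the linguistic information of ontology elements) can be illustrated with a minimal sketch. This is not Mao's PRIOR+ implementation; the toy class labels and the similarity threshold are invented for illustration. Elements of two ontologies are paired by the string similarity of their labels:

    ```python
    from difflib import SequenceMatcher

    def label_similarity(a: str, b: str) -> float:
        """Normalized edit-based similarity between two element labels."""
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def map_ontologies(source: list, target: list, threshold: float = 0.7):
        """Pair each source element with its best-scoring target element,
        keeping only pairs whose score reaches the threshold."""
        mappings = []
        for s in source:
            best = max(target, key=lambda t: label_similarity(s, t))
            score = label_similarity(s, best)
            if score >= threshold:
                mappings.append((s, best, round(score, 2)))
        return mappings

    # Invented toy ontologies: class labels only
    onto_a = ["Person", "Publication", "Organisation"]
    onto_b = ["Author", "Publications", "Organization"]
    print(map_ontologies(onto_a, onto_b))
    ```

    A full mapping system would combine such lexical scores with the structural, heuristic, and probabilistic evidence the abstract lists.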
  14. Engel, F.: Expertensuche in semantisch integrierten Datenbeständen (2015) 0.00
    3.3239368E-4 = product of:
      0.0013295747 = sum of:
        0.0013295747 = product of:
          0.003988724 = sum of:
            0.003988724 = weight(_text_:a in 2283) [ClassicSimilarity], result of:
              0.003988724 = score(doc=2283,freq=4.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.072065435 = fieldWeight in 2283, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2283)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    Knowledge is a company's intellectual capital, and effective access to it is decisive for adaptability and innovative strength. A frequently applied solution for successful access to these knowledge resources is the implementation of expert search over the data of the company's distributed information systems. To compute a candidate's relevance, current expert search methods mostly consider only information from data sources (e.g. a candidate's e-mails or publications) through which a connection between the topic of the query and a candidate can be established. The information obtained from the data sources then enters the relevance assessment with a weighting. Analyses from the field of knowledge management show, however, that besides the topical relation, further criteria can influence the selection of an expert in an expert search (e.g. the degree of acquaintance between the searcher and the candidate). To find an optimal weighting of the different components and sources that feed the relevance computation, current applications for document search or web search employ various methods from the field of machine learning. At present, however, only very few studies address the question of how well these methods are also suited to optimally combining the different components of relevance determination in expert search. A company's information systems can be complex and based on distributed data storage. Technologies from the Semantic Web field are increasingly gaining acceptance in companies in order to provide a uniform access interface to the distributed data. Access via such an interface takes place through query languages that allow only an alphanumeric sorting of the results and permit no conclusion about the relevance of the objects found. Searching for experts in data prepared in this way therefore requires additional computation methods that yield a relevance value for each candidate. This thesis aims, on the one hand, to demonstrate the applicability of learning methods for effectively aggregating different criteria in expert search. On the other hand, it explores how a candidate's relevance can be computed via access interfaces based on Semantic Web technologies.
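    The aggregation the thesis investigates (combining topical evidence with further criteria such as the acquaintance between searcher and candidate) can be sketched as a weighted sum over criterion scores. This is a minimal illustration, not the thesis's method; the candidate names, criteria, and weights are hypothetical, and a learning approach would fit the weights from labelled training data rather than setting them by hand:

    ```python
    # Hypothetical evidence scores per expert candidate, each criterion
    # normalized to [0, 1]. "familiarity" stands in for the acquaintance
    # between searcher and candidate.
    candidates = {
        "alice": {"topic": 0.9, "familiarity": 0.2, "recency": 0.7},
        "bob":   {"topic": 0.6, "familiarity": 0.9, "recency": 0.5},
        "carol": {"topic": 0.4, "familiarity": 0.3, "recency": 0.9},
    }

    def relevance(scores: dict, weights: dict) -> float:
        """Weighted linear aggregation of the individual criteria."""
        return sum(w * scores.get(criterion, 0.0)
                   for criterion, w in weights.items())

    def rank_experts(candidates: dict, weights: dict) -> list:
        """Order candidates by descending aggregated relevance."""
        return sorted(candidates,
                      key=lambda name: relevance(candidates[name], weights),
                      reverse=True)

    # Hand-picked weights; a machine-learning method would fit them instead.
    weights = {"topic": 0.6, "familiarity": 0.3, "recency": 0.1}
    print(rank_experts(candidates, weights))
    ```

    The point of the sketch is that the ranking changes with the weighting: here the familiarity criterion lifts a candidate with a weaker topical match to the top.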
  15. Pfäffli, W.: ¬La qualité des résultats de recherche dans le cadre du projet MACS (Multilingual Access to Subjects) : vers un élargissement des ensembles de résultats de recherche (2009) 0.00
    2.0565809E-4 = product of:
      8.2263234E-4 = sum of:
        8.2263234E-4 = product of:
          0.002467897 = sum of:
            0.002467897 = weight(_text_:a in 2818) [ClassicSimilarity], result of:
              0.002467897 = score(doc=2818,freq=2.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.044588212 = fieldWeight in 2818, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=2818)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    Our examination of the shared titles showed that, in every case, some of the relevant titles would escape a query made via the link. It therefore seems more important to us that efforts concentrate on the means of actually providing access to potentially relevant documents, rather than on defining more precisely the quality of the links in light of the results. A first avenue is recourse to the hierarchical relations of indexing languages, but we have seen that these cannot provide a solution in every case. Recourse to a classification, an ontology, or natural language processing techniques are other avenues to explore, which may avoid having to multiply the links and thereby further complicate their management. Along the way we encountered, without being able to address them, many other questions, each an additional challenge in its own right, such as the problem of access to unindexed titles or the problem of the evolution of indexing languages and hence of keeping the links up to date. We have also set aside the technical questions of the interface's access to the different catalogues and the possible presentations of the search results themselves (per queried library or merged into a single set, ranking). Much thus remains to be done before the day when a user can enter a search term into a user-friendly interface that opens simple but complete thematic access to the resources of the libraries of Europe, and then of the world!
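    The first avenue the abstract proposes, using the hierarchical relations of an indexing language to broaden a result set, can be sketched as a narrower-term query expansion. The subject terms and relations below are invented for illustration:

    ```python
    # Invented narrower-term relations of a subject heading language
    narrower = {
        "Transport": ["Rail transport", "Road transport"],
        "Rail transport": ["High-speed rail"],
    }

    def expand(term: str, relations: dict, depth: int = 2) -> set:
        """Collect a term together with its narrower terms down to `depth`."""
        terms = {term}
        if depth > 0:
            for nt in relations.get(term, []):
                terms |= expand(nt, relations, depth - 1)
        return terms

    # A broadened query searches with all expanded terms, not one heading
    print(sorted(expand("Transport", narrower)))
    ```

    Searching the catalogues with the expanded set instead of the single linked heading retrieves documents indexed only under the narrower terms, at the cost of some precision.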

Languages

  • d 385
  • e 43
  • f 2
  • a 1
  • hu 1
  • pt 1

Types

  • el 28
  • m 17
  • a 1
  • r 1
