Search (40 results, page 2 of 2)

  • × type_ss:"x"
  • × year_i:[2010 TO 2020}
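  The two filters above use Lucene/Solr range syntax: in year_i:[2010 TO 2020}, the square bracket marks an inclusive bound and the curly brace an exclusive one, so the filter matches years 2010 through 2019; the mixed brackets are intentional, not a typo. As a minimal sketch, the active filters could be passed to a Solr backend like this, assuming a pysolr client and a local core URL (both assumptions; the actual backend of this page is unknown):

      import pysolr

      # Hypothetical core URL; the actual backend of this results page is unknown.
      solr = pysolr.Solr("http://localhost:8983/solr/documents")

      # Reproduce the two active facet filters shown above.
      results = solr.search(
          "*:*",
          fq=[
              'type_ss:"x"',            # document-type facet
              "year_i:[2010 TO 2020}",  # 2010 <= year < 2020 (exclusive upper bound)
          ],
          rows=20,
          start=20,  # second page of 20 hits each
      )
      print(results.hits)  # 40 for the search shown above
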
  1. Vocht, L. De: Exploring semantic relationships in the Web of Data : Semantische relaties verkennen in data op het web (2017) 0.00
    0.0019376777 = product of:
      0.0038753555 = sum of:
        0.0038753555 = product of:
          0.007750711 = sum of:
            0.007750711 = weight(_text_:a in 4232) [ClassicSimilarity], result of:
              0.007750711 = score(doc=4232,freq=42.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.14594918 = fieldWeight in 4232, product of:
                  6.4807405 = tf(freq=42.0), with freq of:
                    42.0 = termFreq=42.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=4232)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
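    The explain tree above is standard Lucene ClassicSimilarity (TF-IDF) output, and the same structure repeats for every hit below with different freq, fieldNorm, and document values. The leaf score can be recomputed from the listed factors; the following is a minimal sketch of the formula only, not code from the search system itself:

      import math

      def classic_similarity(freq, idf, query_norm, field_norm, coord=0.5 * 0.5):
          """Recompute a Lucene ClassicSimilarity leaf score from explain values.

          score = coord * queryWeight * fieldWeight, where
          queryWeight = idf * queryNorm and
          fieldWeight = tf * idf * fieldNorm, with tf = sqrt(freq).
          """
          tf = math.sqrt(freq)                  # 6.4807405 for freq = 42.0
          query_weight = idf * query_norm       # 0.053105544
          field_weight = tf * idf * field_norm  # 0.14594918
          return coord * query_weight * field_weight

      # Values from result 1 (doc 4232); the two coord(1/2) factors halve the
      # weight twice, giving the listed total.
      print(classic_similarity(freq=42.0, idf=1.153047,
                               query_norm=0.046056706, field_norm=0.01953125))
      # ~0.0019376777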
    
    Abstract
    After the launch of the World Wide Web, it became clear that searching documents on the Web would not be trivial. Well-known engines for searching the web, like Google, focus on searching web documents using keywords. The documents are structured and indexed to ensure keywords match documents as accurately as possible. However, searching by keywords does not always suffice. It is often the case that users do not know exactly how to formulate a search query or which keywords guarantee retrieving the most relevant documents. Besides that, users often want to browse information rather than look up something specific. It turned out that there is a need for systems that enable more interactivity and facilitate the gradual refinement of search queries to explore the Web. Users expect more from the Web because the short keyword-based queries they pose during search do not suffice in all cases. On top of that, the Web is changing structurally. Apart from a collection of documents, the Web comprises more and more linked data: pieces of information structured so they can be processed by machines. The consistently applied semantics allow users to indicate their search intentions to machines precisely. This is made possible by describing data following controlled vocabularies, concept lists composed by experts and published with unique identifiers on the Web. Even so, it is still not trivial to explore data on the Web. There is a large variety of vocabularies, and various data sources use different terms to identify the same concepts.
    This PhD thesis describes how to effectively explore linked data on the Web. The main focus is on scenarios where users want to discover relationships between resources rather than find out more about something specific. Searching for a specific document or piece of information fits in the theoretical framework of information retrieval; discovering relationships is associated with exploratory search. Exploratory search goes beyond 'looking up something': users seek more detailed understanding, further investigation, or navigation of the initial search results.
    The ideas behind exploratory search and querying linked data merge when it comes to the way knowledge is represented and indexed by machines - how data is structured and stored for optimal searchability. Queries and information should be aligned so that searches also reveal connections between results. This implies that they take into account the same semantic entities, those relevant at that moment. To realize this, we research three techniques that are evaluated one by one in an experimental set-up to assess how well they succeed in their goals. In the end, the techniques are applied to a practical use case that focuses on forming a bridge between the Web and the use of digital libraries in scientific research.
    Our first technique focuses on the interactive visualization of search results. Linked data resources can be brought into relation with each other at will, which leads to complex and diverse graph structures. Our technique facilitates navigation and supports a workflow that starts from a broad overview of the data, allows narrowing down to the desired level of detail, and then broadens out again. To validate this flow, two visualizations were implemented and presented to test users. The users judged the usability of the visualizations, how the visualizations fit into the workflow, and to what degree their features seemed useful for the exploration of linked data. There is a difference between the way users interact with resources, visually or textually, and the way resources are represented for machines to be processed by algorithms. This difference complicates bridging users' intents and machine-executable queries. It is important to implement this 'translation' mechanism so that it affects the search as favorably as possible in terms of performance, complexity, and accuracy. To do this, we present a second technique that provides such a bridging component. Our second technique is developed around three features that support the search process: looking up, relating, and ranking resources. The main goal is to ensure that the resources in the results are as precise and relevant as possible. During the evaluation of this technique, we looked not only at the precision of the search results but also investigated how the effectiveness of the search evolved while the user executed certain actions sequentially.
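    The three features named above - looking up, relating, and ranking resources - can be illustrated with a minimal sketch; the data model and the matching logic are invented placeholders, not the thesis's actual component:

      # Sketch of the three search-support features; the resource data and
      # matching logic are hypothetical placeholders.
      resources = {
          "r1": {"label": "linked data", "links": {"r2"}},
          "r2": {"label": "exploratory search", "links": {"r1", "r3"}},
          "r3": {"label": "digital libraries", "links": {"r2"}},
      }

      def look_up(keyword):
          """Map a user keyword to matching resources."""
          return [rid for rid, r in resources.items() if keyword in r["label"]]

      def relate(rid):
          """Return resources directly connected to a given resource."""
          return sorted(resources[rid]["links"])

      def rank(rids, query):
          """Order candidate resources by a naive term-overlap score."""
          overlap = lambda rid: len(set(resources[rid]["label"].split())
                                    & set(query.split()))
          return sorted(rids, key=overlap, reverse=True)

      hits = look_up("search")                     # ['r2']
      print(rank(relate(hits[0]), "linked data"))  # ['r1', 'r3']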
    When we speak about finding relationships between resources, it is necessary to dive deeper into the structure. The graph structure of linked data, in which semantics give meaning to the relationships between resources, enables the execution of pathfinding algorithms. The assigned weights and heuristics are basic components of such algorithms and ultimately determine which resources are included in a path, and in which order. These paths explain indirect connections between resources. Our third technique proposes an algorithm that optimizes the choice of resources in terms of serendipity. Several optimizations guard the consistency of candidate paths, maximizing the coherence of consecutive connections to avoid trivial or overly arbitrary paths. The implementation uses the A* algorithm, the de facto reference for heuristically optimized minimal-cost paths. The effectiveness of the paths was measured with common automatic metrics and with surveys in which users could indicate their preference among paths, each generated in a different way. Finally, all our techniques are applied to a use case about publications in digital libraries, where they are aligned with information about scientific conferences and researchers. This use case is a practical example in which the different aspects of exploratory search come together; in fact, the techniques also evolved from the experience of implementing it. Practical details about the semantic model are explained, and the implementation of the search system is clarified module by module. The evaluation positions the result, a prototype tool for exploring scientific publications, researchers, and conferences, against some important alternatives.
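    To make the role of weights and heuristics concrete, the following is a minimal A* sketch over a weighted resource graph; the toy graph, the edge costs, and the zero heuristic are illustrative assumptions, not the serendipity-optimized cost function of the thesis:

      import heapq

      def a_star(graph, start, goal, heuristic=lambda n: 0.0):
          """Find a minimal-cost path in a weighted graph.

          graph: dict mapping node -> list of (neighbor, edge_cost).
          heuristic: admissible estimate of remaining cost (0 = Dijkstra).
          """
          frontier = [(heuristic(start), 0.0, start, [start])]
          visited = set()
          while frontier:
              _, cost, node, path = heapq.heappop(frontier)
              if node == goal:
                  return cost, path
              if node in visited:
                  continue
              visited.add(node)
              for neighbor, edge_cost in graph.get(node, []):
                  if neighbor not in visited:
                      new_cost = cost + edge_cost
                      heapq.heappush(frontier,
                                     (new_cost + heuristic(neighbor),
                                      new_cost, neighbor, path + [neighbor]))
          return None

      # Toy linked-data graph: lower edge cost = more coherent connection.
      graph = {
          "Researcher": [("Paper", 1.0), ("Conference", 4.0)],
          "Paper": [("Conference", 1.0), ("Topic", 2.5)],
          "Conference": [("Topic", 1.0)],
      }
      print(a_star(graph, "Researcher", "Topic"))
      # (3.0, ['Researcher', 'Paper', 'Conference', 'Topic'])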
  2. Smith, D.A.: Exploratory and faceted browsing over heterogeneous and cross-domain data sources (2011) 0.00
    0.001757696 = product of:
      0.003515392 = sum of:
        0.003515392 = product of:
          0.007030784 = sum of:
            0.007030784 = weight(_text_:a in 4839) [ClassicSimilarity], result of:
              0.007030784 = score(doc=4839,freq=6.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.13239266 = fieldWeight in 4839, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4839)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Exploration of heterogeneous data sources increases the value of information by allowing users to answer questions across multiple sources; users can draw on information posted across the Web to answer questions and learn about new domains. We have conducted research that lowers the interrogation time of faceted data by combining related information from different sources. The work contributes methodologies for combining heterogeneous sources and for delivering that data to a user interface scalably, with enough performance to support rapid interrogation of the knowledge by the user. It also contributes methods for combining linked data sources so that users can create faceted browsers targeting the information facets of their needs. The work is grounded and validated in a number of experiments and test cases that study the contributions in domain research work.
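    As a generic illustration of faceted browsing (a sketch, not Smith's system), facet values of records merged from several hypothetical sources can be counted and applied as filters:

      from collections import Counter

      # Toy records merged from two hypothetical sources.
      records = [
          {"title": "Paper A", "year": 2010, "domain": "music", "source": "catalog"},
          {"title": "Paper B", "year": 2011, "domain": "music", "source": "web"},
          {"title": "Paper C", "year": 2011, "domain": "history", "source": "web"},
      ]

      def facet_counts(records, field):
          """Count how many records carry each value of a facet field."""
          return Counter(r[field] for r in records if field in r)

      def apply_facet(records, field, value):
          """Narrow the result set to records matching one facet value."""
          return [r for r in records if r.get(field) == value]

      print(facet_counts(records, "year"))            # Counter({2011: 2, 2010: 1})
      print(apply_facet(records, "domain", "music"))  # two matching records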
    Footnote
    A thesis submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy, June 2011.
  3. Schmolz, H.: Anaphora resolution and text retrieval : a linguistic analysis of hypertexts (2015) 0.00
    0.0016913437 = product of:
      0.0033826875 = sum of:
        0.0033826875 = product of:
          0.006765375 = sum of:
            0.006765375 = weight(_text_:a in 1172) [ClassicSimilarity], result of:
              0.006765375 = score(doc=1172,freq=2.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.12739488 = fieldWeight in 1172, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1172)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  4. Schmolz, H.: Anaphora resolution and text retrieval : a linguistic analysis of hypertexts (2013) 0.00
    0.0016913437 = product of:
      0.0033826875 = sum of:
        0.0033826875 = product of:
          0.006765375 = sum of:
            0.006765375 = weight(_text_:a in 1810) [ClassicSimilarity], result of:
              0.006765375 = score(doc=1810,freq=2.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.12739488 = fieldWeight in 1810, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1810)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  5. Lamparter, A.: Kompetenzprofil von Information Professionals in Unternehmen (2015) 0.00
    0.001674345 = product of:
      0.00334869 = sum of:
        0.00334869 = product of:
          0.00669738 = sum of:
            0.00669738 = weight(_text_:a in 769) [ClassicSimilarity], result of:
              0.00669738 = score(doc=769,freq=4.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.12611452 = fieldWeight in 769, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=769)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    Master's thesis at the Hochschule Hannover, Fakultät III - Medien, Information und Design. Winner of the VFI-Förderpreis 2015. Cf.: urn:nbn:de:bsz:960-opus4-5280. http://serwiss.bib.hs-hannover.de/frontdoor/index/index/docId/528. See also: Knoll, A. (née Lamparter): Kompetenzprofil von Information Professionals in Unternehmen. In: Young information professionals. 1(2016), pp. 1-11.
  6. Martins, S. de Castro: Modelo conceitual de ecossistema semântico de informações corporativas para aplicação em objetos multimídia (2019) 0.00
    0.001353075 = product of:
      0.00270615 = sum of:
        0.00270615 = product of:
          0.0054123 = sum of:
            0.0054123 = weight(_text_:a in 117) [ClassicSimilarity], result of:
              0.0054123 = score(doc=117,freq=8.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.10191591 = fieldWeight in 117, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=117)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Information management in corporate environments is a growing problem as companies' information assets grow, along with the need to use them in operations. Several management models have been applied on the most diverse fronts, practices that together constitute so-called Enterprise Content Management. This study proposes a conceptual model of a semantic corporate information ecosystem, based on the Universal Document Model proposed by Dagobert Soergel. It focuses on unstructured information objects, especially multimedia, which are increasingly used in corporate environments, adding semantics and expanding their retrieval potential for the composition and reuse of dynamic documents on demand. The proposed model considers stable elements in the organizational environment, such as actors, processes, business metadata and information objects, as well as some basic infrastructures of the corporate information environment. The main objective is to establish a conceptual model that adds semantic intelligence to information assets, leveraging pre-existing infrastructure in organizations and integrating and relating objects to other objects, actors and business processes. The methodology considered the state of the art of Information Organization, Representation and Retrieval, Organizational Content Management and Semantic Web technologies in the scientific literature as the basis for an integrative conceptual model; the research is therefore qualitative and exploratory. The envisaged steps of the model are: Environment, Data Type and Source Definition, Data Distillation, Metadata Enrichment, and Storage. As a result, in theoretical terms the extended model allows heterogeneous and unstructured data to be processed according to the established cut-outs and through the processes listed above, allowing value creation in the composition of dynamic information objects, with semantic aggregations added to the metadata.
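    The five steps listed above suggest a linear processing pipeline. The sketch below is one schematic reading of those stages; every function body is a hypothetical placeholder, not part of Martins's model:

      def define_environment(config):
          """Step 1: fix the organizational cut-out (actors, processes, metadata)."""
          return {"actors": config.get("actors", []),
                  "processes": config.get("processes", [])}

      def define_sources(environment):
          """Step 2: declare data types and sources to ingest."""
          return [{"type": "video", "uri": "repo://media/101"}]  # placeholder

      def distill(raw_objects):
          """Step 3: extract usable content units from unstructured objects."""
          return [{"object": o, "segments": []} for o in raw_objects]

      def enrich(distilled, vocabulary):
          """Step 4: attach semantic metadata from a controlled vocabulary."""
          for item in distilled:
              item["concepts"] = [c for c in vocabulary if c in str(item["object"])]
          return distilled

      def store(enriched):
          """Step 5: persist enriched objects for composing dynamic documents."""
          return {i: item for i, item in enumerate(enriched)}

      env = define_environment({"actors": ["editor"], "processes": ["publishing"]})
      print(store(enrich(distill(define_sources(env)), vocabulary=["video"])))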
  7. Sebastian, Y.: Literature-based discovery by learning heterogeneous bibliographic information networks (2017) 0.00
    0.001353075 = product of:
      0.00270615 = sum of:
        0.00270615 = product of:
          0.0054123 = sum of:
            0.0054123 = weight(_text_:a in 535) [ClassicSimilarity], result of:
              0.0054123 = score(doc=535,freq=8.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.10191591 = fieldWeight in 535, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=535)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Literature-based discovery (LBD) research aims at finding effective computational methods for predicting previously unknown connections between clusters of research papers from disparate research areas. Existing methods encompass two general approaches. The first approach searches for these unknown connections by examining the textual contents of research papers. In addition to textual features, the second approach incorporates structural features of the scientific literature, such as citation structures. These approaches, however, have not considered research papers' latent bibliographic metadata structures as important features for predicting previously unknown relationships between them. This thesis investigates a new graph-based LBD method that exploits the latent bibliographic metadata connections between pairs of research papers. The heterogeneous bibliographic information network is proposed as an efficient graph-based data structure for modeling the complex relationships between these metadata. In contrast to previous approaches, this method seamlessly combines textual and citation information in the form of path-based metadata features for predicting future co-citation links between research papers from disparate research fields. The results reported in this thesis provide evidence that the method is effective for reconstructing historical literature-based discovery hypotheses. The thesis also investigates the effects of semantic modeling and topic modeling on the performance of the proposed method. For semantic modeling, a general-purpose word sense disambiguation technique is proposed to reduce the lexical ambiguity in the titles and abstracts of research papers. The experimental results suggest that the reduced lexical ambiguity did not necessarily lead to better performance of the method, and the thesis discusses some possible contributing factors. Finally, topic modeling is used for learning the latent topical relations between research papers. The learned topic model is incorporated into the heterogeneous bibliographic information network graph, allowing new predictive features to be learned. The results suggest that topic modeling improves the performance of the proposed method by increasing the overall accuracy of predicting future co-citation links between disparate research papers.
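    As a sketch of the heterogeneous-network idea (illustrative only; the thesis's actual features and classifier are not reproduced here), papers and their metadata can be modeled as typed nodes, and the paths between a pair of papers counted by node-type sequence to serve as link-prediction features:

      import networkx as nx
      from collections import Counter

      # Toy heterogeneous bibliographic network with typed nodes.
      G = nx.Graph()
      G.add_nodes_from(["p1", "p2"], kind="paper")
      G.add_nodes_from(["a1"], kind="author")
      G.add_nodes_from(["v1"], kind="venue")
      G.add_edges_from([("p1", "a1"), ("a1", "p2"), ("p1", "v1"), ("v1", "p2")])

      def metapath_features(G, source, target, cutoff=3):
          """Count paths between two papers, keyed by their node-type sequence."""
          features = Counter()
          for path in nx.all_simple_paths(G, source, target, cutoff=cutoff):
              key = "-".join(G.nodes[n]["kind"] for n in path)
              features[key] += 1
          return features

      # e.g. paper-author-paper and paper-venue-paper meta-paths between p1 and p2
      print(metapath_features(G, "p1", "p2"))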
    Footnote
    A thesis submitted in fulfillment of the requirements for the degree of Doctor of Philosophy, Monash University, Faculty of Information Technology.
  8. Hannech, A.: Système de recherche d'information étendue basé sur une projection multi-espaces (2018) 0.00
    0.0013101093 = product of:
      0.0026202186 = sum of:
        0.0026202186 = product of:
          0.005240437 = sum of:
            0.005240437 = weight(_text_:a in 4472) [ClassicSimilarity], result of:
              0.005240437 = score(doc=4472,freq=30.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.09867966 = fieldWeight in 4472, product of:
                  5.477226 = tf(freq=30.0), with freq of:
                    30.0 = termFreq=30.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.015625 = fieldNorm(doc=4472)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Since its appearance in the early 1990s, the World Wide Web (WWW, or Web) has provided universal access to knowledge, and the world of information has witnessed a great revolution (the digital revolution). The Web quickly became very popular, making it the largest and most comprehensive database and knowledge base in existence thanks to the amount and diversity of data it contains. However, the considerable increase and evolution of these data raise important problems for users, in particular for accessing the documents most relevant to their search queries. In order to cope with this exponential explosion of data volume and to facilitate access by users, information retrieval systems (IRSs) offer various models for the representation and retrieval of web documents. Traditional IRSs index and retrieve these documents using simple keywords that are not semantically linked, which limits the relevance and the ease of exploration of the results. To overcome these limitations, existing techniques enrich documents by integrating external keywords from different sources. However, these systems still suffer from limitations related to the way those enrichment sources are exploited. When the different sources are used in such a way that they cannot be distinguished by the system, this limits the flexibility of the exploration models that can be applied to the results it returns. Users then feel lost among these results and find themselves forced to filter them manually to select the relevant information. If they want to go further, they must reformulate and narrow their search queries again and again until they reach the documents that best meet their expectations. So even if the systems manage to find more relevant results, the presentation of those results remains problematic. In order to target the search to more specific information needs and to improve the relevance and exploration of the results, advanced IRSs adopt various data personalization techniques, which assume that a user's current search is directly related to their profile and/or their previous browsing and search experiences.
    However, this assumption does not hold in all cases: the needs of the user evolve over time and can move away from the previous interests stored in their profile. In other cases, the user's profile may be poorly exploited when extracting or inferring new information needs. This problem is much more pronounced with ambiguous queries. When several interests linked to an ambiguous query are identified in the user's profile, the system is unable to select the relevant data from that profile to answer the query. This has a direct impact on the quality of the results provided to the user. In order to overcome some of these limitations, in this research thesis we have been interested in developing techniques aimed mainly at improving the relevance of the results of current IRSs and at facilitating the exploration of large collections of documents. To do this, we propose a solution based on a new concept and model of indexing and information retrieval called multi-space projection. This proposal relies on the exploitation of different categories of semantic and social information, which enrich the universe of representation of documents and search queries with several dimensions of interpretation. The originality of this representation lies in its ability to distinguish between the different interpretations used for the description of and search for documents. This gives better visibility into the returned results and helps provide greater flexibility of search and exploration, giving users the ability to navigate one or more views of the data that interest them most. In addition, the proposed multidimensional representation universes for document description and query interpretation help improve the relevance of the user's results by offering a diversity of search and exploration that helps meet their different needs and those of other users. This study exploits different aspects related to personalized search and aims to solve the problems caused by the evolution of the user's information needs. Thus, when the user's profile is used by our system, a technique is proposed and employed to identify the interests in the profile that are most representative of their current needs. This technique is based on the combination of three influential factors, namely the contextual, frequency, and temporal factors of the data. The ability of users to interact, exchange ideas and opinions, and form social networks on the Web has led systems to consider the types of interactions between users, their level of interaction, and their social roles in the system. This social information is addressed and integrated into this research work; its impact, and the manner of its integration into the IR process, are studied to improve the relevance of the results.
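    A minimal sketch of the multi-space idea (the space names and the overlap scoring are illustrative assumptions, not the thesis's model): each document is projected into several interpretation spaces, a query is scored per space, and the user can rank by whichever view interests them most:

      # Each document is projected into several interpretation "spaces".
      documents = {
          "doc1": {
              "textual":  {"jazz", "history"},
              "semantic": {"music_genre", "cultural_heritage"},
              "social":   {"shared_by_musicians"},
          },
          "doc2": {
              "textual":  {"jazz", "festival"},
              "semantic": {"music_genre", "event"},
              "social":   {"shared_by_tourists"},
          },
      }

      def score_per_space(query_terms, projections):
          """Score a document separately in each space (simple overlap count)."""
          return {space: len(query_terms & terms)
                  for space, terms in projections.items()}

      query = {"jazz", "festival", "event"}
      for doc_id, projections in documents.items():
          scores = score_per_space(query, projections)
          # Users can rank by any single view, or a weighted mix of views.
          print(doc_id, scores, "combined:", sum(scores.values()))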
  9. Brumm, A.: Modellierung eines Informationssystems zum Bühnentanz als semantisches Wiki (2010) 0.00
    0.0011839407 = product of:
      0.0023678814 = sum of:
        0.0023678814 = product of:
          0.0047357627 = sum of:
            0.0047357627 = weight(_text_:a in 4025) [ClassicSimilarity], result of:
              0.0047357627 = score(doc=4025,freq=2.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.089176424 = fieldWeight in 4025, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4025)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  10. Lassak, L.: ¬Ein Versuch zur Repräsentation von Charakteren der Kinder- und Jugendbuchserie "Die drei ???" in einer Datenbank (2017) 0.00
    0.0011839407 = product of:
      0.0023678814 = sum of:
        0.0023678814 = product of:
          0.0047357627 = sum of:
            0.0047357627 = weight(_text_:a in 1784) [ClassicSimilarity], result of:
              0.0047357627 = score(doc=1784,freq=2.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.089176424 = fieldWeight in 1784, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1784)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
    Master's thesis submitted for the academic degree of Master of Arts (M.A.)
  11. Seidlmayer, E.: ¬An ontology of digital objects in philosophy : an approach for practical use in research (2018) 0.00
    0.0011839407 = product of:
      0.0023678814 = sum of:
        0.0023678814 = product of:
          0.0047357627 = sum of:
            0.0047357627 = weight(_text_:a in 5496) [ClassicSimilarity], result of:
              0.0047357627 = score(doc=5496,freq=2.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.089176424 = fieldWeight in 5496, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5496)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The digitalization of research enables new scientific insights and methods, especially in the humanities. Nonetheless, electronic book editions, encyclopedias, mobile applications, and websites presenting research projects are not in broad use in academic philosophy. This stands in contradiction to the large number of helpful tools that facilitate research and also give rise to new scientific subjects and approaches. A possible solution to this dilemma is the systematization and promotion of these tools in order to improve their accessibility and fully exploit the potential of digitalization for philosophy.
  12. Engel, F.: Expertensuche in semantisch integrierten Datenbeständen (2015) 0.00
    9.567685E-4 = product of:
      0.001913537 = sum of:
        0.001913537 = product of:
          0.003827074 = sum of:
            0.003827074 = weight(_text_:a in 2283) [ClassicSimilarity], result of:
              0.003827074 = score(doc=2283,freq=4.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.072065435 = fieldWeight in 2283, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2283)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Knowledge is the intellectual capital of a company, and effective access to it is decisive for adaptability and innovative strength. A frequently applied solution for successful access to these knowledge resources is the implementation of expert search over the data of a company's distributed information systems. To compute a candidate's relevance, current expert-search methods mostly consider only information from data sources (e.g., a candidate's e-mails or publications) through which a connection between the topic of the query and the candidate can be established. The information obtained from the data sources then enters the relevance assessment with a weighting. Analyses from the field of knowledge management show, however, that besides topical relevance further criteria can influence the choice of an expert in an expert search (e.g., the degree of acquaintance between the searcher and the candidate). To find an optimal weighting of the different components and sources that feed the relevance computation, current applications for document search and web search employ various methods from machine learning. So far, however, only very few studies address how well these methods are suited to optimally combining the various components of relevance determination in expert search as well. A company's information systems can be complex and based on distributed data storage. Technologies from the Semantic Web are increasingly gaining acceptance in companies as a way to provide a uniform access interface to the distributed data. Such an interface is accessed via query languages that allow only an alphanumeric sorting of the returned results and permit no conclusions about the relevance of the objects found. Searching for experts in data prepared in this way requires additional computation methods that make it possible to infer a relevance value for a candidate. This thesis aims, on the one hand, to contribute by demonstrating the applicability of learning methods for the effective aggregation of different criteria in the search for experts, and on the other hand to explore ways in which a candidate's relevance can be computed via access interfaces based on Semantic Web technologies.
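    A minimal sketch of the aggregation problem described above (criteria, weights, and scores are invented for illustration; the thesis investigates learning such weights rather than fixing them by hand): each candidate receives scores on several criteria, and a weighted sum yields a relevance ranking:

      # Per-candidate scores on individual relevance criteria (invented values).
      candidates = {
          "candidate_a": {"topic_match": 0.9, "familiarity": 0.2, "recency": 0.5},
          "candidate_b": {"topic_match": 0.6, "familiarity": 0.8, "recency": 0.7},
      }

      # Weights that a learning method would fit from relevance judgments;
      # here they are simply assumed.
      weights = {"topic_match": 0.6, "familiarity": 0.3, "recency": 0.1}

      def relevance(scores, weights):
          """Aggregate criterion scores into one relevance value."""
          return sum(weights[c] * s for c, s in scores.items())

      ranking = sorted(candidates.items(),
                       key=lambda kv: relevance(kv[1], weights), reverse=True)
      for name, scores in ranking:
          print(name, round(relevance(scores, weights), 3))
      # candidate_b 0.67 ranks above candidate_a 0.65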
  13. Meyer, A.: Begriffsrelationen im Kategoriensystem der Wikipedia : Entwicklung eines Relationeninventars zur kollaborativen Anwendung (2010) 0.00
    8.4567186E-4 = product of:
      0.0016913437 = sum of:
        0.0016913437 = product of:
          0.0033826875 = sum of:
            0.0033826875 = weight(_text_:a in 4429) [ClassicSimilarity], result of:
              0.0033826875 = score(doc=4429,freq=2.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.06369744 = fieldWeight in 4429, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4429)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  14. Waldhör, A.: Erstellung einer Konkordanz zwischen Basisklassifikation (BK) und Regensburger Verbundklassifikation (RVK) für den Fachbereich Recht (2012) 0.00
    8.4567186E-4 = product of:
      0.0016913437 = sum of:
        0.0016913437 = product of:
          0.0033826875 = sum of:
            0.0033826875 = weight(_text_:a in 596) [ClassicSimilarity], result of:
              0.0033826875 = score(doc=596,freq=2.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.06369744 = fieldWeight in 596, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=596)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  15. Knitel, M.: ¬The application of linked data principles to library data : opportunities and challenges (2012) 0.00
    8.4567186E-4 = product of:
      0.0016913437 = sum of:
        0.0016913437 = product of:
          0.0033826875 = sum of:
            0.0033826875 = weight(_text_:a in 599) [ClassicSimilarity], result of:
              0.0033826875 = score(doc=599,freq=2.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.06369744 = fieldWeight in 599, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=599)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Location
    A
  16. Fischer, M.: Sacherschliessung - quo vadis? : Die Neuausrichtung der Sacherschliessung im deutschsprachigen Raum (2015) 0.00
    8.4567186E-4 = product of:
      0.0016913437 = sum of:
        0.0016913437 = product of:
          0.0033826875 = sum of:
            0.0033826875 = weight(_text_:a in 2029) [ClassicSimilarity], result of:
              0.0033826875 = score(doc=2029,freq=2.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.06369744 = fieldWeight in 2029, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2029)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Location
    A
  17. Ammann, A.: Klassifikation dynamischer Wissensräume : multifaktorielle Wechselbeziehungen zur Generierung und Gestaltung konstellativer dynamischer und mehrdimensionaler Wissensräume mit einem Fokus der Anwendung in der Zahn-, Mund- und Kieferheilkunde am Beispiel der enossalen Implantologie (2012) 0.00
    7.1757636E-4 = product of:
      0.0014351527 = sum of:
        0.0014351527 = product of:
          0.0028703054 = sum of:
            0.0028703054 = weight(_text_:a in 1751) [ClassicSimilarity], result of:
              0.0028703054 = score(doc=1751,freq=4.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.054049075 = fieldWeight in 1751, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1751)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Part A addresses, alongside the primal human need expressed in the striving for knowledge, the four epochal convergence cycles with their competence profiles of the knowledge orders in knowledge transfer. Shifts in the languages of science, in particular, exert a considerable influence on the demarcation of the classifications of implicit, visual, and explicit knowledge. Special attention is therefore devoted to the equivalence types in the explicit knowledge space, because in our multilingual knowledge landscape the transfer of knowledge into instrumental, orientation, and action knowledge produces artifacts that also influence the design of learning-objective taxonomies. Part B first treats the kinds, characteristics, and classification concepts of knowledge. In an attempt at a new knowledge order, the Cartesian/geodetic coordinate system is placed in a space-time framework, from which eleven knowledge spaces have crystallized; these are classified by their definitions, the derivations and examples connected with them, and their localization in the knowledge space. The project <K-Space Visual Library in Dental Medicine> explains problem- and task-oriented navigation in the respective knowledge spaces, and the discussion sketches the convergence requirements, still largely outstanding, of the mostly proprietary digital technologies and programs, so that these can be taken into account when modeling the knowledge spaces.
  18. Csákány, B.: Vom Zettelkatalog zum Volltext : über die Entwicklung und Funktion des Kataloges am Beispiel der Österreichischen Nationalbibliothek (2012) 0.00
    6.765375E-4 = product of:
      0.001353075 = sum of:
        0.001353075 = product of:
          0.00270615 = sum of:
            0.00270615 = weight(_text_:a in 600) [ClassicSimilarity], result of:
              0.00270615 = score(doc=600,freq=2.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.050957955 = fieldWeight in 600, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=600)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Location
    A
  19. Zudnik, J.: Artifizielle Semantik : Wider das Chinesische Zimmer (2017) 0.00
    6.765375E-4 = product of:
      0.001353075 = sum of:
        0.001353075 = product of:
          0.00270615 = sum of:
            0.00270615 = weight(_text_:a in 4426) [ClassicSimilarity], result of:
              0.00270615 = score(doc=4426,freq=2.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.050957955 = fieldWeight in 4426, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4426)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    "Talks at Google" hatte kürzlich einen Star zu Gast (Google 2016). Der gefeierte Philosoph referierte in gewohnt charmanter Art sein berühmtes Gedankenexperiment, welches er vor 35 Jahren ersonnen hatte. Aber es war keine reine Geschichtslektion, sondern er bestand darauf, daß die Implikationen nach wie vor Gültigkeit besaßen. Die Rede ist natürlich von John Searle und dem Chinesischen Zimmer. Searle eroberte damit ab 1980 die Welt der Philosophie des Geistes, indem er bewies, daß man Computer besprechen kann, ohne etwas von ihnen zu verstehen. In seinen Worten, man könne ohnehin die zugrunde liegenden Konzepte dieser damned things in 5 Minuten erfassen. Dagegen verblassten die scheuen Einwände des AI-Starapologeten Ray Kurzweil der im Publikum saß, die jüngste Akquisition in Googles Talentpool. Searle wirkte wie die reine Verkörperung seiner Thesen, daß Berechnung, Logik und harte Fakten angesichts der vollen Entfaltung polyvalenter Sprachspiele eines menschlichen Bewußtseins im sozialen Raum der Kultur keine Macht über uns besitzen. Doch obwohl große Uneinigkeit bezüglich der Gültigkeit des chinesischen Zimmers besteht, und die logische Struktur des Arguments schon vor Jahrzehnten widerlegt worden ist, u. a. von Copeland (1993), wird erstaunlicherweise noch immer damit gehandelt. Es hat sich von einem speziellen Werkzeug zur Widerlegung der Starken AI These, wonach künstliche Intelligenz mit einer symbolverarbeitenden Rechenmaschine geschaffen werden kann, zu einem Argument für all die Fälle entwickelt, in welchen sich Philosophen des Geistes mit unbequemen Fragen bezüglich der Berechenbarkeit des menschlichen Geistes auseinandersetzen hätten können. Es ist also mit den Jahrzehnten zu einer Immunisierungs- und Konservierungsstrategie für all jene geworden, die sich Zeit erkaufen wollten, sich mit der wirklichen Komplexität auseinander zu setzen. Denn die Definition von Sinn ist eben plastisch, vor allem wenn die Pointe der Searlschen Geschichte noch immer eine hohe Suggestionskraft besitzt, da ihre Konklusion, man könne nicht von einer computationalen Syntax zu einer Semantik kommen, noch immer unzureichend widerlegt ist.
  20. Bredack, J.: Terminologieextraktion von Mehrwortgruppen in kunsthistorischen Fachtexten (2013) 0.00
    5.9197034E-4 = product of:
      0.0011839407 = sum of:
        0.0011839407 = product of:
          0.0023678814 = sum of:
            0.0023678814 = weight(_text_:a in 1054) [ClassicSimilarity], result of:
              0.0023678814 = score(doc=1054,freq=2.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.044588212 = fieldWeight in 1054, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1054)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Language
    a