Search (159 results, page 1 of 8)

  • type_ss:"x"
  1. Nagy T., I.: Detecting multiword expressions and named entities in natural language texts (2014) 0.06
    0.06443933 = product of:
      0.0859191 = sum of:
        0.0075178547 = weight(_text_:a in 1536) [ClassicSimilarity], result of:
          0.0075178547 = score(doc=1536,freq=22.0), product of:
            0.05083672 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.044089027 = queryNorm
            0.14788237 = fieldWeight in 1536, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1536)
        0.053080566 = weight(_text_:et in 1536) [ClassicSimilarity], result of:
          0.053080566 = score(doc=1536,freq=4.0), product of:
            0.20686594 = queryWeight, product of:
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.044089027 = queryNorm
            0.25659403 = fieldWeight in 1536, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1536)
        0.025320673 = product of:
          0.050641347 = sum of:
            0.050641347 = weight(_text_:al in 1536) [ClassicSimilarity], result of:
              0.050641347 = score(doc=1536,freq=4.0), product of:
                0.20205697 = queryWeight, product of:
                  4.582931 = idf(docFreq=1228, maxDocs=44218)
                  0.044089027 = queryNorm
                0.25062904 = fieldWeight in 1536, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.582931 = idf(docFreq=1228, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1536)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
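    The explain tree above follows Lucene's ClassicSimilarity (TF-IDF) scoring: each matching clause contributes queryWeight x fieldWeight, where queryWeight = idf x queryNorm and fieldWeight = sqrt(termFreq) x idf x fieldNorm, and the clause scores are summed and scaled by the coordination factor. A minimal Python sketch that reproduces the figures of the "et" clause above (all constants taken from the tree):

      import math

      def classic_similarity_term(tf, idf, query_norm, field_norm):
          """One clause's contribution in Lucene ClassicSimilarity:
          score = queryWeight * fieldWeight
                = (idf * queryNorm) * (sqrt(tf) * idf * fieldNorm)."""
          query_weight = idf * query_norm
          field_weight = math.sqrt(tf) * idf * field_norm
          return query_weight * field_weight

      # Values for weight(_text_:et in 1536) from the explain output above.
      score_et = classic_similarity_term(tf=4.0, idf=4.692005,
                                         query_norm=0.044089027,
                                         field_norm=0.02734375)
      print(score_et)                              # ~0.053080566

      # Entry total: sum of the three matching clauses, scaled by coord(3/4).
      total = (0.0075178547 + 0.053080566 + 0.025320673) * 0.75
      print(total)                                 # ~0.0644393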
    
    Abstract
    Multiword expressions (MWEs) are lexical items that can be decomposed into single words and display lexical, syntactic, semantic, pragmatic and/or statistical idiosyncrasy (Sag et al., 2002; Kim, 2008; Calzolari et al., 2002). The proper treatment of multiword expressions such as rock 'n' roll and make a decision is essential for many natural language processing (NLP) applications like information extraction and retrieval, terminology extraction and machine translation, and it is important to identify multiword expressions in context. For example, in machine translation we must know that MWEs form one semantic unit, hence their parts should not be translated separately. For this, multiword expressions should first be identified in the text to be translated. The chief aim of this thesis is to develop machine learning-based approaches for the automatic detection of different types of multiword expressions in English and Hungarian natural language texts. In our investigations, we pay attention to the characteristics of different types of multiword expressions such as nominal compounds, multiword named entities and light verb constructions, and we apply novel methods to identify MWEs in raw texts. In the thesis it will be demonstrated that nominal compounds and multiword named entities may require a similar approach for their automatic detection as they behave in the same way from a linguistic point of view. Furthermore, it will be shown that the automatic detection of light verb constructions can be carried out using two effective machine learning-based approaches.
    In this thesis, we focused on the automatic detection of multiword expressions in natural language texts. On the basis of the main contributions, we can argue that: - Supervised machine learning methods can be successfully applied to the automatic detection of different types of multiword expressions in natural language texts. - Machine learning-based multiword expression detection can be successfully carried out for English as well as for Hungarian. - Our supervised machine learning-based model was successfully applied to the automatic detection of nominal compounds in English raw texts. - We developed a Wikipedia-based dictionary labelling method to automatically detect English nominal compounds. - Prior knowledge of nominal compounds can enhance Named Entity Recognition, while previously identified named entities can assist the nominal compound identification process. - The machine learning-based method can also provide acceptable results when trained on an automatically generated silver standard corpus. - As named entities form one semantic unit, may consist of more than one word and function as nouns, we can treat them in a similar way to nominal compounds. - Our sequence labelling-based tool can be successfully applied to identifying verbal light verb constructions in two typologically different languages, namely English and Hungarian. - Domain adaptation techniques may help diminish the distance between domains in the automatic detection of light verb constructions. - Our syntax-based method can be successfully applied to the full-coverage identification of light verb constructions. As a first step, a data-driven candidate extraction method can be utilized. Afterwards, a machine learning approach that makes use of an extended and rich feature set selects LVCs among the extracted candidates. - When an accurate syntactic parser is available for the domain at hand, full-coverage identification performs better. Otherwise, the sequence labelling method is recommended.
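    The abstract above mentions a sequence-labelling tool for identifying light verb constructions (LVCs). As a purely illustrative sketch, not the thesis' learned model, the snippet below shows the BIO-style token/label representation such a tagger produces, with a toy dictionary lookup standing in for the trained classifier:

      # Toy BIO-style tagging of light verb constructions such as "make a decision".
      # The pattern list and look-ahead window are invented for illustration only.
      LVC_PATTERNS = {("make", "decision"), ("take", "photo"), ("give", "lecture")}

      def tag_lvcs(tokens):
          labels = ["O"] * len(tokens)
          for i, tok in enumerate(tokens):
              for verb, noun in LVC_PATTERNS:
                  if tok.lower() == verb:
                      # look a few tokens ahead for the nominal component
                      for j in range(i + 1, min(i + 4, len(tokens))):
                          if tokens[j].lower().rstrip(".").startswith(noun):
                              labels[i] = "B-LVC"
                              for k in range(i + 1, j + 1):
                                  labels[k] = "I-LVC"
          return list(zip(tokens, labels))

      print(tag_lvcs("We must make a decision today .".split()))
      # [('We','O'), ('must','O'), ('make','B-LVC'), ('a','I-LVC'),
      #  ('decision','I-LVC'), ('today','O'), ('.','O')]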
  2. Hannech, A.: Système de recherche d'information étendue basé sur une projection multi-espaces (2018) 0.06
    0.062216386 = product of:
      0.12443277 = sum of:
        0.0050165504 = weight(_text_:a in 4472) [ClassicSimilarity], result of:
          0.0050165504 = score(doc=4472,freq=30.0), product of:
            0.05083672 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.044089027 = queryNorm
            0.09867966 = fieldWeight in 4472, product of:
              5.477226 = tf(freq=30.0), with freq of:
                30.0 = termFreq=30.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.015625 = fieldNorm(doc=4472)
        0.11941622 = weight(_text_:et in 4472) [ClassicSimilarity], result of:
          0.11941622 = score(doc=4472,freq=62.0), product of:
            0.20686594 = queryWeight, product of:
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.044089027 = queryNorm
            0.57726383 = fieldWeight in 4472, product of:
              7.8740077 = tf(freq=62.0), with freq of:
                62.0 = termFreq=62.0
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.015625 = fieldNorm(doc=4472)
      0.5 = coord(2/4)
    
    Abstract
    Depuis son apparition au début des années 90, le World Wide Web (WWW ou Web) a offert un accès universel aux connaissances et le monde de l'information a été principalement témoin d'une grande révolution (la révolution numérique). Il est devenu rapidement très populaire, ce qui a fait de lui la plus grande et vaste base de données et de connaissances existantes grâce à la quantité et la diversité des données qu'il contient. Cependant, l'augmentation et l'évolution considérables de ces données soulèvent d'importants problèmes pour les utilisateurs notamment pour l'accès aux documents les plus pertinents à leurs requêtes de recherche. Afin de faire face à cette explosion exponentielle du volume de données et faciliter leur accès par les utilisateurs, différents modèles sont proposés par les systèmes de recherche d'information (SRIs) pour la représentation et la recherche des documents web. Les SRIs traditionnels utilisent, pour indexer et récupérer ces documents, des mots-clés simples qui ne sont pas sémantiquement liés. Cela engendre des limites en termes de la pertinence et de la facilité d'exploration des résultats. Pour surmonter ces limites, les techniques existantes enrichissent les documents en intégrant des mots-clés externes provenant de différentes sources. Cependant, ces systèmes souffrent encore de limitations qui sont liées aux techniques d'exploitation de ces sources d'enrichissement. Lorsque les différentes sources sont utilisées de telle sorte qu'elles ne peuvent être distinguées par le système, cela limite la flexibilité des modèles d'exploration qui peuvent être appliqués aux résultats de recherche retournés par ce système. Les utilisateurs se sentent alors perdus devant ces résultats, et se retrouvent dans l'obligation de les filtrer manuellement pour sélectionner l'information pertinente. S'ils veulent aller plus loin, ils doivent reformuler et cibler encore plus leurs requêtes de recherche jusqu'à parvenir aux documents qui répondent le mieux à leurs attentes. De cette façon, même si les systèmes parviennent à retrouver davantage des résultats pertinents, leur présentation reste problématique. Afin de cibler la recherche à des besoins d'information plus spécifiques de l'utilisateur et améliorer la pertinence et l'exploration de ses résultats de recherche, les SRIs avancés adoptent différentes techniques de personnalisation de données qui supposent que la recherche actuelle d'un utilisateur est directement liée à son profil et/ou à ses expériences de navigation/recherche antérieures. Cependant, cette hypothèse ne tient pas dans tous les cas, les besoins de l'utilisateur évoluent au fil du temps et peuvent s'éloigner de ses intérêts antérieurs stockés dans son profil.
    Dans d'autres cas, le profil de l'utilisateur peut être mal exploité pour extraire ou inférer ses nouveaux besoins en information. Ce problème est beaucoup plus accentué avec les requêtes ambigües. Lorsque plusieurs centres d'intérêt auxquels est liée une requête ambiguë sont identifiés dans le profil de l'utilisateur, le système se voit incapable de sélectionner les données pertinentes depuis ce profil pour répondre à la requête. Ceci a un impact direct sur la qualité des résultats fournis à cet utilisateur. Afin de remédier à quelques-unes de ces limitations, nous nous sommes intéressés dans ce cadre de cette thèse de recherche au développement de techniques destinées principalement à l'amélioration de la pertinence des résultats des SRIs actuels et à faciliter l'exploration de grandes collections de documents. Pour ce faire, nous proposons une solution basée sur un nouveau concept d'indexation et de recherche d'information appelé la projection multi-espaces. Cette proposition repose sur l'exploitation de différentes catégories d'information sémantiques et sociales qui permettent d'enrichir l'univers de représentation des documents et des requêtes de recherche en plusieurs dimensions d'interprétations. L'originalité de cette représentation est de pouvoir distinguer entre les différentes interprétations utilisées pour la description et la recherche des documents. Ceci donne une meilleure visibilité sur les résultats retournés et aide à apporter une meilleure flexibilité de recherche et d'exploration, en donnant à l'utilisateur la possibilité de naviguer une ou plusieurs vues de données qui l'intéressent le plus. En outre, les univers multidimensionnels de représentation proposés pour la description des documents et l'interprétation des requêtes de recherche aident à améliorer la pertinence des résultats de l'utilisateur en offrant une diversité de recherche/exploration qui aide à répondre à ses différents besoins et à ceux des autres différents utilisateurs. Cette étude exploite différents aspects liés à la recherche personnalisée et vise à résoudre les problèmes engendrés par l'évolution des besoins en information de l'utilisateur. Ainsi, lorsque le profil de cet utilisateur est utilisé par notre système, une technique est proposée et employée pour identifier les intérêts les plus représentatifs de ses besoins actuels dans son profil. Cette technique se base sur la combinaison de trois facteurs influents, notamment le facteur contextuel, fréquentiel et temporel des données. La capacité des utilisateurs à interagir, à échanger des idées et d'opinions, et à former des réseaux sociaux sur le Web, a amené les systèmes à s'intéresser aux types d'interactions de ces utilisateurs, au niveau d'interaction entre eux ainsi qu'à leurs rôles sociaux dans le système. Ces informations sociales sont abordées et intégrées dans ce travail de recherche. L'impact et la manière de leur intégration dans le processus de RI sont étudiés pour améliorer la pertinence des résultats.
    Since its appearance in the early 1990s, the World Wide Web (WWW or Web) has provided universal access to knowledge, and the world of information has witnessed a great revolution (the digital revolution). It quickly became very popular, making it the largest and most comprehensive database and knowledge base in existence thanks to the amount and diversity of data it contains. However, the considerable growth and evolution of these data raise important problems for users, in particular for accessing the documents most relevant to their search queries. In order to cope with this exponential explosion of data volume and to facilitate access for users, information retrieval systems (IRSs) offer various models for the representation and retrieval of web documents. To index and retrieve these documents, traditional IRSs use simple keywords that are not semantically linked, which limits both the relevance of the results and the ease of exploring them. To overcome these limitations, existing techniques enrich documents by integrating external keywords from different sources. However, these systems still suffer from limitations related to the way these enrichment sources are exploited. When the different sources are used in such a way that the system cannot distinguish between them, this limits the flexibility of the exploration models that can be applied to the results the system returns. Users then feel lost when facing these results, and find themselves forced to filter them manually to select the relevant information. If they want to go further, they must reformulate and narrow their search queries even more until they reach the documents that best meet their expectations. In this way, even if the systems manage to find more relevant results, the presentation of those results remains problematic. In order to target the search to the user's more specific information needs and to improve the relevance and exploration of the search results, advanced IRSs adopt various data personalization techniques, which assume that a user's current search is directly related to his profile and/or his previous browsing/search experiences.
    However, this assumption does not hold in all cases: the user's needs evolve over time and may move away from the previous interests stored in his profile. In other cases, the user's profile may be poorly exploited when extracting or inferring his new information needs. This problem is much more pronounced with ambiguous queries. When several interests related to an ambiguous query are identified in the user's profile, the system is unable to select the relevant data from that profile to answer the query. This has a direct impact on the quality of the results provided to this user. In order to overcome some of these limitations, this research thesis is concerned with the development of techniques aimed mainly at improving the relevance of the results of current IRSs and at facilitating the exploration of large collections of documents. To do this, we propose a solution based on a new concept and model of indexing and information retrieval called multi-space projection. This proposal rests on the exploitation of different categories of semantic and social information, which enrich the representation of documents and search queries with several dimensions of interpretation. The originality of this representation is its ability to distinguish between the different interpretations used for describing and searching documents. This gives better visibility of the returned results and provides greater flexibility in search and exploration, giving the user the ability to navigate the view or views of the data that interest him most. In addition, the proposed multidimensional representation universes for document description and query interpretation help to improve the relevance of the user's results by offering a diversity of search and exploration that helps meet his different needs and those of other users. This study exploits different aspects related to personalized search and aims to solve the problems caused by the evolution of the user's information needs. Thus, when the user's profile is used by our system, a technique is proposed and employed to identify the interests in that profile that best represent his current needs. This technique is based on the combination of three influential factors, namely the contextual, frequency and temporal factors of the data. The ability of users to interact, to exchange ideas and opinions, and to form social networks on the Web has led systems to take an interest in the types of interactions between these users, in their level of interaction, and in their social roles in the system. This social information is addressed and integrated into this research work. Its impact and the manner of its integration into the IR process are studied in order to improve the relevance of the results.
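    As a loose illustration of the multi-space projection idea described above, indexing a document under several separately queryable interpretation dimensions, the following sketch uses invented space and field names and is not the author's actual model:

      from collections import defaultdict

      class MultiSpaceIndex:
          """Toy index: space -> term -> set of document ids, so each
          interpretation (keyword, semantic, social) stays distinguishable."""
          def __init__(self):
              self.spaces = defaultdict(lambda: defaultdict(set))

          def add(self, doc_id, projections):
              for space, terms in projections.items():
                  for term in terms:
                      self.spaces[space][term].add(doc_id)

          def search(self, term, spaces=None):
              spaces = spaces or list(self.spaces)
              return {s: sorted(self.spaces[s].get(term, set())) for s in spaces}

      idx = MultiSpaceIndex()
      idx.add("doc1", {"keyword": {"jaguar"}, "semantic": {"animal"}, "social": {"wildlife-tag"}})
      idx.add("doc2", {"keyword": {"jaguar"}, "semantic": {"car"}, "social": {"automotive-tag"}})
      print(idx.search("jaguar"))                      # hits per interpretation space
      print(idx.search("animal", spaces=["semantic"])) # query a single view only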
  3. Pfäffli, W.: La qualité des résultats de recherche dans le cadre du projet MACS (Multilingual Access to Subjects) : vers un élargissement des ensembles de résultats de recherche (2009) 0.06
    0.060479235 = product of:
      0.12095847 = sum of:
        0.0022667185 = weight(_text_:a in 2818) [ClassicSimilarity], result of:
          0.0022667185 = score(doc=2818,freq=2.0), product of:
            0.05083672 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.044089027 = queryNorm
            0.044588212 = fieldWeight in 2818, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2818)
        0.11869175 = weight(_text_:et in 2818) [ClassicSimilarity], result of:
          0.11869175 = score(doc=2818,freq=20.0), product of:
            0.20686594 = queryWeight, product of:
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.044089027 = queryNorm
            0.5737617 = fieldWeight in 2818, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2818)
      0.5 = coord(2/4)
    
    Abstract
    This study addresses the quality of the search results obtained via the links established within the MACS project (Multilingual Access to Subjects), considering in particular the user's perspective. It seeks to show that these links, as currently defined, cannot by themselves guarantee satisfactory results for a user and that they must be complemented by other measures. It consists of three main parts: - the first part presents the general context: after a brief history, the basic principles of the MACS project and the difficulties encountered when evaluating search results are explained. The question of the differing perspectives of the indexer and the user is developed in particular detail. - the second part presents the tests carried out on titles held in common by several libraries and lists the various factors that weaken the quality of the results and, in particular, prevent the user from finding relevant titles. - the third part contains some avenues that could remedy the biases identified in the second part and considers the characteristics of a search interface that would improve multilingual subject searching.
    Conclusion The very first starting point of this study was the principle of validating links through the coherence of the results. We have seen that this principle plays a very important role in the general problem of interoperability between documentary systems, even though its concrete implementation raises numerous practical questions to which no study has so far offered a detailed answer that could serve as the beginning of a methodology. Above all, the study of concrete examples has shown that we are operating in a context influenced by many factors that are largely difficult or impossible to predict: obtaining two clearly defined sets of relevant titles, while having to take into account the cultural context of the collections being compared, the structural differences between indexing languages, indexing policies, the subjectivity of the indexers and, finally, the parameters of the search engines, is a tall order!
    The examination of the common titles showed us that, in any case, some relevant titles would escape a query performed via the link. It therefore seems more important to us that efforts concentrate on the means of actually providing access to potentially relevant documents rather than on defining the quality of the links more precisely on the basis of the results. A first avenue is recourse to the hierarchical relations of the indexing languages, but we have seen that they cannot provide a solution in every case. Recourse to a classification, to an ontology or to natural language processing techniques are other avenues to explore, which may avoid having to multiply the links and thereby complicate their management even further. Along the way we encountered, without being able to address them, many other questions, each of which is an additional challenge, such as the problem of access to unindexed titles or the problem of the evolution of indexing languages and hence of keeping the links up to date. We have also left aside the technical questions of the interface's access to the various catalogues and of the possible ways of presenting the search results themselves (per library queried or merged into a single set, ranking). Much therefore remains to be done before the day when a user can enter a search term into a user-friendly interface that opens up simple but complete subject access to the resources of the libraries of Europe, and then of the world!
  4. Boutin, E.: La recherche d'information sur Internet au prisme de la théorie des facettes (2008) 0.05
    0.04965232 = product of:
      0.19860928 = sum of:
        0.19860928 = weight(_text_:et in 2800) [ClassicSimilarity], result of:
          0.19860928 = score(doc=2800,freq=14.0), product of:
            0.20686594 = queryWeight, product of:
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.044089027 = queryNorm
            0.96008694 = fieldWeight in 2800, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2800)
      0.25 = coord(1/4)
    
    Abstract
    Opportunities are rare for a researcher to take a reflective look at his own scientific output. The purpose of this preamble is precisely to set down the bags and look back at the road travelled. We propose to analyse this path through three prisms: - positioning and evolution of research in Information and Communication Sciences (SIC) - coherence of the career path and research dynamics - scientific collaborations arising from this research. Each of these prisms offers a possible reading grid and helps to illuminate the present document.
    Content
    Habilitation à Diriger des Recherches, discipline: Information and Communication Sciences, Laboratoire I3M. Presented and defended publicly on 9 October 2008.
  5. Schulz, T.: Konzeption und prototypische Entwicklung eines Thesaurus für IT-Konzepte an Hochschulen (2021) 0.04
    0.039598607 = product of:
      0.07919721 = sum of:
        0.053619467 = weight(_text_:et in 429) [ClassicSimilarity], result of:
          0.053619467 = score(doc=429,freq=2.0), product of:
            0.20686594 = queryWeight, product of:
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.044089027 = queryNorm
            0.2591991 = fieldWeight in 429, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=429)
        0.025577743 = product of:
          0.051155485 = sum of:
            0.051155485 = weight(_text_:al in 429) [ClassicSimilarity], result of:
              0.051155485 = score(doc=429,freq=2.0), product of:
                0.20205697 = queryWeight, product of:
                  4.582931 = idf(docFreq=1228, maxDocs=44218)
                  0.044089027 = queryNorm
                0.25317356 = fieldWeight in 429, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.582931 = idf(docFreq=1228, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=429)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Universities currently have a strong interest in using digitalisation effectively and efficiently for processes in their governance areas. IT governance is at the centre of these higher-education policy considerations and comprises "the internal steering and coordination of decision-making processes with regard to IT governance and digitalisation measures" (Christmann-Budian et al. 2018). Strategically, pooling competences across the German higher-education landscape can help meet the growing demands on IT governance. In line with this approach, the universities joined together in the ZDT are currently carrying out the project "IT-Konzepte - Portfolio gemeinsamer Vorlagen und Muster". The project addresses the problem by pooling competences and enabling all universities to draft and adopt IT concepts. To this end, a portfolio of templates and samples is being compiled and referenced as a resource pool, so that the variety of publishers remains traceable (Meister 2020). In order to search this resource pool, which constitutes a body of knowledge (BoK), efficiently, a sensible structure is indispensable. The goal of this bachelor's thesis is therefore to analyse internal university documents with the help of natural language processing (NLP) and, building on that analysis, to develop a thesaurus prototype for IT concepts. This prototype is then to be serialised and automated so that it can be kept continuously up to date. The thesis addresses the question of how a thesaurus can be created in a technologically sustainable, systematic and consistent way, so that the process can later serve as a basis for further subject areas.
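    As a rough illustration of the kind of pipeline sketched in this abstract, candidate terms extracted from internal documents and serialised into a thesaurus structure, the snippet below uses naive frequency counting and a JSON serialisation; the real thesis relies on proper NLP tooling, and all names here are placeholders:

      import json, re
      from collections import Counter

      STOPWORDS = {"und", "die", "der", "das", "the", "of", "and"}

      def candidate_terms(text, top_n=5):
          # crude tokenisation and frequency count as a stand-in for NLP term extraction
          tokens = re.findall(r"[a-zäöüß-]{4,}", text.lower())
          counts = Counter(t for t in tokens if t not in STOPWORDS)
          return [term for term, _ in counts.most_common(top_n)]

      def build_thesaurus(docs):
          # minimal, SKOS-like structure; broader/related left empty in this sketch
          return {"concepts": [{"prefLabel": t, "broader": None, "related": []}
                               for t in candidate_terms(" ".join(docs))]}

      docs = ["IT-Governance und IT-Steuerung an Hochschulen",
              "Digitalisierung der IT-Steuerung: Konzepte und Prozesse"]
      print(json.dumps(build_thesaurus(docs), ensure_ascii=False, indent=2))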
  6. Bickmann, H.-J.: Synonymie und Sprachverwendung : Verfahren zur Ermittlung von Synonymenklassen als kontextbeschränkten Äquivalenzklassen (1978) 0.03
    0.030331751 = product of:
      0.121327005 = sum of:
        0.121327005 = weight(_text_:et in 5890) [ClassicSimilarity], result of:
          0.121327005 = score(doc=5890,freq=4.0), product of:
            0.20686594 = queryWeight, product of:
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.044089027 = queryNorm
            0.58650064 = fieldWeight in 5890, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.0625 = fieldNorm(doc=5890)
      0.25 = coord(1/4)
    
    Classification
    ET 475
    RVK
    ET 475
  7. Farazi, M.: Faceted lightweight ontologies : a formalization and some experiments (2010) 0.03
    0.027491506 = product of:
      0.054983012 = sum of:
        0.043765664 = product of:
          0.17506266 = sum of:
            0.17506266 = weight(_text_:3a in 4997) [ClassicSimilarity], result of:
              0.17506266 = score(doc=4997,freq=2.0), product of:
                0.37378725 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.044089027 = queryNorm
                0.46834838 = fieldWeight in 4997, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4997)
          0.25 = coord(1/4)
        0.011217348 = weight(_text_:a in 4997) [ClassicSimilarity], result of:
          0.011217348 = score(doc=4997,freq=24.0), product of:
            0.05083672 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.044089027 = queryNorm
            0.22065444 = fieldWeight in 4997, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4997)
      0.5 = coord(2/4)
    
    Abstract
    While classifications are heavily used to categorize web content, the evolution of the web foresees a more formal structure - ontology - which can serve this purpose. Ontologies are core artifacts of the Semantic Web which enable machines to use inference rules to conduct automated reasoning on data. Lightweight ontologies bridge the gap between classifications and ontologies. A lightweight ontology (LO) is an ontology representing a backbone taxonomy where the concept of the child node is more specific than the concept of the parent node. Formal lightweight ontologies can be generated from their informal ones. The key applications of formal lightweight ontologies are document classification, semantic search, and data integration. However, these applications suffer from the following problems: the disambiguation accuracy of the state of the art NLP tools used in generating formal lightweight ontologies from their informal ones; the lack of background knowledge needed for the formal lightweight ontologies; and the limitation of ontology reuse. In this dissertation, we propose a novel solution to these problems in formal lightweight ontologies; namely, faceted lightweight ontology (FLO). FLO is a lightweight ontology in which terms, present in each node label, and their concepts, are available in the background knowledge (BK), which is organized as a set of facets. A facet can be defined as a distinctive property of the groups of concepts that can help in differentiating one group from another. Background knowledge can be defined as a subset of a knowledge base, such as WordNet, and often represents a specific domain.
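    A lightweight ontology, as defined above, is a backbone taxonomy in which each child concept is more specific than its parent. The following toy sketch models only that backbone; the faceted background knowledge described in the dissertation is not represented, and the labels are invented:

      from dataclasses import dataclass, field
      from typing import List, Optional

      @dataclass
      class Node:
          """One node of a backbone taxonomy (child is more specific than parent)."""
          label: str
          parent: Optional["Node"] = None
          children: List["Node"] = field(default_factory=list)

          def add_child(self, label):
              child = Node(label, parent=self)
              self.children.append(child)
              return child

          def ancestors(self):
              node, out = self.parent, []
              while node:
                  out.append(node.label)
                  node = node.parent
              return out

      root = Node("science")
      cs = root.add_child("computer science")
      ir = cs.add_child("information retrieval")
      print(ir.ancestors())   # ['computer science', 'science']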
    Content
    PhD Dissertation at International Doctorate School in Information and Communication Technology. Vgl.: https://core.ac.uk/download/pdf/150083013.pdf.
  8. Verwer, K.: Freiheit und Verantwortung bei Hans Jonas (2011) 0.03
    0.026259398 = product of:
      0.10503759 = sum of:
        0.10503759 = product of:
          0.42015037 = sum of:
            0.42015037 = weight(_text_:3a in 973) [ClassicSimilarity], result of:
              0.42015037 = score(doc=973,freq=2.0), product of:
                0.37378725 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.044089027 = queryNorm
                1.1240361 = fieldWeight in 973, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.09375 = fieldNorm(doc=973)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
    
    Content
    Vgl.: http://creativechoice.org/doc/HansJonas.pdf.
  9. Piros, A.: Az ETO-jelzetek automatikus interpretálásának és elemzésének kérdései (2018) 0.03
    0.025848763 = product of:
      0.051697526 = sum of:
        0.043765664 = product of:
          0.17506266 = sum of:
            0.17506266 = weight(_text_:3a in 855) [ClassicSimilarity], result of:
              0.17506266 = score(doc=855,freq=2.0), product of:
                0.37378725 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.044089027 = queryNorm
                0.46834838 = fieldWeight in 855, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=855)
          0.25 = coord(1/4)
        0.007931862 = weight(_text_:a in 855) [ClassicSimilarity], result of:
          0.007931862 = score(doc=855,freq=12.0), product of:
            0.05083672 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.044089027 = queryNorm
            0.15602624 = fieldWeight in 855, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=855)
      0.5 = coord(2/4)
    
    Abstract
    Converting UDC numbers manually into a complex format such as the one mentioned above is an unrealistic expectation; supporting the building of these representations, as far as possible automatically, is a well-founded requirement. An additional advantage of this approach is that existing records could also be processed and converted. In my dissertation I also want to prove that it is possible to design and implement an algorithm that can convert pre-coordinated UDC numbers into the introduced format by identifying all of their elements and revealing their entire syntactic structure. I will discuss a feasible way of building a UDC-specific XML schema for describing the most detailed and complicated UDC numbers (containing not only the common auxiliary signs and numbers, but also the different types of special auxiliaries). The schema definition is available online at: http://piros.udc-interpreter.hu#xsd. The primary goal of my research is to prove that it is possible to support the building, retrieval and analysis of UDC numbers without compromise, by capturing the whole syntactic richness of the scheme and storing UDC numbers in a way that preserves the meaning of pre-coordination. The research has also included the implementation of software that parses UDC classmarks, intended to prove that such a solution can be applied automatically, without any additional effort, and even retrospectively to existing collections.
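    To give a feel for what splitting a pre-coordinated UDC classmark into its elements involves, here is a deliberately simplified, regex-based sketch; the dissertation's parser covers the full syntax (special auxiliaries, nesting, ordering), which this toy version does not:

      import re

      CONNECTORS = r'(\+|::|:)'                        # a few of UDC's connecting signs
      AUXILIARIES = r'(\(\d[^)]*\)|"[^"]*"|=[\d.]+)'   # place, time, language auxiliaries

      def parse_udc(classmark):
          elements = []
          for part in (p for p in re.split(CONNECTORS, classmark) if p):
              if re.fullmatch(CONNECTORS, part):
                  elements.append(("connector", part))
              else:
                  auxiliaries = re.findall(AUXILIARIES, part)
                  main_number = re.sub(AUXILIARIES, "", part)
                  elements.append(("number", main_number, auxiliaries))
          return elements

      print(parse_udc('94(439)"1867/1918":323.1'))
      # [('number', '94', ['(439)', '"1867/1918"']),
      #  ('connector', ':'),
      #  ('number', '323.1', [])]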
    Content
    Vgl. auch: New automatic interpreter for complex UDC numbers. Unter: <https://udcc.org/files/AttilaPiros_EC_36-37_2014-2015.pdf>
  10. Schneider, A.: Die Verzeichnung und sachliche Erschließung der Belletristik in Kaysers Bücherlexikon und im Schlagwortkatalog Georg/Ost (1980) 0.03
    0.025440529 = product of:
      0.050881058 = sum of:
        0.009066874 = weight(_text_:a in 5309) [ClassicSimilarity], result of:
          0.009066874 = score(doc=5309,freq=2.0), product of:
            0.05083672 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.044089027 = queryNorm
            0.17835285 = fieldWeight in 5309, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.109375 = fieldNorm(doc=5309)
        0.041814182 = product of:
          0.083628364 = sum of:
            0.083628364 = weight(_text_:22 in 5309) [ClassicSimilarity], result of:
              0.083628364 = score(doc=5309,freq=2.0), product of:
                0.15439226 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044089027 = queryNorm
                0.5416616 = fieldWeight in 5309, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=5309)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    5. 8.2006 13:07:22
  11. Mönnich, M.W.: Personalcomputer in Bibliotheken : Arbeitsplätze für Benutzer (1991) 0.02
    0.02445111 = product of:
      0.04890222 = sum of:
        0.0054953555 = weight(_text_:a in 5389) [ClassicSimilarity], result of:
          0.0054953555 = score(doc=5389,freq=4.0), product of:
            0.05083672 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.044089027 = queryNorm
            0.10809815 = fieldWeight in 5389, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=5389)
        0.043406866 = product of:
          0.08681373 = sum of:
            0.08681373 = weight(_text_:al in 5389) [ClassicSimilarity], result of:
              0.08681373 = score(doc=5389,freq=4.0), product of:
                0.20205697 = queryWeight, product of:
                  4.582931 = idf(docFreq=1228, maxDocs=44218)
                  0.044089027 = queryNorm
                0.42964977 = fieldWeight in 5389, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.582931 = idf(docFreq=1228, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5389)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    ASB
    Al
    Classification
    Bib A 589 Kleincomputer
    Al
    SBB
    Bib A 589 Kleincomputer
  12. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.02
    0.023001622 = product of:
      0.046003245 = sum of:
        0.035012532 = product of:
          0.14005013 = sum of:
            0.14005013 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
              0.14005013 = score(doc=701,freq=2.0), product of:
                0.37378725 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.044089027 = queryNorm
                0.3746787 = fieldWeight in 701, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.25 = coord(1/4)
        0.010990711 = weight(_text_:a in 701) [ClassicSimilarity], result of:
          0.010990711 = score(doc=701,freq=36.0), product of:
            0.05083672 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.044089027 = queryNorm
            0.2161963 = fieldWeight in 701, product of:
              6.0 = tf(freq=36.0), with freq of:
                36.0 = termFreq=36.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
      0.5 = coord(2/4)
    
    Abstract
    With the explosion of possibilities for ubiquitous content production, the information overload problem has reached a level of complexity that can no longer be managed by traditional modelling approaches. Due to their purely syntactical nature, traditional information retrieval approaches have not succeeded in treating content itself (i.e. its meaning, and not merely its representation). This leads to a very low usefulness of the results of a retrieval process for a user's task at hand. In the last ten years ontologies have emerged from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, such that the retrieval process can be driven by the meaning of the content. However, the very ambiguous nature of the retrieval process, in which a user, unfamiliar with the underlying repository and/or query syntax, only approximates his information need in a query, implies the necessity of including the user in the retrieval process more actively in order to close the gap between the meaning of the content and the meaning of the user's query (i.e. his information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with the user in order to conceptually interpret the meaning of his query, while the underlying domain ontology drives the conceptualisation process. In that way the retrieval process evolves from a query evaluation process into a highly interactive cooperation between the user and the retrieval system, in which the system tries to anticipate the user's information need and to deliver the relevant content proactively. Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure strongly influenced by the user's preferences. This cooperation process is realized as the so-called Librarian Agent Query Refinement Process. In order to clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics emerges automatically from the results. Our evaluation studies have shown that the ability to conceptualise a user's information need in the right manner and to interpret the retrieval results accordingly is a key issue in realising much more meaningful information retrieval systems.
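    As a rough illustration of ontology-assisted query refinement of the kind the abstract describes, the system offering conceptual interpretations of an ambiguous query rather than guessing one, the sketch below uses a made-up two-sense mini-ontology; it is not the Librarian Agent implementation:

      # Invented mini-ontology: query term -> sense -> narrower concepts.
      ONTOLOGY = {
          "python": {
              "Programming language": ["CPython", "Python 3 syntax"],
              "Snake": ["Ball python", "Reticulated python"],
          }
      }

      def refinement_suggestions(query_term):
          """Offer the narrower concepts of each matching sense as refinements."""
          senses = ONTOLOGY.get(query_term.lower(), {})
          return [f"{query_term} ({sense}): e.g. {', '.join(narrower)}"
                  for sense, narrower in senses.items()]

      for suggestion in refinement_suggestions("python"):
          print(suggestion)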
    Content
    Vgl.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  13. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.02
    0.021602262 = product of:
      0.043204524 = sum of:
        0.035012532 = product of:
          0.14005013 = sum of:
            0.14005013 = weight(_text_:3a in 5820) [ClassicSimilarity], result of:
              0.14005013 = score(doc=5820,freq=2.0), product of:
                0.37378725 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.044089027 = queryNorm
                0.3746787 = fieldWeight in 5820, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5820)
          0.25 = coord(1/4)
        0.0081919925 = weight(_text_:a in 5820) [ClassicSimilarity], result of:
          0.0081919925 = score(doc=5820,freq=20.0), product of:
            0.05083672 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.044089027 = queryNorm
            0.16114321 = fieldWeight in 5820, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=5820)
      0.5 = coord(2/4)
    
    Abstract
    The successes of information retrieval (IR) in recent decades were built upon bag-of-words representations. Effective as it is, bag-of-words is only a shallow text understanding; there is a limited amount of information for document ranking in the word space. This dissertation goes beyond words and builds knowledge based text representations, which embed the external and carefully curated information from knowledge bases, and provide richer and structured evidence for more advanced information retrieval systems. This thesis research first builds query representations with entities associated with the query. Entities' descriptions are used by query expansion techniques that enrich the query with explanation terms. Then we present a general framework that represents a query with entities that appear in the query, are retrieved by the query, or frequently show up in the top retrieved documents. A latent space model is developed to jointly learn the connections from query to entities and the ranking of documents, modeling the external evidence from knowledge bases and internal ranking features cooperatively. To further improve the quality of relevant entities, a defining factor of our query representations, we introduce learning to rank to entity search and retrieve better entities from knowledge bases. In the document representation part, this thesis research also moves one step forward with a bag-of-entities model, in which documents are represented by their automatic entity annotations, and the ranking is performed in the entity space.
    This proposal includes plans to improve the quality of relevant entities with a co-learning framework that learns from both entity labels and document labels. We also plan to develop a hybrid ranking system that combines word-based and entity-based representations while taking their uncertainties into account. Finally, we plan to enrich the text representations with connections between entities. We propose several ways to infer entity graph representations for texts and to rank documents using these structured representations. This dissertation overcomes the limitations of word-based representations with external and carefully curated information from knowledge bases. We believe this thesis research is a solid start towards a new generation of intelligent, semantic, and structured information retrieval.
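    A bag-of-entities representation, as mentioned above, ranks documents by matches in entity space rather than word space. The following minimal sketch with invented entity annotations illustrates the idea:

      from collections import Counter

      def entity_score(query_entities, doc_entities):
          """Dot product of entity-count vectors (query vs. document)."""
          q, d = Counter(query_entities), Counter(doc_entities)
          return sum(q[e] * d[e] for e in q)

      docs = {
          "d1": ["Barack_Obama", "United_States", "President_of_the_United_States"],
          "d2": ["Obama_(Japan)", "Japan"],
      }
      query = ["Barack_Obama", "President_of_the_United_States"]
      ranking = sorted(docs, key=lambda doc_id: entity_score(query, docs[doc_id]),
                       reverse=True)
      print(ranking)   # ['d1', 'd2'] - the entity space separates the two "Obama" senses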
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Vgl.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  14. Gordon, T.J.; Helmer-Hirschberg, O.: Report on a long-range forecasting study (1964) 0.02
    0.02055905 = product of:
      0.0411181 = sum of:
        0.007327141 = weight(_text_:a in 4204) [ClassicSimilarity], result of:
          0.007327141 = score(doc=4204,freq=4.0), product of:
            0.05083672 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.044089027 = queryNorm
            0.14413087 = fieldWeight in 4204, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=4204)
        0.03379096 = product of:
          0.06758192 = sum of:
            0.06758192 = weight(_text_:22 in 4204) [ClassicSimilarity], result of:
              0.06758192 = score(doc=4204,freq=4.0), product of:
                0.15439226 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044089027 = queryNorm
                0.4377287 = fieldWeight in 4204, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4204)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Description of an experimental trend-predicting exercise covering a time period as far as 50 years into the future. The Delphi technique is used in soliciting the opinions of experts in six areas: scientific breakthroughs, population growth, automation, space progress, probability and prevention of war, and future weapon systems. Possible objections to the approach are also discussed.
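    Delphi-style studies aggregate the experts' estimates for each questionnaire round and feed summary statistics back to the panel for the next round. A minimal sketch of such an aggregation step, with invented numbers (not taken from the 1964 RAND study):

      from statistics import median, quantiles

      # Hypothetical expert estimates (years) for one forecast item.
      estimates = {"controlled fusion power": [1985, 1990, 2000, 2010, 1995]}

      for event, years in estimates.items():
          q1, _, q3 = quantiles(years, n=4)   # quartiles of the panel's answers
          print(f"{event}: median {median(years):.0f}, "
                f"interquartile range {q1:.0f}-{q3:.0f}")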
    Date
    22. 6.2018 13:24:08
    22. 6.2018 13:54:52
  15. Thielemann, A.: Sacherschließung für die Kunstgeschichte : Möglichkeiten und Grenzen von DDC 700: The Arts (2007) 0.01
    0.014537444 = product of:
      0.029074889 = sum of:
        0.0051810704 = weight(_text_:a in 1409) [ClassicSimilarity], result of:
          0.0051810704 = score(doc=1409,freq=2.0), product of:
            0.05083672 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.044089027 = queryNorm
            0.10191591 = fieldWeight in 1409, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=1409)
        0.023893818 = product of:
          0.047787637 = sum of:
            0.047787637 = weight(_text_:22 in 1409) [ClassicSimilarity], result of:
              0.047787637 = score(doc=1409,freq=2.0), product of:
                0.15439226 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044089027 = queryNorm
                0.30952093 = fieldWeight in 1409, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1409)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Following the publication of a German translation of Dewey Decimal Classification 22 in October 2005 and its use for subject indexing in the Deutsche Nationalbibliographie since January 2006, the question arises, from the perspective of German art-historical research libraries, of a possible use of the DDC and of its general suitability for the subject indexing of art-historical publications. This question is discussed against the background of the existing library structures for art history and with a view to the subject-specific particularities, the research methodology and the publication traditions of this discipline.
  16. Schmidt, S.: Kritische Untersuchnung der Neubearbeitungen unter dem Gesichtspunkt der Eignung beider Systeme als Grundlage für eine Normklassifikation in Öffentlichen Bibliotheken : ¬Die Klassifikationen der Stadtbibliotheken Duisburg und Hannover (1974) 0.01
    0.012788871 = product of:
      0.051155485 = sum of:
        0.051155485 = product of:
          0.10231097 = sum of:
            0.10231097 = weight(_text_:al in 5257) [ClassicSimilarity], result of:
              0.10231097 = score(doc=5257,freq=2.0), product of:
                0.20205697 = queryWeight, product of:
                  4.582931 = idf(docFreq=1228, maxDocs=44218)
                  0.044089027 = queryNorm
                0.5063471 = fieldWeight in 5257, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.582931 = idf(docFreq=1228, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5257)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Signature
    Al 12b Sys -Dui-
  17. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.01
    0.012325386 = product of:
      0.024650771 = sum of:
        0.0067304084 = weight(_text_:a in 563) [ClassicSimilarity], result of:
          0.0067304084 = score(doc=563,freq=6.0), product of:
            0.05083672 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.044089027 = queryNorm
            0.13239266 = fieldWeight in 563, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.017920362 = product of:
          0.035840724 = sum of:
            0.035840724 = weight(_text_:22 in 563) [ClassicSimilarity], result of:
              0.035840724 = score(doc=563,freq=2.0), product of:
                0.15439226 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044089027 = queryNorm
                0.23214069 = fieldWeight in 563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=563)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    In this thesis we propose three new word association measures for multi-word term extraction. We combine these association measures with LocalMaxs algorithm in our extraction model and compare the results of different multi-word term extraction methods. Our approach is language and domain independent and requires no training data. It can be applied to such tasks as text summarization, information retrieval, and document classification. We further explore the potential of using multi-word terms as an effective representation for general web-page summarization. We extract multi-word terms from human written summaries in a large collection of web-pages, and generate the summaries by aligning document words with these multi-word terms. Our system applies machine translation technology to learn the aligning process from a training set and focuses on selecting high quality multi-word terms from human written summaries to generate suitable results for web-page summarization.
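    To illustrate the general idea of association-measure-based multi-word term extraction (not the three measures proposed in the thesis), the sketch below scores bigrams with the Dice coefficient and keeps only candidates that out-score overlapping alternatives, a rough, bigram-only nod to the LocalMaxs filtering step:

      from collections import Counter

      text = ("information retrieval systems support information retrieval "
              "and text summarization for web page summarization").split()

      unigrams = Counter(text)
      bigrams = Counter(zip(text, text[1:]))

      def dice(pair):
          w1, w2 = pair
          return 2 * bigrams[pair] / (unigrams[w1] + unigrams[w2])

      # Score only bigrams seen more than once, to filter out one-off noise.
      scores = {pair: dice(pair) for pair in bigrams if bigrams[pair] > 1}

      # Keep a candidate only if no overlapping candidate scores higher
      # (a crude stand-in for LocalMaxs-style local-maximum filtering).
      terms = [pair for pair, s in scores.items()
               if all(s >= other_s for other, other_s in scores.items()
                      if other != pair and set(other) & set(pair))]
      print(terms)   # [('information', 'retrieval')]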
    Content
    A Thesis presented to The University of Guelph In partial fulfilment of requirements for the degree of Master of Science in Computer Science. Vgl. unter: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10. 1.2013 19:22:47
  18. Bertram, J.: Informationen verzweifelt gesucht : Enterprise Search in österreichischen Großunternehmen (2011) 0.01
    0.01217876 = product of:
      0.02435752 = sum of:
        0.003238169 = weight(_text_:a in 2657) [ClassicSimilarity], result of:
          0.003238169 = score(doc=2657,freq=2.0), product of:
            0.05083672 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.044089027 = queryNorm
            0.06369744 = fieldWeight in 2657, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2657)
        0.02111935 = product of:
          0.0422387 = sum of:
            0.0422387 = weight(_text_:22 in 2657) [ClassicSimilarity], result of:
              0.0422387 = score(doc=2657,freq=4.0), product of:
                0.15439226 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044089027 = queryNorm
                0.27358043 = fieldWeight in 2657, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2657)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The study investigates the status quo of enterprise-wide search in large Austrian companies and examines the factors that influence it. From the analysis of the current state, the need for enterprise search software is derived and the conditions for its successful introduction are outlined. The study is based on an online survey of 469 large Austrian companies conducted in 2009 (response rate 22 %) and on subsequent guided interviews with twelve participants of the online survey. The theoretical part situates the work in the context of information and knowledge management. The focus is on the enterprise search approach, its distinction from searching the internet, and its range of functions. The empirical part first shows how the companies organise their information and which problems arise in doing so. This is followed by an analysis of the status quo of information search within the companies. Finally, the awareness and use of enterprise search software in the target group are examined, and the conditions necessary for introducing such software are identified. The respondents see deficits above all with regard to company-wide search and the search for people with specific expertise, which reveals gaps in knowledge management. 29 % of the respondents to the online survey also state that poor information situations occasionally to frequently lead to wrong decisions in their companies. Enterprise search software is in use in 17 % of the companies that took part in the online survey. The changes brought about by enterprise search software are judged positively overall. All in all, the results show that enterprise search strategies can only succeed if they are embedded in comprehensive information and knowledge management measures.
    Date
    22. 1.2016 20:40:31
    Location
    A
  19. Stünkel, M.: Neuere Methoden der inhaltlichen Erschließung schöner Literatur in öffentlichen Bibliotheken (1986) 0.01
    0.011946909 = product of:
      0.047787637 = sum of:
        0.047787637 = product of:
          0.09557527 = sum of:
            0.09557527 = weight(_text_:22 in 5815) [ClassicSimilarity], result of:
              0.09557527 = score(doc=5815,freq=2.0), product of:
                0.15439226 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044089027 = queryNorm
                0.61904186 = fieldWeight in 5815, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=5815)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    4. 8.2006 21:35:22
  20. Makewita, S.M.: Investigating the generic information-seeking function of organisational decision-makers : perspectives on improving organisational information systems (2002) 0.01
    0.011087202 = product of:
      0.022174403 = sum of:
        0.0072407667 = weight(_text_:a in 642) [ClassicSimilarity], result of:
          0.0072407667 = score(doc=642,freq=10.0), product of:
            0.05083672 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.044089027 = queryNorm
            0.14243183 = fieldWeight in 642, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=642)
        0.014933636 = product of:
          0.029867273 = sum of:
            0.029867273 = weight(_text_:22 in 642) [ClassicSimilarity], result of:
              0.029867273 = score(doc=642,freq=2.0), product of:
                0.15439226 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044089027 = queryNorm
                0.19345059 = fieldWeight in 642, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=642)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The past decade has seen the emergence of a new paradigm in the corporate world where organisations emphasised connectivity as a means of exposing decision-makers to wider resources of information within and outside the organisation. Many organisations followed the initiatives of enhancing infrastructures, manipulating cultural shifts and emphasising managerial commitment for creating pools and networks of knowledge. However, the concept of connectivity is not merely presenting people with the data, but more importantly, to create environments where people can seek information efficiently. This paradigm has therefore caused a shift in the function of information systems in organisations. They have to be now assessed in relation to how they underpin people's information-seeking activities within the context of their organisational environment. This research project used interpretative research methods to investigate the nature of people's information-seeking activities at two culturally contrasting organisations. Outcomes of this research project provide insights into phenomena associated with people's information-seeking function, and show how they depend on the organisational context that is defined partly by information systems. It suggests that information-seeking is not just searching for data. The inefficiencies inherent in both people and their environments can bring opaqueness into people's data, which they need to avoid or eliminate as part of seeking information. This seems to have made information-seeking a two-tier process consisting of a primary process of searching and interpreting data and auxiliary process of avoiding and eliminating opaqueness in data. Based on this view, this research suggests that organisational information systems operate naturally as implicit dual-mechanisms to underpin the above two-tier process, and that improvements to information systems should concern maintaining the balance in these dual-mechanisms.
    Date
    22. 7.2022 12:16:58

Languages

  • d 113
  • e 39
  • f 3
  • a 1
  • hu 1
  • pt 1

Types