Search (654 results, page 32 of 33)

  • Filter: type_ss:"x"
  1. Zudnik, J.: Artifizielle Semantik : Wider das Chinesische Zimmer (2017) 0.00
    8.387961E-4 = product of:
      0.012581941 = sum of:
        0.012581941 = weight(_text_:und in 4426) [ClassicSimilarity], result of:
          0.012581941 = score(doc=4426,freq=8.0), product of:
            0.06422601 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.028978055 = queryNorm
            0.19590102 = fieldWeight in 4426, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03125 = fieldNorm(doc=4426)
      0.06666667 = coord(1/15)
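The explain tree above can be reproduced step by step. A minimal sketch, using Lucene's published ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1))) and the constants printed in the tree; the variable names are ours, not Lucene's:

```python
import math

# Constants copied from the explain tree for result 1 (doc 4426).
freq, doc_freq, max_docs = 8.0, 13101, 44218
query_norm = 0.028978055
field_norm = 0.03125       # per-field length normalization
coord = 1 / 15             # 1 of 15 query clauses matched

tf = math.sqrt(freq)                           # 2.828427 in the tree
idf = 1 + math.log(max_docs / (doc_freq + 1))  # 2.216367 in the tree
query_weight = idf * query_norm                # 0.06422601 in the tree
field_weight = tf * idf * field_norm           # 0.19590102 in the tree
score = query_weight * field_weight * coord    # 8.387961E-4 in the tree

print(round(score, 10))
```

Multiplying the same four factors explains every other score on this page; only freq, docFreq, fieldNorm and the coord fraction change per entry.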
    
    Abstract
    "Talks at Google" hatte kürzlich einen Star zu Gast (Google 2016). Der gefeierte Philosoph referierte in gewohnt charmanter Art sein berühmtes Gedankenexperiment, welches er vor 35 Jahren ersonnen hatte. Aber es war keine reine Geschichtslektion, sondern er bestand darauf, daß die Implikationen nach wie vor Gültigkeit besaßen. Die Rede ist natürlich von John Searle und dem Chinesischen Zimmer. Searle eroberte damit ab 1980 die Welt der Philosophie des Geistes, indem er bewies, daß man Computer besprechen kann, ohne etwas von ihnen zu verstehen. In seinen Worten, man könne ohnehin die zugrunde liegenden Konzepte dieser damned things in 5 Minuten erfassen. Dagegen verblassten die scheuen Einwände des AI-Starapologeten Ray Kurzweil der im Publikum saß, die jüngste Akquisition in Googles Talentpool. Searle wirkte wie die reine Verkörperung seiner Thesen, daß Berechnung, Logik und harte Fakten angesichts der vollen Entfaltung polyvalenter Sprachspiele eines menschlichen Bewußtseins im sozialen Raum der Kultur keine Macht über uns besitzen. Doch obwohl große Uneinigkeit bezüglich der Gültigkeit des chinesischen Zimmers besteht, und die logische Struktur des Arguments schon vor Jahrzehnten widerlegt worden ist, u. a. von Copeland (1993), wird erstaunlicherweise noch immer damit gehandelt. Es hat sich von einem speziellen Werkzeug zur Widerlegung der Starken AI These, wonach künstliche Intelligenz mit einer symbolverarbeitenden Rechenmaschine geschaffen werden kann, zu einem Argument für all die Fälle entwickelt, in welchen sich Philosophen des Geistes mit unbequemen Fragen bezüglich der Berechenbarkeit des menschlichen Geistes auseinandersetzen hätten können. Es ist also mit den Jahrzehnten zu einer Immunisierungs- und Konservierungsstrategie für all jene geworden, die sich Zeit erkaufen wollten, sich mit der wirklichen Komplexität auseinander zu setzen. 
Denn die Definition von Sinn ist eben plastisch, vor allem wenn die Pointe der Searlschen Geschichte noch immer eine hohe Suggestionskraft besitzt, da ihre Konklusion, man könne nicht von einer computationalen Syntax zu einer Semantik kommen, noch immer unzureichend widerlegt ist.
  2. Riebe, U.: John R. Searles Position zum Leib-Seele-Problem (2008) 0.00
    8.387961E-4 = product of:
      0.012581941 = sum of:
        0.012581941 = weight(_text_:und in 4567) [ClassicSimilarity], result of:
          0.012581941 = score(doc=4567,freq=8.0), product of:
            0.06422601 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.028978055 = queryNorm
            0.19590102 = fieldWeight in 4567, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03125 = fieldNorm(doc=4567)
      0.06666667 = coord(1/15)
    
    Abstract
    Few things today are more interesting to the educated citizen than following the latest findings of the neurosciences. Imaging techniques such as EEG, fMRI and MEG make it possible to "watch a person think", or so the media put it. Current research reports come close to this view. Californian researchers recently succeeded, via a brain measurement, in identifying with high probability which image a test subject was looking at. To this end the subject was first shown 1,750 images of natural scenes while the corresponding brain activity was measured by fMRI. Attention focused on visual areas, which were transformed into a three-dimensional matrix; its individual segments are called voxels (analogous to two-dimensional pixels). The result was a database of voxel activity patterns. In the next run the subject was shown 120 new images and the probable voxel activity was computed from the database. The predicted image was then the one whose actual voxel pattern agreed most closely with the computed one. For subject A a hit rate of 92% was achieved, for subject B still 72%. The researchers optimistically conclude that their approach will make it possible to reconstruct visual impressions from brain measurements. Here an attempt is made to approach Kant's question "What is the human being?" in a materialist way. With reference to Benjamin Libet's earlier experiments, some brain researchers today conclude that a person's conscious experience is mere accompaniment to deterministically unfolding brain processes, because experience lags behind neuronal activity; it is also concluded that the felt freedom of the will is only an illusion, although Libet himself does not draw this hard conclusion. 
The results of such studies are highly interesting, but their interpretation calls for great care, especially where consciousness is concerned. On the philosophical side, John Searle has engaged intensively with the topic and developed a theory that discards all previous philosophical models.
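The identification step in the experiment described above is, at its core, nearest-neighbour matching: the measured voxel pattern is compared against the predicted pattern for each candidate image, and the best match wins. A toy sketch with simulated data (the sizes match the abstract, everything else is invented for illustration):

```python
import math
import random

random.seed(0)
n_images, n_voxels = 120, 500  # 120 candidate images, as in the study

def correlate(a, b):
    """Pearson correlation between two equally long activity vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# Hypothetical predicted voxel patterns, one per candidate image.
predicted = [[random.gauss(0, 1) for _ in range(n_voxels)]
             for _ in range(n_images)]

# Simulate a measurement: the true image's pattern plus sensor noise.
true_idx = 42
measured = [v + random.gauss(0, 0.1) for v in predicted[true_idx]]

# Predict the image whose pattern agrees most with the measurement.
best = max(range(n_images), key=lambda i: correlate(measured, predicted[i]))
print(best)
```

At this noise level the true image is recovered reliably; the study's 92% vs. 72% hit rates reflect how much noisier real fMRI measurements are.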
    Imprint
    Konstanz : Fachbereich Philosophie / Kunst- und Medienwissenschaft / Informatik
  3. Bickmann, H.-J.: Synonymie und Sprachverwendung : Verfahren zur Ermittlung von Synonymenklassen als kontextbeschränkten Äquivalenzklassen (1978) 0.00
    8.387961E-4 = product of:
      0.012581941 = sum of:
        0.012581941 = weight(_text_:und in 5890) [ClassicSimilarity], result of:
          0.012581941 = score(doc=5890,freq=2.0), product of:
            0.06422601 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.028978055 = queryNorm
            0.19590102 = fieldWeight in 5890, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=5890)
      0.06666667 = coord(1/15)
    
  4. Kirk, J.: Theorising information use : managers and their work (2002) 0.00
    8.300676E-4 = product of:
      0.012451014 = sum of:
        0.012451014 = product of:
          0.024902027 = sum of:
            0.024902027 = weight(_text_:information in 560) [ClassicSimilarity], result of:
              0.024902027 = score(doc=560,freq=26.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.4895196 = fieldWeight in 560, product of:
                  5.0990195 = tf(freq=26.0), with freq of:
                    26.0 = termFreq=26.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=560)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
    The focus of this thesis is information use. Although a key concept in information behaviour, information use has received little attention from information science researchers. Studies of other key concepts such as information need and information seeking are dominant in information behaviour research. Information use is an area of interest to information professionals who rely on research outcomes to shape their practice. There are few empirical studies of how people actually use information that might guide and refine the development of information systems, products and services.
    Theme
    Information
  5. Temath, C.: Prototypische Implementierung der "Topic Map Query Language"-Abfragesprache für die Groupware-basierte Topic Map Engine (2005) 0.00
    7.41398E-4 = product of:
      0.011120969 = sum of:
        0.011120969 = weight(_text_:und in 200) [ClassicSimilarity], result of:
          0.011120969 = score(doc=200,freq=4.0), product of:
            0.06422601 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.028978055 = queryNorm
            0.17315367 = fieldWeight in 200, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=200)
      0.06666667 = coord(1/15)
    
    Abstract
    This documentation presents the results of a seminar paper on the topic "Prototypical implementation of the 'Topic Map Query Language' for the groupware-based Topic Map Engine", written for the seminar Wirtschaftsinformatik II at the Groupware Competence Center. As part of Stefan Smolnik's dissertation project "K-Discovery" at the Groupware Competence Center, the prototype of a groupware-based Topic Map Engine was created. This environment provides various tools for modelling, creating and visualizing topic maps in a groupware-based setting: they range from a graphical modelling tool for creating topic maps to search tools that support graphical or text-based searching for information. In addition, an export interface makes it possible to export the data of a generated topic map to a standardized XML format, XML Topic Maps (XTM). This constitutes a first, rudimentary interface for querying topic map information from the groupware-based Topic Map Engine (GTME). In the context of international standardization efforts, work is currently under way on a query standard for topic maps, the "Topic Map Query Language" (TMQL). The goal of this paper is to give an overview of the current state of the TMQL standardization process and, building on the results produced so far in that process, to create a prototypical implementation for the groupware-based Topic Map Engine. The aim is thus a standardized interface for querying topic map data, in order to open the groupware-based Topic Map Engine to a new range of applications.
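The querying task the paper addresses can be sketched in miniature: a topic map holds typed topics and associations, and a query interface selects topics by type. The data below is invented, and the `select` helper is a hypothetical stand-in for a TMQL query, since TMQL was still being standardized when this work was written:

```python
# A toy in-memory topic map, loosely following the XTM data model:
# topics carry a type and a name; associations connect topics via roles.
topics = [
    {"id": "t1", "type": "person",  "name": "Stefan Smolnik"},
    {"id": "t2", "type": "project", "name": "K-Discovery"},
    {"id": "t3", "type": "person",  "name": "C. Temath"},
]
associations = [
    {"type": "works-on", "roles": {"person": "t1", "project": "t2"}},
]

def select(topics, topic_type):
    """Return the names of all topics of the given type
    (a stand-in for a TMQL 'select topics of type X' query)."""
    return [t["name"] for t in topics if t["type"] == topic_type]

print(select(topics, "person"))
```

A standardized query language replaces ad-hoc helpers like this one with a single interface any client can target, which is exactly the motivation stated in the abstract.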
  6. Hans, J.-G.: Dateiorganisation im Information Retrieval mit Hilfe von Cluster-Analyse-Verfahren (1979) 0.00
    5.2621565E-4 = product of:
      0.0078932345 = sum of:
        0.0078932345 = product of:
          0.015786469 = sum of:
            0.015786469 = weight(_text_:information in 1213) [ClassicSimilarity], result of:
              0.015786469 = score(doc=1213,freq=2.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.3103276 = fieldWeight in 1213, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.125 = fieldNorm(doc=1213)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
  7. Furniss, P.: ¬A study of the compatibility of two subject catalogues (1980) 0.00
    5.2621565E-4 = product of:
      0.0078932345 = sum of:
        0.0078932345 = product of:
          0.015786469 = sum of:
            0.015786469 = weight(_text_:information in 1945) [ClassicSimilarity], result of:
              0.015786469 = score(doc=1945,freq=2.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.3103276 = fieldWeight in 1945, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.125 = fieldNorm(doc=1945)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Imprint
    Sheffield : Sheffield Univ., Postgraduate School of Librarianship and Information Science
  8. Schmolz, H.: Anaphora resolution and text retrieval : a linguistic analysis of hypertexts (2015) 0.00
    4.6511332E-4 = product of:
      0.0069766995 = sum of:
        0.0069766995 = product of:
          0.013953399 = sum of:
            0.013953399 = weight(_text_:information in 1172) [ClassicSimilarity], result of:
              0.013953399 = score(doc=1172,freq=4.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.27429342 = fieldWeight in 1172, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1172)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    RSWK
    Englisch / Anapher <Syntax> / Hypertext / Information Retrieval / Korpus <Linguistik>
    Subject
    Englisch / Anapher <Syntax> / Hypertext / Information Retrieval / Korpus <Linguistik>
  9. Habermann, K.: Wissensrepräsentation im Rahmen von Wissensmanagement (1999) 0.00
    4.604387E-4 = product of:
      0.00690658 = sum of:
        0.00690658 = product of:
          0.01381316 = sum of:
            0.01381316 = weight(_text_:information in 1515) [ClassicSimilarity], result of:
              0.01381316 = score(doc=1515,freq=2.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.27153665 = fieldWeight in 1515, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1515)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Theme
    Information Resources Management
  10. Thornton, K.: Powerful structure : inspecting infrastructures of information organization in Wikimedia Foundation projects (2016) 0.00
    4.4124527E-4 = product of:
      0.0066186786 = sum of:
        0.0066186786 = product of:
          0.013237357 = sum of:
            0.013237357 = weight(_text_:information in 3288) [ClassicSimilarity], result of:
              0.013237357 = score(doc=3288,freq=10.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.2602176 = fieldWeight in 3288, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3288)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
    This dissertation investigates the social and technological factors of collaboratively organizing information in commons-based peer production systems. To do so, it analyzes the diverse strategies that members of Wikimedia Foundation (WMF) project communities use to organize information. Key findings from this dissertation show that conceptual structures of information organization are encoded into the infrastructure of WMF projects. The fact that WMF projects are commons-based peer production systems means that we can inspect the code that enables these systems, but a specific type of technical literacy is required to do so. I use three methods in this dissertation. I conduct a qualitative content analysis of the discussions surrounding the design, implementation and evaluation of the category system; a quantitative analysis using descriptive statistics of patterns of editing among editors who contributed to the code of templates for information boxes; and a close reading of the infrastructure used to create the category system, the infobox templates, and the knowledge base of structured data.
  11. Noy, N.F.: Knowledge representation for intelligent information retrieval in experimental sciences (1997) 0.00
    4.3631496E-4 = product of:
      0.006544724 = sum of:
        0.006544724 = product of:
          0.013089448 = sum of:
            0.013089448 = weight(_text_:information in 694) [ClassicSimilarity], result of:
              0.013089448 = score(doc=694,freq=22.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.25731003 = fieldWeight in 694, product of:
                  4.690416 = tf(freq=22.0), with freq of:
                    22.0 = termFreq=22.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.03125 = fieldNorm(doc=694)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
    More and more information is available on-line every day. The greater the amount of on-line information, the greater the demand for tools that process and disseminate this information. Processing electronic information in the form of text and answering users' queries about that information intelligently is one of the great challenges in natural language processing and information retrieval. The research presented in this talk is centered on the latter of these two tasks: intelligent information retrieval. In order for information to be retrieved, it first needs to be formalized in a database or knowledge base. The ontology for this formalization and assumptions it is based on are crucial to successful intelligent information retrieval. We have concentrated our effort on developing an ontology for representing knowledge in the domains of experimental sciences, molecular biology in particular. We show that existing ontological models cannot be readily applied to represent this domain adequately. For example, the fundamental notion of ontology design that every "real" object is defined as an instance of a category seems incompatible with the universe where objects can change their category as a result of experimental procedures. Another important problem is representing complex structures such as DNA, mixtures, populations of molecules, etc., that are very common in molecular biology. We present extensions that need to be made to an ontology to cover these issues: the representation of transformations that change the structure and/or category of their participants, and the component relations and spatial structures of complex objects. We demonstrate examples of how the proposed representations can be used to improve the quality and completeness of answers to user queries; discuss techniques for evaluating ontologies and show a prototype of an Information Retrieval System that we developed.
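The representational problem the abstract raises, objects that change category as a result of experimental procedures, can be sketched by modelling category as mutable state with an explicit transformation history, rather than as a fixed class. A toy illustration only, not the thesis's actual ontology:

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """A domain object whose category can change over time."""
    name: str
    category: str
    history: list = field(default_factory=list)

def transform(entity: Entity, procedure: str, new_category: str) -> Entity:
    """Apply an experimental procedure that re-categorizes the entity,
    recording (procedure, old category, new category)."""
    entity.history.append((procedure, entity.category, new_category))
    entity.category = new_category
    return entity

sample = Entity("sample-1", "DNA")
transform(sample, "denaturation", "single-stranded DNA")
print(sample.category, sample.history)
```

The recorded history is what a class-based ontology loses when an instance is simply re-asserted under a new category.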
  12. Strong, R.W.: Undergraduates' information differentiation behaviors in a research process : a grounded theory approach (2005) 0.00
    4.1601E-4 = product of:
      0.00624015 = sum of:
        0.00624015 = product of:
          0.0124803 = sum of:
            0.0124803 = weight(_text_:information in 5985) [ClassicSimilarity], result of:
              0.0124803 = score(doc=5985,freq=20.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.2453355 = fieldWeight in 5985, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5985)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
    This research explores, using a Grounded Theory approach, the question of how a particular group of undergraduate university students differentiates the values of retrieved information in a contemporary research process. Specifically it attempts to isolate and label those specific techniques, processes and formulae, both objective and subjective, that the students use to identify, prioritize, and successfully incorporate the most useful and valuable information into their research project. The research reviews the relevant literature covering the areas of: epistemology, knowledge acquisition, and cognitive learning theory; early relevance research; the movement from relevance models to information seeking in context; and the proximate recent research. A research methodology is articulated using a Grounded Theory approach, and the research process and research participants are fully explained and described. The findings of the research are set forth using three Thematic Sets (Traditional Relevance Measures; Structural Frames; and Metaphors: General and Ecological), using the actual discourse of the study participants, and a theoretical construct is advanced. Based on that construct, it can be theorized that identification and analysis of the metaphorical language that the particular students in this study used, both by way of general and ecological metaphors (their stories) about how they found, handled, and evaluated information, can be a very useful tool in understanding how the students identified, prioritized, and successfully incorporated the most useful and relevant information into their research projects. It also is argued that this type of metaphorical analysis could be useful in providing a bridging mechanism for a broader understanding of the relationships between traditional user relevance studies and the concepts of frame theory and sense-making. 
Finally, a corollary to Whitmire's original epistemological hypothesis is posited: students who were more adept at using metaphors, either general or ecological, appeared more comfortable with handling contradictory information sources and better able to articulate their valuing decisions. The research concludes with a discussion of the implications both for future research in the Library and Information Science field and for the practice of Library professionals and classroom instructors assisting students with information valuing decision-making in a research process.
    Theme
    Information
  13. Ziemba, L.: Information retrieval with concept discovery in digital collections for agriculture and natural resources (2011) 0.00
    3.946617E-4 = product of:
      0.0059199254 = sum of:
        0.0059199254 = product of:
          0.011839851 = sum of:
            0.011839851 = weight(_text_:information in 4728) [ClassicSimilarity], result of:
              0.011839851 = score(doc=4728,freq=18.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.23274568 = fieldWeight in 4728, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4728)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
    The amount and complexity of information available in a digital form is already huge and new information is being produced every day. Retrieving information relevant to a particular need becomes a significant issue. This work utilizes knowledge organization systems (KOS), such as thesauri and ontologies, and applies information extraction (IE) and computational linguistics (CL) techniques to organize, manage and retrieve information stored in digital collections in the agricultural domain. Two real-world applications of the approach have been developed and are available and actively used by the public. An ontology is used to manage the Water Conservation Digital Library holding a dynamic collection of various types of digital resources in the domain of urban water conservation in Florida, USA. The ontology-based back-end powers a fully operational web interface, available at http://library.conservefloridawater.org. The system has demonstrated numerous benefits of the ontology application, including accurate retrieval of resources and information sharing and reuse, and has proved to effectively facilitate information management. The major difficulty encountered with the approach is that the large and dynamic number of concepts makes it difficult to keep the ontology consistent and to accurately catalog resources manually. To address these issues, a combination of IE and CL techniques, such as the Vector Space Model and probabilistic parsing, together with the Agricultural Thesaurus, was adapted to automatically extract the concepts important for each of the texts in the Best Management Practices (BMP) Publication Library, a collection of documents in the domain of agricultural BMPs in Florida available at http://lyra.ifas.ufl.edu/LIB. A new approach to domain-specific concept discovery using an Internet search engine was developed. Initial evaluation of the results indicates a significant improvement in the precision of information extraction. 
The approach presented in this work focuses on problems unique to the agriculture and natural resources domain, such as domain-specific concepts and vocabularies, but should be applicable to any collection of texts in digital format. It may be of potential interest for anyone who needs to effectively manage a collection of digital resources.
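The Vector Space Model step mentioned in the abstract can be illustrated in a few lines: documents and a query become term-frequency vectors over a shared vocabulary (here a stand-in for thesaurus concepts), and cosine similarity ranks the documents. The document names and terms below are invented for illustration:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Toy documents indexed by (hypothetical) thesaurus concepts.
docs = {
    "bmp-irrigation": Counter("water irrigation conservation irrigation".split()),
    "bmp-nutrients":  Counter("fertilizer nutrient runoff water".split()),
}
query = Counter("water irrigation".split())

# Rank documents by similarity to the query; the best match comes first.
ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
print(ranked)
```

Mapping raw words to controlled thesaurus concepts before building the vectors is what lets the thesis's system keep the ontology and the automatically extracted concepts aligned.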
  14. Martins, S. de Castro: Modelo conceitual de ecossistema semântico de informações corporativas para aplicação em objetos multimídia (2019) 0.00
    3.946617E-4 = product of:
      0.0059199254 = sum of:
        0.0059199254 = product of:
          0.011839851 = sum of:
            0.011839851 = weight(_text_:information in 117) [ClassicSimilarity], result of:
              0.011839851 = score(doc=117,freq=18.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.23274568 = fieldWeight in 117, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.03125 = fieldNorm(doc=117)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
    Information management in corporate environments is a growing problem as companies' information assets grow, along with the need to use them in their operations. Several management models have been put into practice on the most diverse fronts, practices that together constitute so-called Enterprise Content Management. This study proposes a conceptual model of a semantic corporate information ecosystem, based on the Universal Document Model proposed by Dagobert Soergel. It focuses on unstructured information objects, especially multimedia, which are increasingly used in corporate environments, adding semantics and expanding their retrieval potential in the composition and reuse of dynamic documents on demand. The proposed model considers stable elements in the organizational environment, such as actors, processes, business metadata and information objects, as well as some basic infrastructures of the corporate information environment. The main objective is to establish a conceptual model that adds semantic intelligence to information assets, leveraging pre-existing infrastructure in organizations and integrating and relating objects to other objects, actors and business processes. The methodology considered the state of the art of Information Organization, Representation and Retrieval, Organizational Content Management and Semantic Web technologies in the scientific literature as the basis for an integrative conceptual model; the research is therefore qualitative and exploratory. The predicted steps of the model are: Environment, Data Type and Source Definition, Data Distillation, Metadata Enrichment, and Storage. As a result, in theoretical terms the extended model allows heterogeneous and unstructured data to be processed according to the established cut-outs and through the processes listed above, allowing value creation in the composition of dynamic information objects with semantic aggregations to their metadata.
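The model's predicted steps (environment, data type and source definition, data distillation, metadata enrichment, storage) read as a pipeline. A schematic sketch only; every stage's behaviour and field name below is invented for illustration:

```python
def define_sources(env):
    """Stage 1-2: fix the environment and enumerate data sources."""
    return [{"id": i, "raw": f"object-{i}", "env": env} for i in range(3)]

def distill(objects):
    """Stage 3: reduce raw objects to their usable content."""
    return [dict(o, distilled=True) for o in objects]

def enrich(objects):
    """Stage 4: attach hypothetical business metadata (actors, processes)."""
    return [dict(o, actor="analyst", process="review") for o in objects]

def store(objects):
    """Stage 5: persist enriched objects, here keyed by id."""
    return {o["id"]: o for o in objects}

# Run the stages in order: environment -> sources -> distillation
# -> enrichment -> storage.
warehouse = store(enrich(distill(define_sources("corporate"))))
print(len(warehouse), warehouse[0]["actor"])
```

The point of the sketch is the composition: each stage adds structure (content, then semantics) so that the stored objects can be recombined into dynamic documents later.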
  15. Smith, D.A.: Exploratory and faceted browsing over heterogeneous and cross-domain data sources (2011) 0.00
    3.9466174E-4 = product of:
      0.005919926 = sum of:
        0.005919926 = product of:
          0.011839852 = sum of:
            0.011839852 = weight(_text_:information in 4839) [ClassicSimilarity], result of:
              0.011839852 = score(doc=4839,freq=8.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.23274569 = fieldWeight in 4839, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4839)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
    Exploration of heterogeneous data sources increases the value of information by allowing users to answer questions through exploration across multiple sources; users can draw on information posted across the Web to answer questions and learn about new domains. We have conducted research that lowers the interrogation time of faceted data by combining related information from different sources. The work contributes methodologies for combining heterogeneous sources and for delivering that data to a user interface scalably, with enough performance to support rapid interrogation of the knowledge by the user. The work also contributes methods for combining linked data sources so that users can create faceted browsers that target the information facets of their needs. The work is grounded and proven in a number of experiments and test cases that study the contributions in domain research work.
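Faceted browsing over combined sources, as described above, reduces at its core to counting facet values and intersecting facet-value filters over merged records. A minimal sketch with invented records and facets:

```python
# Toy records merged from two hypothetical sources; each field is a facet.
records = [
    {"source": "museum",  "period": "roman",    "material": "bronze"},
    {"source": "museum",  "period": "medieval", "material": "wood"},
    {"source": "archive", "period": "roman",    "material": "stone"},
]

def facet_counts(records, facet):
    """Count how many records carry each value of a facet
    (the numbers shown next to facet values in a browser UI)."""
    counts = {}
    for r in records:
        counts[r[facet]] = counts.get(r[facet], 0) + 1
    return counts

def refine(records, **selected):
    """Keep only records matching every selected facet value."""
    return [r for r in records
            if all(r[f] == v for f, v in selected.items())]

print(facet_counts(records, "period"))
print(refine(records, period="roman"))
```

Combining sources simply means the `records` list is populated from more than one place before the same counting and refinement run over it; the scalability work in the thesis is about doing this over far larger, linked collections.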
  16. Kara, S.: ¬An ontology-based retrieval system using semantic indexing (2012) 0.00
    3.9466174E-4 = product of:
      0.005919926 = sum of:
        0.005919926 = product of:
          0.011839852 = sum of:
            0.011839852 = weight(_text_:information in 3829) [ClassicSimilarity], result of:
              0.011839852 = score(doc=3829,freq=8.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.23274569 = fieldWeight in 3829, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3829)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
    In this thesis, we present an ontology-based information extraction and retrieval system and its application to the soccer domain. In general, we deal with three issues in semantic search, namely usability, scalability and retrieval performance. We propose a keyword-based semantic retrieval approach. The performance of the system is improved considerably using domain-specific information extraction, inference and rules. Scalability is achieved by adapting a semantic indexing approach. The system is implemented using state-of-the-art technologies of the Semantic Web and its performance is evaluated against traditional systems as well as query expansion methods. Furthermore, a detailed evaluation is provided to observe the performance gain due to domain-specific information extraction and inference. Finally, we show how we use semantic indexing to solve simple structural ambiguities.
    Source
    Information Systems. 37(2012) no. 4, S.294-305
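    The keyword-based semantic retrieval idea summarized above can be sketched minimally: query words are resolved through a synonym table to ontology concepts, and documents annotated with those concepts are returned. The soccer concepts and synonyms below are illustrative assumptions, not the system's actual ontology.

```python
# Toy semantic index: documents are annotated with ontology concepts,
# and keyword queries are resolved to concepts via a synonym table.
# All concept names and synonyms are illustrative assumptions.
ontology_synonyms = {
    "goal": "Goal", "score": "Goal",
    "keeper": "Goalkeeper", "goalie": "Goalkeeper",
}

documents = {
    "match_report_1": {"Goal", "Goalkeeper"},
    "match_report_2": {"Goal"},
}

def semantic_search(query_word):
    """Map a keyword to a concept, then return documents indexed by it."""
    concept = ontology_synonyms.get(query_word.lower())
    if concept is None:
        return set()
    return {doc for doc, concepts in documents.items() if concept in concepts}

print(sorted(semantic_search("goalie")))  # ['match_report_1']
```

    Because matching happens at the concept level, synonymous keywords ("keeper", "goalie") retrieve the same documents, which is the usability gain keyword-based semantic search aims for.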
  17. Francu, V.: Multilingual access to information using an intermediate language (2003)
    Abstract
    While theoretically so widely available, information can be restricted from more general use by linguistic barriers. The linguistic aspects of information languages, and particularly the chances of enhanced access to information by means of multilingual access facilities, form the substance of this thesis. The main problem of this research is thus to demonstrate that information retrieval can be improved by searching with multilingual thesaurus terms based on an intermediate or switching language. Universal classification systems in general can play the role of switching languages, for reasons dealt with in the forthcoming pages. The Universal Decimal Classification (UDC) in particular is the classification system used here as an example of a switching language. The question may arise: why a universal classification system and not another thesaurus? Because the UDC, like most classification systems, uses symbols. It is therefore language-independent, and the problems of compatibility between such a thesaurus and various other thesauri in different languages are avoided. Another question may still arise: why not, then, assign running numbers to the descriptors in a thesaurus and make a switching language out of the resulting enumerative system? Because of some other characteristics of the UDC: hierarchical structure and terminological richness, consistency and control. One big problem to answer is: can a thesaurus be built on the basis of a classification system in any and all of its parts, and to what extent can this question be answered affirmatively? This depends largely on the attributes of the universal classification system which can be favourably used for this purpose. Examples of different situations will be given and discussed, beginning with those classes of the UDC which are best suited for building a thesaurus structure out of them (classes which are both hierarchical and faceted)...
    Content
    Contents: Information languages: a linguistic approach - Multilingual aspects in information storage and retrieval - Compatibility and convertibility of information languages - Current trends in multilingual access - Building UDC-based multilingual thesauri - Online applications of the UDC-based multilingual thesauri - The impact of specificity on the retrieval power of a UDC-based multilingual thesaurus - Final remarks and general conclusions. Doctoral dissertation submitted for the degree of doctor in Language and Literature (Taal- en Letterkunde) at the Universiteit Antwerpen. - Cf.: http://dlist.sir.arizona.edu/1862/.
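    The switching-language mechanism this thesis builds on can be sketched as follows: descriptors in several languages are linked through a language-independent UDC notation, so a search term in one language can be translated into another via the notation. The notations and descriptor labels below are illustrative assumptions, not actual UDC data.

```python
# Sketch of UDC as a switching language. Each notation links the
# equivalent descriptors of a multilingual thesaurus; notations and
# terms here are illustrative assumptions.
udc_index = {
    "025.4": {"en": "classification", "fr": "classification", "de": "Klassifikation"},
    "81'374": {"en": "lexicography", "fr": "lexicographie", "de": "Lexikographie"},
}

def translate_descriptor(term, source_lang, target_lang):
    """Find the notation whose source-language descriptor matches the
    term, then return the notation and the target-language descriptor."""
    for notation, labels in udc_index.items():
        if labels.get(source_lang, "").lower() == term.lower():
            return notation, labels.get(target_lang)
    return None, None

print(translate_descriptor("Klassifikation", "de", "en"))  # ('025.4', 'classification')
```

    Because the pivot is a symbolic notation rather than a natural-language term, adding a new language only requires one new label per notation, not pairwise mappings between every language pair.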
  18. Mao, M.: Ontology mapping : towards semantic interoperability in distributed and heterogeneous environments (2008)
    Abstract
    This dissertation studies ontology mapping: the problem of finding semantic correspondences between similar elements of different ontologies. In the dissertation, elements denote classes or properties of ontologies. The goal of this research is to use ontology mapping to make heterogeneous information more accessible. The World Wide Web (WWW) is now widely used as a universal medium for information exchange. Semantic interoperability among different information systems in the WWW is limited due to information heterogeneity and the non-semantic nature of HTML and URLs. Ontologies have been suggested as a way to solve the problem of information heterogeneity by providing formal, explicit definitions of data and the ability to reason over related concepts. Given that no universal ontology exists for the WWW, work has focused on finding semantic correspondences between similar elements of different ontologies, i.e., ontology mapping. Ontology mapping can be done either by hand or using automated tools. Manual mapping becomes impractical as the size and complexity of ontologies increase. Full or semi-automated mapping approaches have been examined in several research studies. Previous full or semi-automated mapping approaches include analyzing linguistic information of elements in ontologies, treating ontologies as structural graphs, applying heuristic rules and machine learning techniques, and using probabilistic and reasoning methods. In this dissertation, two generic ontology mapping approaches are proposed. One is the PRIOR+ approach, which utilizes both information retrieval and artificial intelligence techniques in the context of ontology mapping. The other is the non-instance learning based approach, which experimentally explores machine learning algorithms to solve the ontology mapping problem without requiring any instances. The results of PRIOR+ on different tests at the OAEI ontology matching campaign 2007 are encouraging, and the non-instance learning based approach has shown potential for solving the ontology mapping problem on the OAEI benchmark tests.
    Content
    Submitted to the Graduate Faculty of the School of Information Sciences in partial fulfillment of the requirements for the degree of Doctor of Philosophy.
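    One element-level matching strategy of the kind the abstract mentions (analyzing the linguistic information of ontology elements) can be sketched like this: pair classes from two ontologies whose label similarity exceeds a threshold. The class labels and threshold are illustrative assumptions, and this is not the PRIOR+ algorithm itself.

```python
from difflib import SequenceMatcher
from itertools import product

# Toy element-level matcher over two small ontologies; the class
# names below are illustrative assumptions.
ontology_a = ["Person", "Publication", "Organisation"]
ontology_b = ["People", "Paper", "Organization"]

def label_similarity(a, b):
    """String similarity of two labels, in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def map_elements(src, dst, threshold=0.7):
    """Return (source, target, score) pairs above the threshold."""
    mappings = []
    for a, b in product(src, dst):
        score = label_similarity(a, b)
        if score >= threshold:
            mappings.append((a, b, round(score, 2)))
    return mappings

print(map_elements(ontology_a, ontology_b))
# [('Organisation', 'Organization', 0.92)]
```

    Real systems combine several such similarity signals (linguistic, structural, instance-based); a pure label matcher misses pairs like Person/People, which is exactly why multi-strategy approaches are studied.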
  19. Tzitzikas, Y.: Collaborative ontology-based information indexing and retrieval (2002)
    Abstract
    An information system like the Web is a continuously evolving system consisting of multiple heterogeneous information sources, covering a wide domain of discourse, and a huge number of users (human or software) with diverse characteristics and needs, that produce and consume information. The challenge nowadays is to build a scalable information infrastructure enabling the effective, accurate, content-based retrieval of information, in a way that adapts to the characteristics and interests of the users. The aim of this work is to propose formally sound methods for building such an information network based on ontologies which are widely used and are easy to grasp by ordinary Web users. The main results of this work are: - A novel scheme for indexing and retrieving objects according to multiple aspects or facets. The proposed scheme is a faceted scheme enriched with a method for specifying the combinations of terms that are valid. We give a model-theoretic interpretation to this model and we provide mechanisms for inferring the valid combinations of terms. This inference service can be exploited for preventing errors during the indexing process, which is very important especially in the case where the indexing is done collaboratively by many users, and for deriving "complete" navigation trees suitable for browsing through the Web. The proposed scheme has several advantages over the hierarchical classification schemes currently employed by Web catalogs, namely, conceptual clarity (it is easier to understand), compactness (it takes less space), and scalability (the update operations can be formulated more easily and be performed more efficiently). - A flexible and efficient model for building mediators over ontology-based information sources. The proposed mediators support several modes of query translation and evaluation which can accommodate various application needs and levels of answer quality. The proposed model can be used for providing users with customized views of Web catalogs. It can also complement the techniques for building mediators over relational sources so as to support approximate translation of partially ordered domain values.
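    The idea of a faceted scheme enriched with valid term combinations can be sketched minimally: descriptions that contain a combination declared invalid are rejected at indexing time. The facets and the invalid pairs below are illustrative assumptions, not the thesis's actual formalism.

```python
# Sketch of a faceted indexing scheme with a validity check.
# Facet names, terms and invalid combinations are illustrative assumptions.
facets = {
    "Sport": {"SeaSki", "Windsurfing"},
    "Location": {"Crete", "Alps"},
}

# Term combinations declared invalid (e.g. no sea sports in the Alps).
invalid_combinations = {
    frozenset({"SeaSki", "Alps"}),
    frozenset({"Windsurfing", "Alps"}),
}

def is_valid_indexing(terms):
    """Reject any description that contains a known-invalid combination."""
    terms = set(terms)
    return not any(bad <= terms for bad in invalid_combinations)

print(is_valid_indexing({"SeaSki", "Crete"}))  # True
print(is_valid_indexing({"SeaSki", "Alps"}))   # False
```

    Checking validity at indexing time is what makes collaborative indexing safer: many independent indexers share one declarative set of constraints instead of relying on individual judgment.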
  20. Pfäffli, W.: La qualité des résultats de recherche dans le cadre du projet MACS (Multilingual Access to Subjects) : vers un élargissement des ensembles de résultats de recherche (2009)
    Content
    Final thesis for the MAS in Library and Information Science (MAS Bibliotheks- und Informationswissenschaften), 2007-2009
