Search (924 results, page 2 of 47)

  • Filter: language_ss:"e"
  • Filter: type_ss:"el"
  1. Jansen, B.; Browne, G.M.: Navigating information spaces : index / mind map / topic map? (2021) 0.04
    0.035024326 = product of:
      0.08756082 = sum of:
        0.01563882 = weight(_text_:of in 436) [ClassicSimilarity], result of:
          0.01563882 = score(doc=436,freq=6.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.23940048 = fieldWeight in 436, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=436)
        0.071922 = product of:
          0.143844 = sum of:
            0.143844 = weight(_text_:mind in 436) [ClassicSimilarity], result of:
              0.143844 = score(doc=436,freq=2.0), product of:
                0.2607373 = queryWeight, product of:
                  6.241566 = idf(docFreq=233, maxDocs=44218)
                  0.04177434 = queryNorm
                0.5516817 = fieldWeight in 436, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.241566 = idf(docFreq=233, maxDocs=44218)
                  0.0625 = fieldNorm(doc=436)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This paper discusses the use of wiki technology to provide a navigation structure for a collection of newspaper clippings. We give an overview of the wiki's architecture, discuss the navigation structure, and pose the question: is the navigation structure an index (and if so, of what type), or is it just a linkage structure or a topic map? Does such a distinction really matter? Are these definitions, in reality, function-based?
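    The indented breakdown under each hit is Lucene's ClassicSimilarity explanation of the relevance score: an outer coord factor scales the sum of the per-term weights, and each term weight is queryWeight × fieldWeight, with queryWeight = idf × queryNorm and fieldWeight = sqrt(termFreq) × idf × fieldNorm. A minimal Python sketch reproducing result 1's score from the values shown in its tree (Lucene does this arithmetic in 32-bit floats, so the last decimal place can differ):

      import math

      query_norm = 0.04177434          # queryNorm, shared by all terms in the query
      field_norm = 0.0625              # fieldNorm(doc=436)

      def term_weight(freq, idf, coord=1.0):
          tf = math.sqrt(freq)                  # tf(freq) = sqrt(termFreq)
          query_weight = idf * query_norm       # queryWeight = idf * queryNorm
          field_weight = tf * idf * field_norm  # fieldWeight = tf * idf * fieldNorm
          return query_weight * field_weight * coord

      w_of   = term_weight(freq=6.0, idf=1.5637573)            # 0.01563882
      w_mind = term_weight(freq=2.0, idf=6.241566, coord=0.5)  # 0.071922, inner coord(1/2)
      print((w_of + w_mind) * 2 / 5)                           # outer coord(2/5) -> ~0.035024326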
  2. Misra, G.; Prakash, A.: Kenneth J. Gergen and social constructionism : Editorial (2012) 0.03
    0.034088366 = product of:
      0.08522091 = sum of:
        0.070290476 = weight(_text_:philosophy in 742) [ClassicSimilarity], result of:
          0.070290476 = score(doc=742,freq=2.0), product of:
            0.23055021 = queryWeight, product of:
              5.5189433 = idf(docFreq=481, maxDocs=44218)
              0.04177434 = queryNorm
            0.30488142 = fieldWeight in 742, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5189433 = idf(docFreq=481, maxDocs=44218)
              0.0390625 = fieldNorm(doc=742)
        0.014930432 = weight(_text_:of in 742) [ClassicSimilarity], result of:
          0.014930432 = score(doc=742,freq=14.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.22855641 = fieldWeight in 742, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=742)
      0.4 = coord(2/5)
    
    Abstract
    Going beyond the insular orientation of psychology, Ken has often crossed disciplinary boundaries to create bridges and to search for shared spaces that foster dialogue. Building on developments in contemporary discourses in the philosophy of science, cultural studies, and interpretive inquiry, Ken has widened the net of psychological exploration and situated it in a culturally informed, dynamic intellectual space. He has critically addressed many concepts and assumptions that are taken for granted by those educated in the positivist mould of knowledge creation, which has long informed mainstream psychological investigation. The constructionist turn has been controversial. It has met with resistance and been contested by those who, like physical scientists, subscribe to an essentialist view of reality and claim legitimacy for scientifically produced and represented 'objective' knowledge. Positioned in such a scenario, Ken has indefatigably tried to demystify the conceptual, theoretical, and methodological implications of such knowledge claims by critiquing them and offering empowering reconstructions. In so doing, he has demonstrated unparalleled intellectual courage and patient striving. Ken's early work was dominated by a critical stance, but in later works he has moved toward developing an alternative vision of social life characterized by joint action, performance, the relational nature of constructed realities, and cultural inclusiveness.
  3. Heflin, J.; Hendler, J.: Semantic interoperability on the Web (2000) 0.03
    0.03337468 = product of:
      0.055624463 = sum of:
        0.01646295 = product of:
          0.08231475 = sum of:
            0.08231475 = weight(_text_:problem in 759) [ClassicSimilarity], result of:
              0.08231475 = score(doc=759,freq=4.0), product of:
                0.17731056 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.04177434 = queryNorm
                0.46424055 = fieldWeight in 759, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=759)
          0.2 = coord(1/5)
        0.01935205 = weight(_text_:of in 759) [ClassicSimilarity], result of:
          0.01935205 = score(doc=759,freq=12.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.29624295 = fieldWeight in 759, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=759)
        0.019809462 = product of:
          0.039618924 = sum of:
            0.039618924 = weight(_text_:22 in 759) [ClassicSimilarity], result of:
              0.039618924 = score(doc=759,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.2708308 = fieldWeight in 759, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=759)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    XML will have a profound impact on the way data is exchanged on the Internet. An important feature of this language is the separation of content from presentation, which makes it easier to select and/or reformat the data. However, due to the likelihood of numerous industry and domain specific DTDs, those who wish to integrate information will still be faced with the problem of semantic interoperability. In this paper we discuss why this problem is not solved by XML, and then discuss why the Resource Description Framework is only a partial solution. We then present the SHOE language, which we feel has many of the features necessary to enable a semantic web, and describe an existing set of tools that make it easy to use the language.
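    The gap between syntactic and semantic interoperability is easy to see in miniature: two documents can each be valid against their own DTD yet name the same relation differently, so integration needs a vocabulary mapping that XML itself does not carry. A hypothetical sketch (the element names and the mapping table are invented for illustration; SHOE's actual syntax is not shown):

      import xml.etree.ElementTree as ET

      # Two records meaning the same thing under different DTDs (invented names).
      a = ET.fromstring("<book><creator>Heflin, J.</creator></book>")
      b = ET.fromstring("<publication><author>Hendler, J.</author></publication>")

      # Nothing in the XML declares that 'creator' and 'author' denote the same
      # relation; that knowledge must come from outside, which is what ontology
      # languages such as SHOE make explicit and machine-readable.
      SAME_AS = {"creator": "author", "book": "publication"}

      def normalize(elem):
          children = [(SAME_AS.get(c.tag, c.tag), c.text) for c in elem]
          return (SAME_AS.get(elem.tag, elem.tag), children)

      print(normalize(a))  # ('publication', [('author', 'Heflin, J.')])
      print(normalize(b))  # ('publication', [('author', 'Hendler, J.')])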
    Date
    11. 5.2013 19:22:18
  4. Kratochwil, F.; Peltonen, H.: Constructivism (2022) 0.03
    0.031700842 = product of:
      0.0792521 = sum of:
        0.05623238 = weight(_text_:philosophy in 829) [ClassicSimilarity], result of:
          0.05623238 = score(doc=829,freq=2.0), product of:
            0.23055021 = queryWeight, product of:
              5.5189433 = idf(docFreq=481, maxDocs=44218)
              0.04177434 = queryNorm
            0.24390514 = fieldWeight in 829, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5189433 = idf(docFreq=481, maxDocs=44218)
              0.03125 = fieldNorm(doc=829)
        0.023019718 = weight(_text_:of in 829) [ClassicSimilarity], result of:
          0.023019718 = score(doc=829,freq=52.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.35238793 = fieldWeight in 829, product of:
              7.2111025 = tf(freq=52.0), with freq of:
                52.0 = termFreq=52.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=829)
      0.4 = coord(2/5)
    
    Abstract
    Constructivism in the social sciences has seen several ups and downs over recent decades. It was successful rather early in sociology but hotly contested in International Politics/Relations (IR). Oddly enough, just at the moment it made important inroads into the research agenda and became accepted by the mainstream, the enthusiasm for it waned. Many constructivists, as did mainstream scholars, moved from "grand theory" or even "meta-theory" toward "normal science," or experimented with other (eclectic) approaches, of which the turns to practices, to emotions, to new materialism, to the visual, and to the queer are some of the latest manifestations. In a way, constructivism was "successful," on the one hand, by introducing norms, norm-dynamics, and diffusion; the role of new actors in world politics; and the changing role of institutions into the debates, while losing, on the other hand, much of its critical potential. The latter survived only on the fringes, and in Europe more than in the United States. In IR, curiously, constructivism, which was rooted in various European traditions (philosophy, history, linguistics, social analysis), was originally introduced in Europe via the disciplinary discussions taking place in the United States. Yet, especially in its critical version, it has found a more conducive environment in Europe than in the United States.
    In the United States, soon after its emergence, constructivism became "mainstreamed" by having its analysis of norms reduced to "variable research." In such research, positive examples, for instance the spread of norms, were included, but, strangely, empirical evidence of counterexamples, of norm "deaths" (preventive strikes, unlawful combatants, drone strikes, extrajudicial killings), was not. The elective affinity of constructivism and humanitarianism seemed to have transformed the former into the Enlightenment project of "progress." Even Kant was finally pressed into the service of "liberalism" in the U.S. discussion, and his notion of the "practical interest of reason" morphed into the political project of an "end of history." This "slant" has prevented a serious conceptual engagement with the "history" of law and (inter-)national politics and the epistemological problems that are raised thereby. This bowdlerization of constructivism is further buttressed by the fact that in the "knowledge industry" none of the "leading" U.S. departments has a constructivist on board, thereby ensuring the narrowness of conceptual and methodological choices to which the future "professionals" are exposed. This article contextualizes constructivism and its emergence within a changing world and within the evolution of the discipline. The aim is not to provide a definition or a typology of constructivism, since such efforts go against the critical dimension of constructivism. Applying this critique to constructivism itself leads to a reflection on truth, knowledge, and the need for (re-)orientation.
    Source
    Oxford research encyclopedia of politics
  5. Styltsvig, H.B.: Ontology-based information retrieval (2006) 0.03
    0.031522032 = product of:
      0.078805074 = sum of:
        0.05623238 = weight(_text_:philosophy in 1154) [ClassicSimilarity], result of:
          0.05623238 = score(doc=1154,freq=2.0), product of:
            0.23055021 = queryWeight, product of:
              5.5189433 = idf(docFreq=481, maxDocs=44218)
              0.04177434 = queryNorm
            0.24390514 = fieldWeight in 1154, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5189433 = idf(docFreq=481, maxDocs=44218)
              0.03125 = fieldNorm(doc=1154)
        0.022572692 = weight(_text_:of in 1154) [ClassicSimilarity], result of:
          0.022572692 = score(doc=1154,freq=50.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.34554482 = fieldWeight in 1154, product of:
              7.071068 = tf(freq=50.0), with freq of:
                50.0 = termFreq=50.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=1154)
      0.4 = coord(2/5)
    
    Abstract
    In this thesis, we present methods for introducing ontologies in information retrieval. The main hypothesis is that the inclusion of conceptual knowledge such as ontologies in the information retrieval process can contribute to the solution of major problems currently found in information retrieval. This utilization of ontologies poses a number of challenges. Our focus is on the use of similarity measures derived from the knowledge about relations between concepts in ontologies, the recognition of semantic information in texts and the mapping of this knowledge into the ontologies in use, as well as how to fuse together the ideas of ontological similarity and ontological indexing into a realistic information retrieval scenario. To achieve the recognition of semantic knowledge in a text, shallow natural language processing is used during indexing that reveals knowledge to the level of noun phrases. Furthermore, we briefly cover the identification of semantic relations inside and between noun phrases, as well as discuss which kinds of problems are caused by an increase in compoundness with respect to the structure of concepts in the evaluation of queries. Measuring similarity between concepts based on distances in the structure of the ontology is discussed. In addition, a shared nodes measure is introduced and, based on a set of intuitive similarity properties, compared to a number of different measures. In this comparison the shared nodes measure appears to be superior, though more computationally complex. Some major problems with shared nodes are discussed, relating to how relations differ in the degree to which they bring the concepts they connect closer together. A generalized measure called weighted shared nodes is introduced to deal with these problems. Finally, the utilization of concept similarity in query evaluation is discussed. A semantic expansion approach that incorporates concept similarity is introduced and a generalized fuzzy set retrieval model that applies expansion during query evaluation is presented. While not commonly used in present information retrieval systems, it appears that the fuzzy set model offers the flexibility needed when generalizing to an ontology-based retrieval model and, with the introduction of a hierarchical fuzzy aggregation principle, compound concepts can be handled in a straightforward and natural manner.
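    As a rough illustration of the shared nodes idea (the thesis gives the exact definition; a Jaccard-style overlap of upward closures in a toy is-a hierarchy is assumed here):

      # Toy is-a hierarchy: child -> parent (invented concepts).
      PARENT = {"poodle": "dog", "beagle": "dog", "dog": "mammal",
                "cat": "mammal", "mammal": "animal", "animal": None}

      def up_closure(concept):
          """The concept itself plus every ancestor reachable via is-a edges."""
          seen = set()
          while concept is not None:
              seen.add(concept)
              concept = PARENT[concept]
          return seen

      def shared_nodes_sim(a, b):
          # The weighted variant discussed above would additionally weight
          # nodes by the kinds of relations traversed to reach them.
          sa, sb = up_closure(a), up_closure(b)
          return len(sa & sb) / len(sa | sb)

      print(shared_nodes_sim("poodle", "beagle"))  # 0.6
      print(shared_nodes_sim("poodle", "cat"))     # 0.4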
    Content
    A dissertation presented to the Faculties of Roskilde University in partial fulfillment of the requirements for the degree of Doctor of Philosophy. See: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.117.987 or http://coitweb.uncc.edu/~ras/RS/Onto-Retrieval.pdf.
  6. Wei, W.; Ram, S.: Utilizing social bookmarking tag space for Web content discovery : a social network analysis approach (2010) 0.03
    0.030962978 = product of:
      0.07740744 = sum of:
        0.05623238 = weight(_text_:philosophy in 1) [ClassicSimilarity], result of:
          0.05623238 = score(doc=1,freq=2.0), product of:
            0.23055021 = queryWeight, product of:
              5.5189433 = idf(docFreq=481, maxDocs=44218)
              0.04177434 = queryNorm
            0.24390514 = fieldWeight in 1, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5189433 = idf(docFreq=481, maxDocs=44218)
              0.03125 = fieldNorm(doc=1)
        0.021175062 = weight(_text_:of in 1) [ClassicSimilarity], result of:
          0.021175062 = score(doc=1,freq=44.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.3241498 = fieldWeight in 1, product of:
              6.6332498 = tf(freq=44.0), with freq of:
                44.0 = termFreq=44.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=1)
      0.4 = coord(2/5)
    
    Abstract
    Social bookmarking has gained popularity since the advent of Web 2.0. Keywords known as tags are created to annotate web content, and the resulting tag space composed of the tags, the resources, and the users arises as a new platform for web content discovery. Useful and interesting web resources can be located through searching and browsing based on tags, as well as by following the user-user connections formed in the social bookmarking community. However, the effectiveness of tag-based search is limited due to the lack of explicitly represented semantics in the tag space. In addition, social connections between users are underused for web content discovery because of inadequate social functions. In this research, we propose a comprehensive framework to reorganize the flat tag space into a hierarchical faceted model. We also study the structure and properties of various networks emerging from the tag space for the purpose of more efficient web content discovery. The major research approach is social network analysis (SNA), together with methodologies employed in design science research. The contributions of our research include: (i) a faceted model to categorize social bookmarking tags; (ii) a relationship ontology to represent the semantics of relationships between tags; (iii) heuristics to reorganize the flat tag space into a hierarchical faceted model using analysis of tag-tag co-occurrence networks; (iv) an implemented prototype system as proof-of-concept to validate the feasibility of the reorganization approach; (v) a set of evaluations of the social functions of the current networking features of social bookmarking and a series of recommendations as to how to improve the social functions to facilitate web content discovery.
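    A sketch of the network-construction step: tag-tag co-occurrence counts can be read directly off per-bookmark tag sets (sample data invented; the dissertation's full pipeline, thresholds, and SNA metrics are not reproduced):

      from collections import Counter
      from itertools import combinations

      # Each bookmark is annotated with a set of tags (invented sample data).
      bookmarks = [
          {"python", "tutorial", "programming"},
          {"python", "programming", "web"},
          {"design", "web", "css"},
      ]

      # Edge weight = number of bookmarks in which two tags co-occur; the
      # resulting weighted graph is the raw material for the faceted model.
      cooc = Counter()
      for tags in bookmarks:
          cooc.update(combinations(sorted(tags), 2))

      for pair, weight in cooc.most_common(3):
          print(pair, weight)   # ('programming', 'python') 2, ...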
    Content
    A dissertation submitted to the Faculty of the Committee on Business Administration in partial fulfillment of the requirements for the degree of Doctor of Philosophy with a major in Management, in the Graduate College, The University of Arizona. See: http://hdl.handle.net/10150/195123. See also: https://www.semanticscholar.org/paper/Utilizing-social-bookmarking-tag-space-for-web-a-Ram-Wei/da9e7e5ee771008b741af7176d3f0d67128d1dca.
  7. Cohen, D.J.: From Babel to knowledge : data mining large digital collections (2006) 0.03
    0.03036432 = product of:
      0.0759108 = sum of:
        0.05623238 = weight(_text_:philosophy in 1178) [ClassicSimilarity], result of:
          0.05623238 = score(doc=1178,freq=2.0), product of:
            0.23055021 = queryWeight, product of:
              5.5189433 = idf(docFreq=481, maxDocs=44218)
              0.04177434 = queryNorm
            0.24390514 = fieldWeight in 1178, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5189433 = idf(docFreq=481, maxDocs=44218)
              0.03125 = fieldNorm(doc=1178)
        0.019678416 = weight(_text_:of in 1178) [ClassicSimilarity], result of:
          0.019678416 = score(doc=1178,freq=38.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.30123898 = fieldWeight in 1178, product of:
              6.164414 = tf(freq=38.0), with freq of:
                38.0 = termFreq=38.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=1178)
      0.4 = coord(2/5)
    
    Abstract
    In Jorge Luis Borges's curious short story The Library of Babel, the narrator describes an endless collection of books stored from floor to ceiling in a labyrinth of countless hexagonal rooms. The pages of the library's books seem to contain random sequences of letters and spaces; occasionally a few intelligible words emerge in the sea of paper and ink. Nevertheless, readers diligently, and exasperatingly, scan the shelves for coherent passages. The narrator himself has wandered numerous rooms in search of enlightenment, but with resignation he simply awaits his death and burial - which Borges explains (with signature dark humor) consists of being tossed unceremoniously over the library's banister. Borges's nightmare, of course, is a cursed vision of the research methods of disciplines such as literature, history, and philosophy, where the careful reading of books, one after the other, is supposed to lead inexorably to knowledge and understanding. Computer scientists would approach Borges's library far differently. Employing the information theory that forms the basis for search engines and other computerized techniques for assessing in one fell swoop large masses of documents, they would quickly realize the collection's incoherence through sampling and statistical methods - and wisely start looking for the library's exit. These computational methods, which allow us to find patterns, determine relationships, categorize documents, and extract information from massive corpuses, will form the basis for new tools for research in the humanities and other disciplines in the coming decade. For the past three years I have been experimenting with how to provide such end-user tools - that is, tools that harness the power of vast electronic collections while hiding much of their complicated technical plumbing. In particular, I have made extensive use of the application programming interfaces (APIs) the leading search engines provide for programmers to query their databases directly (from server to server without using their web interfaces). In addition, I have explored how one might extract information from large digital collections, from the well-curated lexicographic database WordNet to the democratic (and poorly curated) online reference work Wikipedia. While processing these digital corpuses is currently an imperfect science, even now useful tools can be created by combining various collections and methods for searching and analyzing them. And more importantly, these nascent services suggest a future in which information can be gleaned from, and sense can be made out of, even imperfect digital libraries of enormous scale. A brief examination of two approaches to data mining large digital collections hints at this future, while also providing some lessons about how to get there.
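    As a taste of the kind of lookup such end-user tools build on, the WordNet database mentioned above can be queried in a few lines (a sketch using the nltk interface, assuming the wordnet corpus has been downloaded; this is not the author's own toolchain):

      # pip install nltk; then: python -m nltk.downloader wordnet
      from nltk.corpus import wordnet as wn

      # Senses of a term plus one step up the hypernym hierarchy: the kind of
      # structured answer a research tool can extract from a curated corpus.
      for synset in wn.synsets("library")[:3]:
          parents = [h.name() for h in synset.hypernyms()]
          print(synset.name(), "->", parents, "-", synset.definition())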
  8. Dushay, N.: Visualizing bibliographic metadata : a virtual (book) spine viewer (2004) 0.03
    0.030303081 = product of:
      0.050505135 = sum of:
        0.004989027 = product of:
          0.024945134 = sum of:
            0.024945134 = weight(_text_:problem in 1197) [ClassicSimilarity], result of:
              0.024945134 = score(doc=1197,freq=2.0), product of:
                0.17731056 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.04177434 = queryNorm
                0.14068612 = fieldWeight in 1197, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1197)
          0.2 = coord(1/5)
        0.01854536 = weight(_text_:of in 1197) [ClassicSimilarity], result of:
          0.01854536 = score(doc=1197,freq=60.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.28389403 = fieldWeight in 1197, product of:
              7.745967 = tf(freq=60.0), with freq of:
                60.0 = termFreq=60.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1197)
        0.026970748 = product of:
          0.053941496 = sum of:
            0.053941496 = weight(_text_:mind in 1197) [ClassicSimilarity], result of:
              0.053941496 = score(doc=1197,freq=2.0), product of:
                0.2607373 = queryWeight, product of:
                  6.241566 = idf(docFreq=233, maxDocs=44218)
                  0.04177434 = queryNorm
                0.20688063 = fieldWeight in 1197, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.241566 = idf(docFreq=233, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1197)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    User interfaces for digital information discovery often require users to click around and read a lot of text in order to find the text they want to read, a process that is often frustrating and tedious. This is exacerbated because of the limited amount of text that can be displayed on a computer screen. To improve the user experience of computer-mediated information discovery, information visualization techniques are applied to the digital library context, while retaining traditional information organization concepts. In this article, the "virtual (book) spine" and the virtual spine viewer are introduced. The virtual spine viewer is an application which allows users to visually explore large information spaces or collections while also allowing users to home in on individual resources of interest. The virtual spine viewer introduced here is an alpha prototype, presented to promote discussion and further work. Information discovery changed radically with the introduction of computerized library access catalogs, the World Wide Web and its search engines, and online bookstores. Yet few instances of these technologies provide a user experience analogous to walking among well-organized, well-stocked bookshelves, which many people find useful as well as pleasurable. To put it another way, many of us have heard or voiced complaints about the paucity of "online browsing", but what does this really mean? In traditional information spaces such as libraries, often we can move freely among the books and other resources. When we walk among organized, labeled bookshelves, we get a sense of the information space: we take in clues, perhaps unconsciously, as to the scope of the collection, the currency of resources, the frequency of their use, etc. We also enjoy unexpected discoveries such as finding an interesting resource because library staff deliberately located it near similar resources, or because it was mis-shelved, or because we saw it on a bookshelf on the way to the water fountain.
    When our experience of information discovery is mediated by a computer, we neither move ourselves nor the monitor. We have only the computer's monitor to view, and the keyboard and/or mouse to manipulate what is displayed there. Computer interfaces often reduce our ability to get a sense of the contents of a library: we don't perceive its breadth (the quantity of materials/information), its density (how full the shelves are, how thorough the collection is for individual topics), or the general audience for the materials (e.g., whether the materials are appropriate for middle school students, college professors, etc.). Additionally, many computer interfaces for information discovery require users to scroll through long lists, to click numerous navigational links and to read a lot of text to find the exact text they want to read. Text features of resources are almost always presented alphabetically, and these alphabetical lists can sometimes be very long. Alphabetical ordering is certainly an improvement over no ordering, but it generally has no bearing on features with an inherent non-alphabetical ordering (e.g., dates of historical events), nor does it necessarily group similar items together. Alphabetical ordering of resources is analogous to one of the most familiar complaints about dictionaries: sometimes you need to know how to spell a word in order to look up its correct spelling in the dictionary. Some have used technology to replicate the appearance of physical libraries, presenting rooms of bookcases and shelves of book spines in virtual 3D environments. This approach presents a problem, as few book spines can be displayed legibly on a monitor screen. This article examines the role of book spines, call numbers, and other traditional organizational and information discovery concepts, and integrates this knowledge with information visualization techniques to show how computers and monitors can meet or exceed similar information discovery methods. The goal is to tap the unique potentials of current information visualization approaches in order to improve information discovery, offer new services, and most important of all, improve user satisfaction. We need to capitalize on what computers do well while bearing in mind their limitations. The intent is to design GUIs to optimize utility and provide a positive experience for the user.
  9. Faro, S.; Francesconi, E.; Sandrucci, V.: Thesauri KOS analysis and selected thesaurus mapping methodology on the project case-study (2007) 0.03
    0.0292275 = product of:
      0.0487125 = sum of:
        0.0133040715 = product of:
          0.066520356 = sum of:
            0.066520356 = weight(_text_:problem in 2227) [ClassicSimilarity], result of:
              0.066520356 = score(doc=2227,freq=2.0), product of:
                0.17731056 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.04177434 = queryNorm
                0.375163 = fieldWeight in 2227, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2227)
          0.2 = coord(1/5)
        0.0127690425 = weight(_text_:of in 2227) [ClassicSimilarity], result of:
          0.0127690425 = score(doc=2227,freq=4.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.19546966 = fieldWeight in 2227, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=2227)
        0.022639386 = product of:
          0.045278773 = sum of:
            0.045278773 = weight(_text_:22 in 2227) [ClassicSimilarity], result of:
              0.045278773 = score(doc=2227,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.30952093 = fieldWeight in 2227, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2227)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    - Introduction to the Thesaurus Interoperability problem
    - Analysis of the thesauri for the project case study
    - Overview of Schema/Ontology Mapping methodologies
    - The proposed approach for thesaurus mapping
    - Standards for implementing the proposed methodology
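    As a sketch of the mapping step, two SKOS concepts whose preferred labels coincide can be linked with skos:exactMatch (rdflib; the URIs and labels are invented placeholders, and real mapping algorithms go well beyond exact label equality):

      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, SKOS

      A = Namespace("http://example.org/thesaurusA/")
      B = Namespace("http://example.org/thesaurusB/")

      g = Graph()
      for concept, label in ((A.c12, "employment"), (B.k07, "employment"), (B.k08, "pension")):
          g.add((concept, RDF.type, SKOS.Concept))
          g.add((concept, SKOS.prefLabel, Literal(label, lang="en")))

      # Naive alignment: assert skos:exactMatch wherever prefLabels coincide.
      by_label = {}
      for concept, _, label in g.triples((None, SKOS.prefLabel, None)):
          by_label.setdefault(str(label), []).append(concept)
      for group in by_label.values():
          if len(group) == 2:
              g.add((group[0], SKOS.exactMatch, group[1]))

      print(g.serialize(format="turtle"))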
    Date
    7.11.2008 10:40:22
  10. Guerrini, M.: Cataloguing based on bibliographic axiology (2010) 0.03
    0.028211588 = product of:
      0.07052897 = sum of:
        0.016587472 = weight(_text_:of in 2624) [ClassicSimilarity], result of:
          0.016587472 = score(doc=2624,freq=12.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.25392252 = fieldWeight in 2624, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=2624)
        0.053941496 = product of:
          0.10788299 = sum of:
            0.10788299 = weight(_text_:mind in 2624) [ClassicSimilarity], result of:
              0.10788299 = score(doc=2624,freq=2.0), product of:
                0.2607373 = queryWeight, product of:
                  6.241566 = idf(docFreq=233, maxDocs=44218)
                  0.04177434 = queryNorm
                0.41376126 = fieldWeight in 2624, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.241566 = idf(docFreq=233, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2624)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The article presents the work of Elaine Svenonius The Intellectual Foundation of Information Organization, translated in Italian and published by Le Lettere of Florence, within the series Pinakes, with the title Il fondamento intellettuale dell'organizzazione dell'informazione. The Intellectual Foundation of Information Organization defines the theoretical aspects of library science, its philosophical basics and principles, the purposes that must be kept in mind, abstracting from the technology used in a library. The book deals with information organization and bibliographic universe, in particular using the bibliographic entities defined in FRBR, at first. Then, it analyzes all the specific languages by which works and subjects are treated. This work, already acknowledged as a classic, organizes, synthesizes and make easily understood the whole complex of knowledge, practices and procedures developed in the last 150 years.
  11. Faro, S.; Francesconi, E.; Marinai, E.; Sandrucci, V.: Report on execution and results of the interoperability tests (2008) 0.03
    0.026983522 = product of:
      0.044972535 = sum of:
        0.0133040715 = product of:
          0.066520356 = sum of:
            0.066520356 = weight(_text_:problem in 7411) [ClassicSimilarity], result of:
              0.066520356 = score(doc=7411,freq=2.0), product of:
                0.17731056 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.04177434 = queryNorm
                0.375163 = fieldWeight in 7411, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.0625 = fieldNorm(doc=7411)
          0.2 = coord(1/5)
        0.009029076 = weight(_text_:of in 7411) [ClassicSimilarity], result of:
          0.009029076 = score(doc=7411,freq=2.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.13821793 = fieldWeight in 7411, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=7411)
        0.022639386 = product of:
          0.045278773 = sum of:
            0.045278773 = weight(_text_:22 in 7411) [ClassicSimilarity], result of:
              0.045278773 = score(doc=7411,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.30952093 = fieldWeight in 7411, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=7411)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    - Formal characterization given to the thesaurus mapping problem
    - Interoperability workflow
      - Thesauri SKOS Core transformation
      - Thesaurus Mapping algorithms implementation
    - The "gold standard" data set and the THALEN application
    - Thesaurus interoperability assessment measures
    - Experimental results
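    The assessment measures in such an exercise are typically precision and recall of the produced concept mappings against the gold-standard set; a minimal sketch with invented placeholder pairs:

      # Mappings as (source_concept, target_concept) pairs; invented placeholders.
      gold     = {("A:employment", "B:employment"), ("A:pension", "B:pension"),
                  ("A:tax", "B:taxation")}
      produced = {("A:employment", "B:employment"), ("A:tax", "B:levy")}

      true_positives = len(gold & produced)
      precision = true_positives / len(produced)   # 0.50
      recall    = true_positives / len(gold)       # 0.33
      f1 = 2 * precision * recall / (precision + recall)
      print(f"P={precision:.2f} R={recall:.2f} F1={f1:.2f}")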
    Date
    7.11.2008 10:40:22
  12. Shirky, C.: Ontology is overrated : categories, links, and tags (2005) 0.03
    0.02579991 = product of:
      0.06449977 = sum of:
        0.019548526 = weight(_text_:of in 1265) [ClassicSimilarity], result of:
          0.019548526 = score(doc=1265,freq=24.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.2992506 = fieldWeight in 1265, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1265)
        0.04495125 = product of:
          0.0899025 = sum of:
            0.0899025 = weight(_text_:mind in 1265) [ClassicSimilarity], result of:
              0.0899025 = score(doc=1265,freq=2.0), product of:
                0.2607373 = queryWeight, product of:
                  6.241566 = idf(docFreq=233, maxDocs=44218)
                  0.04177434 = queryNorm
                0.34480107 = fieldWeight in 1265, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.241566 = idf(docFreq=233, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1265)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Today I want to talk about categorization, and I want to convince you that a lot of what we think we know about categorization is wrong. In particular, I want to convince you that many of the ways we're attempting to apply categorization to the electronic world are actually a bad fit, because we've adopted habits of mind that are left over from earlier strategies. I also want to convince you that what we're seeing when we see the Web is actually a radical break with previous categorization strategies, rather than an extension of them. The second part of the talk is more speculative, because it is often the case that old systems get broken before people know what's going to take their place. (Anyone watching the music industry can see this at work today.) That's what I think is happening with categorization. What I think is coming instead are much more organic ways of organizing information than our current categorization schemes allow, based on two units -- the link, which can point to anything, and the tag, which is a way of attaching labels to links. The strategy of tagging -- free-form labeling, without regard to categorical constraints -- seems like a recipe for disaster, but as the Web has shown us, you can extract a surprising amount of value from big messy data sets.
    Footnote
    This piece is based on two talks I gave in the spring of 2005 -- one at the O'Reilly ETech conference in March, entitled "Ontology Is Overrated", and one at the IMCExpo in April entitled "Folksonomies & Tags: The rise of user-developed classification." The written version is a heavily edited concatenation of those two talks.
  13. Mitchell, J.S.: DDC 22 : an introduction (2003) 0.02
    0.024784494 = product of:
      0.061961234 = sum of:
        0.017665926 = weight(_text_:of in 1936) [ClassicSimilarity], result of:
          0.017665926 = score(doc=1936,freq=10.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.2704316 = fieldWeight in 1936, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1936)
        0.044295307 = product of:
          0.088590614 = sum of:
            0.088590614 = weight(_text_:22 in 1936) [ClassicSimilarity], result of:
              0.088590614 = score(doc=1936,freq=10.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.6055961 = fieldWeight in 1936, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1936)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Dewey Decimal Classification and Relative Index, Edition 22 (DDC 22) will be issued simultaneously in print and web versions in July 2003. The new edition is the first full print update to the Dewey Decimal Classification system in seven years; it includes several significant updates and many new numbers and topics. DDC 22 also features some fundamental structural changes that have been introduced with the goals of promoting classifier efficiency and improving the DDC for use in a variety of applications in the web environment. Most importantly, the content of the new edition has been shaped by the needs and recommendations of Dewey users around the world. The worldwide user community has an important role in shaping the future of the DDC.
    Object
    DDC-22
  14. Bradford, R.B.: Relationship discovery in large text collections using Latent Semantic Indexing (2006) 0.02
    0.022275176 = product of:
      0.037125293 = sum of:
        0.0066520358 = product of:
          0.033260178 = sum of:
            0.033260178 = weight(_text_:problem in 1163) [ClassicSimilarity], result of:
              0.033260178 = score(doc=1163,freq=2.0), product of:
                0.17731056 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.04177434 = queryNorm
                0.1875815 = fieldWeight in 1163, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1163)
          0.2 = coord(1/5)
        0.019153563 = weight(_text_:of in 1163) [ClassicSimilarity], result of:
          0.019153563 = score(doc=1163,freq=36.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.2932045 = fieldWeight in 1163, product of:
              6.0 = tf(freq=36.0), with freq of:
                36.0 = termFreq=36.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=1163)
        0.011319693 = product of:
          0.022639386 = sum of:
            0.022639386 = weight(_text_:22 in 1163) [ClassicSimilarity], result of:
              0.022639386 = score(doc=1163,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.15476047 = fieldWeight in 1163, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1163)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    This paper addresses the problem of information discovery in large collections of text. For users, one of the key problems in working with such collections is determining where to focus their attention. In selecting documents for examination, users must be able to formulate reasonably precise queries. Queries that are too broad will greatly reduce the efficiency of information discovery efforts by overwhelming the users with peripheral information. In order to formulate efficient queries, a mechanism is needed to automatically alert users regarding potentially interesting information contained within the collection. This paper presents the results of an experiment designed to test one approach to generation of such alerts. The technique of latent semantic indexing (LSI) is used to identify relationships among entities of interest. Entity extraction software is used to pre-process the text of the collection so that the LSI space contains representation vectors for named entities in addition to those for individual terms. In the LSI space, the cosine of the angle between the representation vectors for two entities captures important information regarding the degree of association of those two entities. For appropriate choices of entities, determining the entity pairs with the highest mutual cosine values yields valuable information regarding the contents of the text collection. The test database used for the experiment consists of 150,000 news articles. The proposed approach for alert generation is tested using a counterterrorism analysis example. The approach is shown to have significant potential for aiding users in rapidly focusing on information of potential importance in large text collections. The approach also has value in identifying possible use of aliases.
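    The core mechanism, cosine similarity between representation vectors in a reduced space, fits in a few lines of scikit-learn (toy documents stand in for the 150,000-article corpus, and no entity extraction is performed, so plain terms play the role of entities):

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.decomposition import TruncatedSVD
      from sklearn.metrics.pairwise import cosine_similarity

      docs = [  # toy stand-ins for news articles
          "bank announces merger with rival bank",
          "merger talks between the two banks continue",
          "river bank erosion worries local farmers",
      ]

      vec = TfidfVectorizer()
      X = vec.fit_transform(docs)                 # document-term matrix
      lsi = TruncatedSVD(n_components=2).fit(X)   # the reduced LSI space
      term_vecs = lsi.components_.T               # one representation vector per term

      idx = vec.vocabulary_
      a, b = term_vecs[idx["merger"]], term_vecs[idx["bank"]]
      print(cosine_similarity([a], [b])[0, 0])    # degree of association of the pair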
    Source
    Proceedings of the Fourth Workshop on Link Analysis, Counterterrorism, and Security, SIAM Data Mining Conference, Bethesda, MD, 20-22 April, 2006. [http://www.siam.org/meetings/sdm06/workproceed/Link%20Analysis/15.pdf]
  15. Atran, S.: Basic conceptual domains (1989) 0.02
    0.021576598 = product of:
      0.10788299 = sum of:
        0.10788299 = product of:
          0.21576598 = sum of:
            0.21576598 = weight(_text_:mind in 478) [ClassicSimilarity], result of:
              0.21576598 = score(doc=478,freq=2.0), product of:
                0.2607373 = queryWeight, product of:
                  6.241566 = idf(docFreq=233, maxDocs=44218)
                  0.04177434 = queryNorm
                0.8275225 = fieldWeight in 478, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.241566 = idf(docFreq=233, maxDocs=44218)
                  0.09375 = fieldNorm(doc=478)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Source
    Mind and language. 4(1989) no.1/2, S.7-16
  16. Tudhope, D.: Knowledge Organization System Services : brief review of NKOS activities and possibility of KOS registries (2007) 0.02
    0.021245057 = product of:
      0.05311264 = sum of:
        0.019153563 = weight(_text_:of in 100) [ClassicSimilarity], result of:
          0.019153563 = score(doc=100,freq=4.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.2932045 = fieldWeight in 100, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.09375 = fieldNorm(doc=100)
        0.033959076 = product of:
          0.06791815 = sum of:
            0.06791815 = weight(_text_:22 in 100) [ClassicSimilarity], result of:
              0.06791815 = score(doc=100,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.46428138 = fieldWeight in 100, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=100)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Date
    22. 9.2007 15:41:14
  17. Van der Veer Martens, B.: Do citation systems represent theories of truth? (2001) 0.02
    0.020523002 = product of:
      0.051307507 = sum of:
        0.011286346 = weight(_text_:of in 3925) [ClassicSimilarity], result of:
          0.011286346 = score(doc=3925,freq=2.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.17277241 = fieldWeight in 3925, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.078125 = fieldNorm(doc=3925)
        0.04002116 = product of:
          0.08004232 = sum of:
            0.08004232 = weight(_text_:22 in 3925) [ClassicSimilarity], result of:
              0.08004232 = score(doc=3925,freq=4.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.54716086 = fieldWeight in 3925, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3925)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Date
    22. 7.2006 15:22:28
  18. Díaz, P.: Usability of hypermedia educational e-books (2003) 0.02
    0.02037361 = product of:
      0.050934028 = sum of:
        0.0149730295 = weight(_text_:of in 1198) [ClassicSimilarity], result of:
          0.0149730295 = score(doc=1198,freq=22.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.2292085 = fieldWeight in 1198, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=1198)
        0.035961 = product of:
          0.071922 = sum of:
            0.071922 = weight(_text_:mind in 1198) [ClassicSimilarity], result of:
              0.071922 = score(doc=1198,freq=2.0), product of:
                0.2607373 = queryWeight, product of:
                  6.241566 = idf(docFreq=233, maxDocs=44218)
                  0.04177434 = queryNorm
                0.27584085 = fieldWeight in 1198, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.241566 = idf(docFreq=233, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1198)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    To arrive at relevant and reliable conclusions concerning the usability of a hypermedia educational e-book, developers have to apply a well-defined evaluation procedure as well as a set of clear, concrete and measurable quality criteria. Evaluating an educational tool involves testing not only the user interface but also the didactic method, the instructional materials and the interaction mechanisms, to prove whether or not they help users reach their goals for learning. This article presents a number of evaluation criteria for hypermedia educational e-books and describes how they are embedded into an evaluation procedure. This work is chiefly aimed at helping education developers evaluate their systems and at providing them with guidance for addressing educational requirements during the design process. In recent years, more and more educational e-books are being created, whether by academics trying to keep pace with the advanced requirements of the virtual university or by publishers seeking to meet the increasing demand for educational resources that can be accessed anywhere and anytime, and that include multimedia information, hypertext links and powerful search and annotating mechanisms. To develop a useful educational e-book many things have to be considered, such as the reading patterns of users, accessibility for different types of users and computer platforms, copyright and legal issues, development of new business models and so on. Addressing usability is very important since e-books are interactive systems and, consequently, have to be designed with the needs of their users in mind. Evaluating usability involves analyzing whether systems are effective, efficient and secure to use; easy to learn and remember; and offer good utility. E-books, like any interactive system, have to be assessed to determine whether they are really usable as well as useful. Such an evaluation is not only concerned with assessing the user interface but is also aimed at analyzing whether the system can be used in an efficient way to meet the needs of its users - who in the case of educational e-books are learners and teachers. Evaluation provides the opportunity to gather valuable information about design decisions. However, to be successful the evaluation has to be carefully planned and prepared so developers collect appropriate and reliable data from which to draw relevant conclusions.
  19. Guidi, F.; Sacerdoti Coen, C.: ¬A survey on retrieval of mathematical knowledge (2015) 0.02
    0.02034877 = product of:
      0.050871924 = sum of:
        0.022572692 = weight(_text_:of in 5865) [ClassicSimilarity], result of:
          0.022572692 = score(doc=5865,freq=8.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.34554482 = fieldWeight in 5865, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.078125 = fieldNorm(doc=5865)
        0.028299233 = product of:
          0.056598466 = sum of:
            0.056598466 = weight(_text_:22 in 5865) [ClassicSimilarity], result of:
              0.056598466 = score(doc=5865,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.38690117 = fieldWeight in 5865, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5865)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    We present a short survey of the literature on indexing and retrieval of mathematical knowledge, with pointers to 72 papers and tentative taxonomies of both retrieval problems and recurring techniques.
    Date
    22. 2.2017 12:51:57
  20. Strobel, S.: ¬The complete Linux kit : fully configured LINUX system kernel (1997) 0.02
    0.019001076 = product of:
      0.04750269 = sum of:
        0.013543615 = weight(_text_:of in 8959) [ClassicSimilarity], result of:
          0.013543615 = score(doc=8959,freq=2.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.20732689 = fieldWeight in 8959, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.09375 = fieldNorm(doc=8959)
        0.033959076 = product of:
          0.06791815 = sum of:
            0.06791815 = weight(_text_:22 in 8959) [ClassicSimilarity], result of:
              0.06791815 = score(doc=8959,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.46428138 = fieldWeight in 8959, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=8959)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Everything users need to maximize the power of Linux is included in this convenient, inexpensive package.
    Date
    16. 7.2002 20:22:55

Types

  • a 453
  • p 22
  • r 18
  • s 16
  • n 14
  • x 12
  • b 6
  • i 6
  • m 6
