Search (339 results, page 1 of 17)

  • × theme_ss:"Internet"
  • × type_ss:"a"
  • × year_i:[2000 TO 2010}
  1. Devadason, F.J.; Intaraksa, N.; Patamawongjariya, P.; Desai, K.: Faceted indexing based system for organizing and accessing Internet resources (2002) 0.07
    0.06985216 = product of:
      0.11642026 = sum of:
        0.044399645 = weight(_text_:index in 97) [ClassicSimilarity], result of:
          0.044399645 = score(doc=97,freq=4.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.23897146 = fieldWeight in 97, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.02734375 = fieldNorm(doc=97)
        0.06523886 = weight(_text_:system in 97) [ClassicSimilarity], result of:
          0.06523886 = score(doc=97,freq=32.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.4871716 = fieldWeight in 97, product of:
              5.656854 = tf(freq=32.0), with freq of:
                32.0 = termFreq=32.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02734375 = fieldNorm(doc=97)
        0.0067817504 = product of:
          0.02034525 = sum of:
            0.02034525 = weight(_text_:29 in 97) [ClassicSimilarity], result of:
              0.02034525 = score(doc=97,freq=2.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.13602862 = fieldWeight in 97, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=97)
          0.33333334 = coord(1/3)
      0.6 = coord(3/5)
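The score explanation above follows Lucene's ClassicSimilarity (TF-IDF): each term weight is queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = √freq × idf × fieldNorm, and the document score is the clause sum scaled by coord. A minimal sketch reproducing the first clause from the numbers shown (the function name is illustrative, not Lucene's API):

```python
import math

def classic_term_score(freq, idf, query_norm, field_norm):
    """Lucene ClassicSimilarity term score = queryWeight * fieldWeight."""
    tf = math.sqrt(freq)                  # 2.0 for freq=4.0
    query_weight = idf * query_norm       # 0.18579477 in the tree above
    field_weight = tf * idf * field_norm  # 0.23897146 in the tree above
    return query_weight * field_weight

# First clause of the explanation for doc 97 (_text_:index):
w_index = classic_term_score(4.0, 4.369764, 0.04251826, 0.02734375)
print(w_index)  # ≈ 0.044399645, matching the explanation

# The final score multiplies the summed clause weights by coord(3/5):
print(0.11642026 * (3 / 5))  # ≈ 0.06985216
```

Only three of the five query clauses matched doc 97, hence the coord(3/5) penalty.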
    
    Abstract
    Organizing and providing access to the resources on the Internet has been a problem area in spite of the availability of sophisticated search engines and other software tools. There have been several attempts to organize the resources on the World Wide Web. Some of them have tried to use traditional library classification schemes such as the Library of Congress Classification, the Dewey Decimal Classification and others. However, there is a need to assign proper subject headings to them and present them in a logical or hierarchical sequence to cater to the need for browsing. This paper attempts to describe an experimental system designed to organize and provide access to web documents using a faceted pre-coordinate indexing system based on the Deep Structure Indexing System (DSIS) derived from POPSI (Postulate based Permuted Subject Indexing) of Bhattacharyya, and the facet analysis and chain indexing system of Ranganathan. A prototype software system has been designed to create a database of records specifying Web documents according to the Dublin Core and to input a faceted subject heading according to DSIS. Synonymous terms are added to the standard terms in the heading using appropriate symbols. Once the data are entered along with a description and the URL of the web document, the record is stored in the system. More than one faceted subject heading can be assigned to a record depending on the content of the original document. The system stores the surrogates and keeps the faceted subject headings separately after establishing a link. The search is carried out on index entries derived from the faceted subject heading using the chain indexing technique. If a single term is input, the system searches for its presence in the faceted subject headings and displays the subject headings in a sorted sequence reflecting an organizing sequence.
If the number of retrieved headings is too large (running into more than a page) the user has the option of entering another search term to be searched in combination. The system searches subject headings already retrieved and looks for those containing the second term. The retrieved faceted subject headings can be displayed and browsed. When the relevant subject heading is selected the system displays the records with their URLs. Using the URL, the original document on the web can be accessed. The prototype system, developed in a Windows NT environment using ASP and a web server, is under rigorous testing. The database and index management routines need further development.
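The chain-indexing step the abstract describes can be sketched roughly: each link of a pre-coordinate faceted heading becomes a lead term, qualified by its superordinate links in reverse order. A minimal illustration of Ranganathan's chain procedure, simplified; the example heading and separator are invented, not taken from the DSIS prototype:

```python
def chain_index_entries(heading, sep=" : "):
    """Derive chain index entries from a pre-coordinate subject heading:
    each link becomes a lead term, qualified by its superordinate links
    in reverse order (a simplified chain procedure)."""
    links = heading.split(sep)
    entries = []
    for i in range(len(links) - 1, -1, -1):
        qualifiers = links[i - 1::-1] if i > 0 else []
        entry = links[i] + (". " + ", ".join(qualifiers) if qualifiers else "")
        entries.append(entry)
    return entries

for e in chain_index_entries("Internet : Resources : Faceted indexing"):
    print(e)
# Faceted indexing. Resources, Internet
# Resources. Internet
# Internet
```

Searching any single entry term then leads back to the full heading, which is what lets the prototype display headings "in a sorted sequence reflecting an organizing sequence."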
    An interesting but somewhat confusing article telling how the writers described web pages with Dublin Core metadata, including a faceted classification, and built a system that lets users browse the collection through the facets. They seem to want to cover too much in a short article, and unnecessary space is given over to screen shots showing how Dublin Core metadata was entered. The screen shots of the resulting browsable system are, unfortunately, not as enlightening as one would hope, and there is no discussion of how the system was actually written or the technology behind it. Still, it could be worth reading as an example of such a system and how it is treated in journals.
    Source
    Knowledge organization. 29(2002) no.2, S.61-77
  2. Cox, A.M.: Flickr: a case study of Web2.0 (2008) 0.05
    0.053977143 = product of:
      0.0899619 = sum of:
        0.057061244 = weight(_text_:context in 2569) [ClassicSimilarity], result of:
          0.057061244 = score(doc=2569,freq=4.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.32380077 = fieldWeight in 2569, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2569)
        0.023299592 = weight(_text_:system in 2569) [ClassicSimilarity], result of:
          0.023299592 = score(doc=2569,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.17398985 = fieldWeight in 2569, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2569)
        0.009601062 = product of:
          0.028803186 = sum of:
            0.028803186 = weight(_text_:22 in 2569) [ClassicSimilarity], result of:
              0.028803186 = score(doc=2569,freq=2.0), product of:
                0.1488917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04251826 = queryNorm
                0.19345059 = fieldWeight in 2569, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2569)
          0.33333334 = coord(1/3)
      0.6 = coord(3/5)
    
    Abstract
    Purpose - The "photosharing" site Flickr is one of the most commonly cited examples used to define Web2.0. This paper aims to explore where Flickr's real novelty lies, examining its functionality and its place in the world of amateur photography. Several optimistic views of the impact of Flickr such as its facilitation of citizen journalism, "vernacular creativity" and in learning as an "affinity space" are evaluated. Design/methodology/approach - The paper draws on a wide range of sources including published interviews with its developers, user opinions expressed in forums, telephone interviews and content analysis of user profiles and activity. Findings - Flickr's development path passes from an innovative social game to a relatively familiar model of a web site, itself developed through intense user participation but later stabilising with the reassertion of a commercial relationship to the membership. The broader context of the impact of Flickr is examined by looking at the institutions of amateur photography and particularly the code of pictorialism promoted by the clubs and industry during the twentieth century. The nature of Flickr as a benign space is premised on the way the democratic potential of photography is controlled by such institutions. The limits of optimistic claims about Flickr are identified in the way that the system is designed to satisfy commercial purposes, continuing digital divides in access and the low interactivity and criticality on Flickr. Originality/value - Flickr is an interesting source of change, but can only be understood in the perspective of long-term development of the hobby and wider social processes. By setting Flickr in such a broad context, its significance and that of Web2.0 more generally can be fully assessed.
    Date
    30.12.2008 19:38:22
  3. Wenyin, L.; Chen, Z.; Li, M.; Zhang, H.: ¬A media agent for automatically building a personalized semantic index of Web media objects (2001) 0.05
    0.052788865 = product of:
      0.13197216 = sum of:
        0.12034631 = weight(_text_:index in 6522) [ClassicSimilarity], result of:
          0.12034631 = score(doc=6522,freq=10.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.64773786 = fieldWeight in 6522, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.046875 = fieldNorm(doc=6522)
        0.011625858 = product of:
          0.034877572 = sum of:
            0.034877572 = weight(_text_:29 in 6522) [ClassicSimilarity], result of:
              0.034877572 = score(doc=6522,freq=2.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.23319192 = fieldWeight in 6522, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6522)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Abstract
    A novel idea of a media agent is briefly presented, which can automatically build a personalized semantic index of Web media objects for each particular user. Because the Web is a rich source of multimedia data and the text content on Web pages is usually semantically related to the media objects on the same pages, the media agent can automatically collect the URLs and related text, and then build the index of the multimedia data, on behalf of the user whenever and wherever she accesses these multimedia data or their container Web pages. Moreover, the media agent can also use an off-line crawler to build the index for those multimedia objects that are relevant to the user's favorites but have not yet been accessed by the user. When the user wants to find these multimedia data once again, the semantic index facilitates text-based search for her.
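The index the agent builds can be pictured as an inverted index from terms in the surrounding page text to the media-object URLs on the same page. A hedged sketch of that idea; the data structure and the whitespace tokenizer are my own simplification, not the authors' implementation:

```python
from collections import defaultdict

def build_media_index(pages):
    """pages: list of (page_text, [media_urls]) pairs.
    Map each text term to the media objects it co-occurs with, on the
    assumption that surrounding text describes the media on the page."""
    index = defaultdict(set)
    for text, media_urls in pages:
        for term in text.lower().split():
            for url in media_urls:
                index[term].add(url)
    return index

idx = build_media_index([
    ("sunset over the beach", ["http://example.org/sunset.jpg"]),
    ("beach volleyball video", ["http://example.org/volley.mp4"]),
])
print(sorted(idx["beach"]))
# ['http://example.org/sunset.jpg', 'http://example.org/volley.mp4']
```

A text query such as "beach" then retrieves media objects that were never themselves text-searchable, which is the point of the semantic index.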
    Date
    29. 9.2001 17:37:16
  4. Chen, H.-M.; Cooper, M.D.: Stochastic modeling of usage patterns in a Web-based information system (2002) 0.04
    0.044585004 = product of:
      0.11146251 = sum of:
        0.062146567 = weight(_text_:index in 577) [ClassicSimilarity], result of:
          0.062146567 = score(doc=577,freq=6.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.3344904 = fieldWeight in 577, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.03125 = fieldNorm(doc=577)
        0.049315944 = weight(_text_:system in 577) [ClassicSimilarity], result of:
          0.049315944 = score(doc=577,freq=14.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.36826712 = fieldWeight in 577, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03125 = fieldNorm(doc=577)
      0.4 = coord(2/5)
    
    Abstract
    Users move from one state (or task) to another in an information system's labyrinth as they try to accomplish their work, and the amount of time they spend in each state varies. This article uses continuous-time stochastic models, mainly based on semi-Markov chains, to derive user state transition patterns (both in rates and in probabilities) in a Web-based information system. The methodology was demonstrated with 126,925 search sessions drawn from the transaction logs of the University of California's MELVYL® library catalog system (www.melvyl.ucop.edu). First, user sessions were categorized into six groups based on their similar use of the system. Second, by using a three-layer hierarchical taxonomy of the system Web pages, user sessions in each usage group were transformed into a sequence of states. All the usage groups but one have third-order sequential dependency in state transitions. The sole exception has fourth-order sequential dependency. The transition rates as well as transition probabilities of the semi-Markov model provide a background for interpreting user behavior probabilistically, at various levels of detail. Finally, the differences in derived usage patterns between usage groups were tested statistically. The test results showed that different groups have distinct patterns of system use. Knowledge of the extent of sequential dependency is beneficial because it allows one to predict a user's next move in a search space based on the past moves that have been made. It can also be used to help customize the design of the user interface to the system to facilitate interaction. The group CL6 labeled "knowledgeable and sophisticated usage" and the group CL7 labeled "unsophisticated usage" both had third-order sequential dependency and had the same most-frequently occurring search pattern: screen display, record display, screen display, and record display.
The group CL8, called "highly interactive use with good search results," had fourth-order sequential dependency, and its most frequently occurring pattern was the same as CL6 and CL7 with one more screen display action added. The group CL13, called "known-item searching," had third-order sequential dependency, and its most frequently occurring pattern was index access, search with retrievals, screen display, and record display. The groups CL14, called "help-intensive searching," and CL18, called "relatively unsuccessful," both had third-order sequential dependency, and for both groups the most frequently occurring pattern was index access, search without retrievals, index access, and, again, search without retrievals.
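The semi-Markov machinery in this abstract rests on estimating, from each session's state sequence, the probability of moving from one state to another. A first-order sketch of that estimation (the study itself fits third- and fourth-order dependencies and adds sojourn times; the session data below is invented, loosely echoing the reported patterns):

```python
from collections import defaultdict

def transition_probs(sessions):
    """Estimate first-order transition probabilities from state sequences."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sessions:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
            for a, nxt in counts.items()}

# Invented toy sessions over the states named in the abstract:
sessions = [
    ["screen_display", "record_display", "screen_display", "record_display"],
    ["index_access", "search", "screen_display", "record_display"],
]
p = transition_probs(sessions)
print(p["screen_display"]["record_display"])  # 1.0 in this toy data
```

Higher-order dependency means conditioning on the last three or four states instead of one, i.e. keying `counts` by a tuple of preceding states rather than a single state.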
  5. Devadason, F.J.; Intaraksa, N.; Patamawongjariya, P.; Desai, K.: Faceted indexing application for organizing and accessing internet resources (2003) 0.04
    0.039080456 = product of:
      0.09770114 = sum of:
        0.03588033 = weight(_text_:index in 3966) [ClassicSimilarity], result of:
          0.03588033 = score(doc=3966,freq=2.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.1931181 = fieldWeight in 3966, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.03125 = fieldNorm(doc=3966)
        0.06182081 = weight(_text_:system in 3966) [ClassicSimilarity], result of:
          0.06182081 = score(doc=3966,freq=22.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.46164727 = fieldWeight in 3966, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03125 = fieldNorm(doc=3966)
      0.4 = coord(2/5)
    
    Abstract
    Organizing and providing access to the resources on the Internet has been a problem area in spite of the availability of sophisticated search engines and other software tools. There have been several attempts to organize the resources on the WWW. Some of them have tried to use traditional library classification schemes such as the Library of Congress Classification, the Dewey Decimal Classification and others. However, there is a need to assign proper subject headings to them and present them in a logical or hierarchical sequence to cater to the need for browsing. This paper attempts to describe an experimental system designed to organize and provide access to web documents using a faceted pre-coordinate indexing system based on the Deep Structure Indexing System (DSIS) derived from POPSI (Postulate based Permuted Subject Indexing) of Bhattacharyya, and the facet analysis and chain indexing system of Ranganathan. A prototype software system has been designed to create a database of records specifying Web documents according to the Dublin Core and input a faceted subject heading according to DSIS. Synonymous terms are added to the standard terms in the heading using appropriate symbols. Once the data are entered along with a description and URL of the Web document, the record is stored in the system. More than one faceted subject heading can be assigned to a record depending on the content of the original document. The system stores the surrogates and keeps the faceted subject headings separately after establishing a link. Search is carried out on index entries derived from the faceted subject heading using the chain indexing technique. If a single term is input, the system searches for its presence in the faceted subject headings and displays the subject headings in a sorted sequence reflecting an organizing sequence.
If the number of retrieved headings is too large (running into more than a page) then the user has the option of entering another search term to be searched in combination. The system searches subject headings already retrieved and looks for those containing the second term. The retrieved faceted subject headings can be displayed and browsed. When the relevant subject heading is selected the system displays the records with their URLs. Using the URL, the original document on the web can be accessed. The prototype system, developed in a Windows NT environment using ASP and a web server, is under rigorous testing. The database and index management routines need further development.
  6. Andricik, M.: Metasearch engine for Austrian research information (2002) 0.03
    0.030541632 = product of:
      0.07635408 = sum of:
        0.06279058 = weight(_text_:index in 3600) [ClassicSimilarity], result of:
          0.06279058 = score(doc=3600,freq=2.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.33795667 = fieldWeight in 3600, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3600)
        0.013563501 = product of:
          0.0406905 = sum of:
            0.0406905 = weight(_text_:29 in 3600) [ClassicSimilarity], result of:
              0.0406905 = score(doc=3600,freq=2.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.27205724 = fieldWeight in 3600, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3600)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Abstract
    The majority of Austrian research-relevant information available on the Web these days can be indexed by web full-text search engines. But there are still several sources of valuable information which cannot be indexed directly. One effective way of getting this information to end users is the metasearch technique. For better understanding it is important to say that a metasearch engine does not use its own index. It collects search results provided by other search engines, and builds a common hit list for end users. Our prototype provides access to five sources of research-relevant information available on the Austrian web.
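The point that a metasearch engine keeps no index of its own but merges other engines' hit lists can be sketched as a round-robin merge that de-duplicates by URL. The merging policy here is my assumption for illustration; the abstract does not specify how the prototype combines results:

```python
from itertools import zip_longest

def merge_hit_lists(result_lists):
    """Round-robin merge of ranked hit lists from several engines,
    keeping only the first occurrence of each URL."""
    seen, merged = set(), []
    for rank_slice in zip_longest(*result_lists):
        for hit in rank_slice:
            if hit is not None and hit not in seen:
                seen.add(hit)
                merged.append(hit)
    return merged

engine_a = ["u1", "u2", "u3"]
engine_b = ["u2", "u4"]
print(merge_hit_lists([engine_a, engine_b]))  # ['u1', 'u2', 'u4', 'u3']
```

Interleaving by rank rather than concatenating lists gives each source an equal voice in the common hit list, which matters when the sources cannot share a comparable relevance score.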
    Source
    Gaining insight from research information (CRIS2002): Proceedings of the 6th International Conference on Current Research Information Systems, University of Kassel, August 29 - 31, 2002. Eds: W. Adamczak u. A. Nase
  7. Yang, C.C.; Chung, A.: ¬A personal agent for Chinese financial news on the Web (2002) 0.03
    0.027260004 = product of:
      0.068150006 = sum of:
        0.044850416 = weight(_text_:index in 205) [ClassicSimilarity], result of:
          0.044850416 = score(doc=205,freq=2.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.24139762 = fieldWeight in 205, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0390625 = fieldNorm(doc=205)
        0.023299592 = weight(_text_:system in 205) [ClassicSimilarity], result of:
          0.023299592 = score(doc=205,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.17398985 = fieldWeight in 205, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=205)
      0.4 = coord(2/5)
    
    Abstract
    As the Web has become a major channel of information dissemination, many newspapers expand their services by providing electronic versions of news information on the Web. However, most investors find it difficult to search for the financial information of interest from the huge Web information space (the information overload problem). In this article, we present a personal agent that utilizes user profiles and user relevance feedback to search for Chinese Web financial news articles on behalf of users. A Chinese indexing component is developed to index the continuously fetched Chinese financial news articles. User profiles capture the basic knowledge of user preferences based on the sources of news articles, the regions of the news reported, the categories of industries related, the listed companies, and user-specified keywords. User feedback captures the semantics of the user-rated news articles. The search engine ranks the top 20 news articles that users are most interested in and reports them to the user daily or on demand. Experiments are conducted to measure the performance of the agent based on the inputs from user profiles and user feedback. It shows that simply using the user profiles does not increase the precision of the retrieval. However, user relevance feedback helps to increase the performance of the retrieval as the user interacts with the system, until it reaches the optimal performance. Combining both user profiles and user relevance feedback produces the best performance.
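The relevance-feedback step described here is commonly implemented as a Rocchio-style update that pulls the profile's term weights toward rated-relevant articles and away from non-relevant ones. A hedged sketch of that standard technique; the weights, vectors, and parameter values below are illustrative, not taken from the paper:

```python
def rocchio_update(profile, relevant, nonrelevant,
                   alpha=1.0, beta=0.75, gamma=0.15):
    """Move a term-weight profile toward relevant documents and away
    from non-relevant ones (Rocchio's formula over sparse dicts)."""
    terms = set(profile) | {t for d in relevant + nonrelevant for t in d}
    updated = {}
    for t in terms:
        rel = sum(d.get(t, 0.0) for d in relevant) / max(len(relevant), 1)
        non = sum(d.get(t, 0.0) for d in nonrelevant) / max(len(nonrelevant), 1)
        updated[t] = alpha * profile.get(t, 0.0) + beta * rel - gamma * non
    return updated

profile = {"stock": 1.0, "bank": 0.5}
rated_relevant = [{"stock": 1.0, "dividend": 1.0}]
rated_nonrelevant = [{"bank": 1.0}]
p = rocchio_update(profile, rated_relevant, rated_nonrelevant)
print(round(p["dividend"], 2))  # 0.75 - a new term learned from feedback
```

Repeating the update as the user rates more articles is one plausible reading of the finding that performance improves "as the user interacts with the system."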
  8. Hunt, S.: ¬The cataloguing of internet resources (2001) 0.03
    0.025116233 = product of:
      0.12558116 = sum of:
        0.12558116 = weight(_text_:index in 4159) [ClassicSimilarity], result of:
          0.12558116 = score(doc=4159,freq=2.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.67591333 = fieldWeight in 4159, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.109375 = fieldNorm(doc=4159)
      0.2 = coord(1/5)
    
    Source
    Catalogue and index. 2001, no.141, S.1-5
  9. James, J.: Digital preparedness versus the digital divide : a confusion of means and ends (2008) 0.03
    0.025116233 = product of:
      0.12558116 = sum of:
        0.12558116 = weight(_text_:index in 1616) [ClassicSimilarity], result of:
          0.12558116 = score(doc=1616,freq=8.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.67591333 = fieldWeight in 1616, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1616)
      0.2 = coord(1/5)
    
    Abstract
    Composite indexes of digital preparedness, such as the Networked Readiness Index (NRI) and the Digital Opportunity Index (DOI), have caused a great deal of confusion in the more general literature on the digital divide. For whereas one would expect preparedness to be an input into the utilization of information technologies (the digital divide), the recent indicators add inputs and outputs, or means and ends. I suggest instead two separate indexes for means and ends, which can be more usefully related to one another in terms of productivity (one index divided by the other), or as dependent and independent variables (one index in a functional relationship to the other).
  10. Cordeiro, M.I.; Slavic, A.: Data models for knowledge organization tools : evolution and perspectives (2003) 0.02
    0.024017572 = product of:
      0.06004393 = sum of:
        0.04841807 = weight(_text_:context in 2632) [ClassicSimilarity], result of:
          0.04841807 = score(doc=2632,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.27475408 = fieldWeight in 2632, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.046875 = fieldNorm(doc=2632)
        0.011625858 = product of:
          0.034877572 = sum of:
            0.034877572 = weight(_text_:29 in 2632) [ClassicSimilarity], result of:
              0.034877572 = score(doc=2632,freq=2.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.23319192 = fieldWeight in 2632, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2632)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Abstract
    This paper focuses on the need for knowledge organization (KO) tools, such as library classifications, thesauri and subject heading systems, to be fully disclosed and available in the open network environment. The authors look at the place and value of traditional library knowledge organization tools in relation to the technical environment and expectations of the Semantic Web. Future requirements in this context are explored, stressing the need for KO systems to support semantic interoperability. In order to be fully shareable KO tools need to be reframed and reshaped in terms of conceptual and data models. The authors suggest that some useful approaches to this already exist in methodological and technical developments within the fields of ontology modelling and lexicographic and terminological data interchange.
    Date
    29. 8.2004 9:26:23
  11. Frandsen, T.F.; Wouters, P.: Turning working papers into journal articles : an exercise in microbibliometrics (2009) 0.02
    0.02397574 = product of:
      0.059939347 = sum of:
        0.04841807 = weight(_text_:context in 2757) [ClassicSimilarity], result of:
          0.04841807 = score(doc=2757,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.27475408 = fieldWeight in 2757, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.046875 = fieldNorm(doc=2757)
        0.011521274 = product of:
          0.03456382 = sum of:
            0.03456382 = weight(_text_:22 in 2757) [ClassicSimilarity], result of:
              0.03456382 = score(doc=2757,freq=2.0), product of:
                0.1488917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04251826 = queryNorm
                0.23214069 = fieldWeight in 2757, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2757)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Abstract
    This article focuses on the process of scientific and scholarly communication. Data on open access publications on the Internet not only provides a supplement to the traditional citation indexes but also enables analysis of the microprocesses and daily practices that constitute scientific communication. This article focuses on a stage in the life cycle of scientific and scholarly information that precedes the publication of formal research articles in the scientific and scholarly literature. Binomial logistic regression models are used to analyse precise mechanisms at work in the transformation of a working paper (WP) into a journal article (JA) in the field of economics. The study unveils a fine-grained process of adapting WPs to their new context as JAs by deleting and adding literature references, which perhaps can be best captured by the term sculpting.
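The binomial logistic regression the abstract mentions models a yes/no outcome (does a working paper become a journal article?) as a logistic function of covariates. A minimal sketch of the model form only; the covariate and coefficient values are invented, not the study's estimates:

```python
import math

def p_published(x, b0=-1.0, b1=0.8):
    """Binomial logistic model: probability that a working paper becomes
    a journal article, given one covariate x (coefficients invented)."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))

# With these invented coefficients, the odds rise with the covariate:
print(round(p_published(0.0), 3))  # 0.269
print(round(p_published(3.0), 3))  # 0.802
```

The study fits such models to reference deletions and additions, so each covariate would be a feature of the WP-to-JA "sculpting" process.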
    Date
    22. 3.2009 18:59:25
  12. Bharat, K.: SearchPad : explicit capture of search context to support Web search (2000) 0.02
    0.0225951 = product of:
      0.1129755 = sum of:
        0.1129755 = weight(_text_:context in 3432) [ClassicSimilarity], result of:
          0.1129755 = score(doc=3432,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.64109284 = fieldWeight in 3432, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.109375 = fieldNorm(doc=3432)
      0.2 = coord(1/5)
    
  13. Herrmann, C.: Partikulare Konkretion universal zugänglicher Information : Beobachtungen zur Konzeptionierung fachlicher Internet-Seiten am Beispiel der Theologie (2000) 0.02
    0.02160399 = product of:
      0.10801995 = sum of:
        0.10801995 = product of:
          0.16202992 = sum of:
            0.081381 = weight(_text_:29 in 4364) [ClassicSimilarity], result of:
              0.081381 = score(doc=4364,freq=2.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.5441145 = fieldWeight in 4364, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4364)
            0.08064892 = weight(_text_:22 in 4364) [ClassicSimilarity], result of:
              0.08064892 = score(doc=4364,freq=2.0), product of:
                0.1488917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04251826 = queryNorm
                0.5416616 = fieldWeight in 4364, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4364)
          0.6666667 = coord(2/3)
      0.2 = coord(1/5)
    
    Date
    22. 1.2000 19:29:08
  14. Harms, I.; Schweibenz, W.: Usability engineering methods for the Web (2000) 0.02
    0.0215282 = product of:
      0.107641 = sum of:
        0.107641 = weight(_text_:index in 5482) [ClassicSimilarity], result of:
          0.107641 = score(doc=5482,freq=8.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.5793543 = fieldWeight in 5482, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.046875 = fieldNorm(doc=5482)
      0.2 = coord(1/5)
    
    Abstract
    The paper presents the results of a study on usability methods for evaluating Web sites. It summarizes the "Heuristics for Web Communications," and reports the practical experiences with these heuristics, contrasting them with the "Keevil Index" and combining them with user testing with thinking aloud. It concludes that working with the "Heuristics for Web Communications" takes more time and effort than working with the "Keevil Index," but produces more consistent results. The heuristics proved to be applicable both in heuristic evaluation and in combination with user testing.
    Content
     The paper presents a study of evaluation methods for Web usability. It describes the "Heuristics for Web Communications" and reports on practical experiences with these heuristics, which are compared with the "Keevil Index" and combined with thinking-aloud user tests. The results show that an evaluation with the described heuristics requires more time and effort than the "Keevil Index" but yields more consistent results. Overall, the heuristics proved to be a suitable evaluation method both in expert-centred evaluation and in combination with user testing.
  15. Chung, Y.-M.; Noh, Y.-H.: Developing a specialized directory system by automatically classifying Web documents (2003) 0.02
    0.020466631 = product of:
      0.05116658 = sum of:
        0.03954072 = weight(_text_:system in 1566) [ClassicSimilarity], result of:
          0.03954072 = score(doc=1566,freq=4.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.29527056 = fieldWeight in 1566, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=1566)
        0.011625858 = product of:
          0.034877572 = sum of:
            0.034877572 = weight(_text_:29 in 1566) [ClassicSimilarity], result of:
              0.034877572 = score(doc=1566,freq=2.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.23319192 = fieldWeight in 1566, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1566)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Abstract
     This study developed a specialized directory system using an automatic classification technique. Economics was selected as the subject field for the classification experiments with Web documents. The classification scheme of the directory follows the DDC, and subject terms representing each class number or subject category were selected from the DDC table to construct a representative term dictionary. In collecting and classifying the Web documents, various strategies were tested in order to find the optimal thresholds. In the classification experiments, Web documents in economics were classified into a total of 757 hierarchical subject categories built from the DDC scheme. The first and second experiments using the representative term dictionary resulted in precision ratios of 77 and 60%, respectively. The third experiment, employing a machine learning-based k-nearest neighbours (kNN) classifier in a closed experimental setting, achieved a precision ratio of 96%. This implies that it is possible to enhance the classification performance by applying a hybrid method combining a dictionary-based technique and a kNN classifier.
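     The kNN step described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: it uses cosine similarity over simple term-frequency vectors and a toy training set with hypothetical DDC-style category labels.

     ```python
     import math
     from collections import Counter

     def cosine(a: Counter, b: Counter) -> float:
         """Cosine similarity between two term-frequency vectors."""
         dot = sum(a[t] * b[t] for t in a if t in b)
         na = math.sqrt(sum(v * v for v in a.values()))
         nb = math.sqrt(sum(v * v for v in b.values()))
         return dot / (na * nb) if na and nb else 0.0

     def knn_classify(doc: str, training, k: int = 3) -> str:
         """Assign the majority category among the k most similar training docs."""
         vec = Counter(doc.lower().split())
         ranked = sorted(training,
                         key=lambda ex: cosine(vec, Counter(ex[0].lower().split())),
                         reverse=True)
         votes = Counter(cat for _, cat in ranked[:k])
         return votes.most_common(1)[0][0]

     # Toy training set: (text, category) pairs -- hypothetical data
     training = [
         ("monetary policy interest rates inflation", "332"),
         ("central bank inflation targeting", "332"),
         ("labour market wages unemployment", "331"),
         ("trade unions collective bargaining wages", "331"),
     ]
     print(knn_classify("inflation and interest rates policy", training))  # 332
     ```

     A production classifier would add stemming, stop-word removal, and tf-idf weighting, but the voting mechanism is the same.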
    Source
    Journal of information science. 29(2003) no.2, S.117-126
  16. Hert, C.A.; Jacob, E.K.; Dawson, P.: ¬A usability assessment of online indexing structures in the networked environment (2000) 0.02
    0.020057717 = product of:
      0.100288585 = sum of:
        0.100288585 = weight(_text_:index in 5158) [ClassicSimilarity], result of:
          0.100288585 = score(doc=5158,freq=10.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.5397815 = fieldWeight in 5158, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5158)
      0.2 = coord(1/5)
    
    Abstract
     Usability of Web sites has become an increasingly important area of research as Web sites proliferate and problems with use are noted. Generally, aspects of Web sites that have been investigated focus on such areas as overall design and navigation. The exploratory study reported on here investigates one specific component of a Web site: the index structure. By employing index usability metrics developed by Liddy and Jörgensen (1993; Jörgensen & Liddy, 1996) and modified to accommodate a hypertext environment, the study compared the effectiveness and efficiency of 20 subjects who used one existing index (the A-Z index on the FedStats Web site at http://www.fedstats.gov) and three experimental variants to complete five researcher-generated tasks. User satisfaction with the indexes was also evaluated. The findings indicate that a hypertext index with multiple access points for each concept, all linked to the same resource, led to greater effectiveness and efficiency of retrieval on almost all measures. Satisfaction measures were more variable. The study offers insight into potential improvements in the design of Web-based indexes and provides a preliminary assessment of the validity of the measures employed.
  17. Sundar, S.S.; Knobloch-Westerwick, S.; Hastall, M.R.: News cues : information scent and cognitive heuristics (2007) 0.02
    0.01997978 = product of:
      0.049949452 = sum of:
        0.040348392 = weight(_text_:context in 143) [ClassicSimilarity], result of:
          0.040348392 = score(doc=143,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.22896172 = fieldWeight in 143, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.0390625 = fieldNorm(doc=143)
        0.009601062 = product of:
          0.028803186 = sum of:
            0.028803186 = weight(_text_:22 in 143) [ClassicSimilarity], result of:
              0.028803186 = score(doc=143,freq=2.0), product of:
                0.1488917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04251826 = queryNorm
                0.19345059 = fieldWeight in 143, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=143)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Abstract
     Google News and other newsbots have automated the process of news selection, providing Internet users with a virtually limitless array of news and public information dynamically culled from thousands of news organizations all over the world. In order to help users cope with the resultant overload of information, news leads are typically accompanied by three cues: (a) the name of the primary source from which the headline and lead were borrowed, (b) the time elapsed since the story broke, and (c) the number of related articles written about this story by other news organizations tracked by the newsbot. This article investigates the psychological significance of these cues by positing that the information scent transmitted by each cue triggers a distinct heuristic (mental shortcut) that tends to influence online users' perceptions of a given news item, with implications for their assessment of the item's relevance to their information needs and interests. A large 2 x 3 x 6 within-subjects online experiment (N = 523) systematically varied two levels of the source credibility cue, three levels of the upload recency cue and six levels of the number-of-related-articles cue in an effort to investigate their effects upon perceived message credibility, newsworthiness, and likelihood of clicking on the news lead. Results showed evidence for a source primacy effect, and some indication of a cue-cumulation effect when source credibility is low. Findings are discussed in the context of machine and bandwagon heuristics.
    Date
    7. 3.2007 16:22:24
  18. Cummings, J.; Johnson, R.: ¬The use and usability of SFX : context-sensitive reference linking (2003) 0.02
    0.019567933 = product of:
      0.09783966 = sum of:
        0.09783966 = weight(_text_:context in 4135) [ClassicSimilarity], result of:
          0.09783966 = score(doc=4135,freq=6.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.5552027 = fieldWeight in 4135, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4135)
      0.2 = coord(1/5)
    
    Abstract
     SFX is an XML-based product designed to inter-link electronic resources with other resources in a context-sensitive manner. SFX was first developed at the University of Ghent by Herbert Van de Sompel and has been released as a commercial product by Ex Libris. Usage statistics garnered from SFX's statistics module since its implementation in July 2001 are discussed in the context of an academic research library environment. The results from usability testing conducted at Washington State University are reported. The usage statistics demonstrated a pattern of increasing overall use, with exceptionally heavy use coming from FirstSearch databases.
  19. Ardö, A.; Godby, J.; Houghton, A.; Koch, T.; Reighart, R.; Thompson, R.; Vizine-Goetz, D.: Browsing engineering resources on the Web : a general knowledge organization scheme (Dewey) vs. a special scheme (EI) (2000) 0.02
    0.01864397 = product of:
      0.09321985 = sum of:
        0.09321985 = weight(_text_:index in 86) [ClassicSimilarity], result of:
          0.09321985 = score(doc=86,freq=6.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.50173557 = fieldWeight in 86, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.046875 = fieldNorm(doc=86)
      0.2 = coord(1/5)
    
    Abstract
     Under the auspices of the Desire II project, researchers at NetLab and OCLC are providing searching and browsing of a test collection of engineering documents on the Web. The goal of the project is to explore simple methods of automatic classification to provide subject browsing of a robot-generated engineering index. At NetLab the documents are automatically classified and organized using an engineering-specific scheme, the Engineering Index (Ei) Thesaurus and Classification; at OCLC the Dewey Decimal Classification (DDC), a general knowledge organization scheme, is being used.
    Object
    Engineering Index
  20. Sauer, D.: Alles schneller finden (2001) 0.02
    0.018641815 = product of:
      0.09320907 = sum of:
        0.09320907 = product of:
          0.1398136 = sum of:
            0.082207225 = weight(_text_:29 in 6835) [ClassicSimilarity], result of:
              0.082207225 = score(doc=6835,freq=4.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.5496386 = fieldWeight in 6835, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.078125 = fieldNorm(doc=6835)
            0.057606373 = weight(_text_:22 in 6835) [ClassicSimilarity], result of:
              0.057606373 = score(doc=6835,freq=2.0), product of:
                0.1488917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04251826 = queryNorm
                0.38690117 = fieldWeight in 6835, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=6835)
          0.6666667 = coord(2/3)
      0.2 = coord(1/5)
    
    Date
    1. 8.1997 14:03:29
    11.11.2001 17:25:22
    Source
    Com!online. 2001, H.12, S.24-29
