Search (47 results, page 3 of 3)

  • Active filter: theme_ss:"Multimedia"
  • Active filter: type_ss:"a"
  1. Benitez, A.B.; Zhong, D.; Chang, S.-F.: Enabling MPEG-7 structural and semantic descriptions in retrieval applications (2007) 0.01
    0.008167865 = product of:
      0.01633573 = sum of:
        0.01633573 = product of:
          0.03267146 = sum of:
            0.03267146 = weight(_text_:systems in 518) [ClassicSimilarity], result of:
              0.03267146 = score(doc=518,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.2037246 = fieldWeight in 518, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.046875 = fieldNorm(doc=518)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
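    The indented tree above (and the analogous trees under the entries that follow) is Lucene's ClassicSimilarity (TF-IDF) explain output for the query term "systems". As a sketch of how the displayed numbers combine, reconstructed here from the values shown rather than taken from the source page:

    \[
    \mathrm{score} = \underbrace{0.5 \cdot 0.5}_{\mathrm{coord}} \times \underbrace{\left(\sqrt{\mathrm{tf}} \cdot \mathrm{idf} \cdot \mathrm{fieldNorm}\right)}_{\mathrm{fieldWeight}} \times \underbrace{\left(\mathrm{idf} \cdot \mathrm{queryNorm}\right)}_{\mathrm{queryWeight}}
    \]
    \[
    0.008167865 \approx 0.25 \times \left(1.4142135 \cdot 3.0731742 \cdot 0.046875\right) \times \left(3.0731742 \cdot 0.052184064\right)
    \]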
    
    Abstract
    The MPEG-7 standard supports the description of both the structure and the semantics of multimedia; however, the generation and consumption of MPEG-7 structural and semantic descriptions are outside the scope of the standard. This article presents two research prototype systems that demonstrate the generation and consumption of MPEG-7 structural and semantic descriptions in retrieval applications. The active system for MPEG-4 video object simulation (AMOS) is a video object segmentation and retrieval system that segments, tracks, and models objects in videos (e.g., person, car) as a set of regions with corresponding visual features and spatiotemporal relations. The region-based model provides an effective base for similarity retrieval of video objects. The second system, the Intelligent Multimedia Knowledge Application (IMKA), uses the novel MediaNet framework for representing semantic and perceptual information about the world using multimedia. MediaNet knowledge bases can be constructed automatically from annotated collections of multimedia data and used to enhance the retrieval of multimedia.
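    To make the region-based representation described above concrete, here is a minimal, hypothetical Python sketch of a video-object model of that kind; all class and field names are illustrative assumptions, not the actual AMOS data structures.

    # Hypothetical sketch of a region-based video-object model of the kind the
    # abstract describes (illustrative only; not the actual AMOS implementation).
    from dataclasses import dataclass, field

    @dataclass
    class Region:
        region_id: int
        color_hist: list[float]          # visual feature: colour histogram
        texture: list[float]             # visual feature: texture descriptor
        bbox: tuple[int, int, int, int]  # x, y, width, height in a given frame

    @dataclass
    class SpatioTemporalRelation:
        source: int    # region_id
        target: int    # region_id
        relation: str  # e.g. "left-of", "overlaps", "before"

    @dataclass
    class VideoObject:
        label: str  # e.g. "person", "car"
        regions: list[Region] = field(default_factory=list)
        relations: list[SpatioTemporalRelation] = field(default_factory=list)

    def similarity(a: VideoObject, b: VideoObject) -> float:
        """Toy similarity: average colour-histogram intersection over paired regions."""
        scores = [sum(min(x, y) for x, y in zip(ra.color_hist, rb.color_hist))
                  for ra, rb in zip(a.regions, b.regions)]
        return sum(scores) / len(scores) if scores else 0.0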
  2. Watters, C.R.; Shepherd, M.A.; Burkowski, F.J.: Electronic news delivery project (1998) 0.01
    0.0068065543 = product of:
      0.013613109 = sum of:
        0.013613109 = product of:
          0.027226217 = sum of:
            0.027226217 = weight(_text_:systems in 444) [ClassicSimilarity], result of:
              0.027226217 = score(doc=444,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.1697705 = fieldWeight in 444, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=444)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    News is information about recent events of general interest, especially as currently reported by newspapers, periodicals, radio, or television. News is the quintessential multimedia data. While newspaper editors (human and/or algorithmic) may still define the core content of electronic news, new communication technologies will enable the integration of news from a wide variety of sources and provide access to supplemental material from enormous archives of electronic news data (text, photos, and video) in digital libraries, as well as to the continual streams of newly created data. The goal of electronic news delivery in this context is, however, distinguishable from both news groups and document retrieval. Electronic news promises to deliver to the reader an edited collage of recent events from wide domains in a manner that is both comprehensive and personalized. As part of a long-term research project into the design of future news delivery systems, we have developed an overall architecture and several prototypes. These prototypes are presented in the article, along with a discussion of issues related to the presentation metaphor and to the functionality of electronic news delivery services. A prototype was demonstrated at the 1995 G-7 Economic Summit in Halifax, Canada, integrating newspaper text and photographs with television news video clips across an ATM network.
  3. Tjondronegoro, D.; Spink, A.; Jansen, B.J.: A study and comparison of multimedia Web searching : 1997-2006 (2009) 0.01
    0.0068065543 = product of:
      0.013613109 = sum of:
        0.013613109 = product of:
          0.027226217 = sum of:
            0.027226217 = weight(_text_:systems in 3090) [ClassicSimilarity], result of:
              0.027226217 = score(doc=3090,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.1697705 = fieldWeight in 3090, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3090)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Searching for multimedia is an important activity for users of Web search engines. Studying users' interactions with Web search engine multimedia buttons, including image, audio, and video, is important for the development of multimedia Web search systems. This article provides results from a Weblog analysis study of multimedia Web searching by Dogpile users in 2006. The study analyzes the (a) duration, size, and structure of Web search queries and sessions; (b) user demographics; (c) most popular multimedia Web searching terms; and (d) use of advanced Web search techniques, including Boolean and natural language. The current study's findings are compared with results from previous multimedia Web searching studies. The key findings are: (a) since 1997, image search has consistently been the dominant media type searched, followed by audio and video; (b) multimedia search duration is still short (>50% of searching episodes are <1 min) and uses few search terms; (c) many multimedia searches are for information about people, especially in audio search; and (d) multimedia search has begun to shift from entertainment to other categories such as medical, sports, and technology (based on the most repeated terms). Implications for the design of Web multimedia search engines are discussed.
  4. Villa, R.; Jose, J.M.: A study of awareness in multimedia search (2012) 0.01
    0.0068065543 = product of:
      0.013613109 = sum of:
        0.013613109 = product of:
          0.027226217 = sum of:
            0.027226217 = weight(_text_:systems in 2743) [ClassicSimilarity], result of:
              0.027226217 = score(doc=2743,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.1697705 = fieldWeight in 2743, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2743)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Awareness of another's activity is an important aspect of facilitating collaboration between users, enabling an "understanding of the activities of others" (Dourish & Bellotti, 1992). In this paper we investigate the role of awareness and its effect on search performance and behaviour in collaborative multimedia retrieval. We focus on the scenario where two users are searching at the same time on the same task and, via an interface, can see the activity of the other user. The main research question asks: does awareness of another searcher aid a user when carrying out a multimedia search session? To encourage awareness, an experimental study was designed in which two users were asked to compete to find as many relevant video shots as possible under different awareness conditions. These were individual search (no awareness), mutual awareness (where both users could see the other's search screen), and unbalanced awareness (where one user could see the other's screen, but not vice versa). Twelve pairs of users were recruited, and the four worst-performing TRECVID 2006 search topics were used as search tasks under four different awareness conditions. We present the results of this study, followed by a discussion of the implications for multimedia information retrieval systems.
  5. Branch, F.; Arias, T.; Kennah, J.; Phillips, R.; Windleharth, T.; Lee, J.H.: Representing transmedia fictional worlds through ontology (2017) 0.01
    0.0068065543 = product of:
      0.013613109 = sum of:
        0.013613109 = product of:
          0.027226217 = sum of:
            0.027226217 = weight(_text_:systems in 3958) [ClassicSimilarity], result of:
              0.027226217 = score(doc=3958,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.1697705 = fieldWeight in 3958, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3958)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Currently, there is no structured data standard for representing elements commonly found in transmedia fictional worlds. Although there are websites dedicated to individual universes, the information found on these sites separates out the various formats, concentrates on only the bibliographic aspects of the material, and is only searchable with full text. We have created an ontological model that will allow various user groups interested in transmedia to search for and retrieve the information contained in these worlds based upon their structure. We conducted a domain analysis and user studies based on the contents of Harry Potter, Lord of the Rings, the Marvel Universe, and Star Wars in order to build a new model using the Web Ontology Language (OWL) and an artificial intelligence reasoning engine. This model can infer connections between transmedia properties such as characters, elements of power, items, places, events, and so on. It will facilitate better search and retrieval of the information contained within these vast story universes for all users interested in them. The result of this project is an OWL ontology reflecting real user needs based upon user research, which is intuitive for users and can be used by artificial intelligence systems.
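    As an illustration of the kind of model the abstract describes, the following is a minimal sketch of a few OWL classes, one property, and example individuals expressed with Python's rdflib; the class and property names and the namespace URL are assumptions made for illustration, not the authors' published ontology.

    # Minimal, hypothetical sketch of a transmedia ontology fragment in OWL,
    # built with rdflib; names are illustrative, not the authors' actual model.
    from rdflib import Graph, Literal, Namespace, OWL, RDF, RDFS

    EX = Namespace("http://example.org/transmedia#")  # assumed namespace
    g = Graph()
    g.bind("ex", EX)

    # Classes for elements commonly found in fictional worlds
    for cls in ("Character", "Item", "Place", "Event", "Power"):
        g.add((EX[cls], RDF.type, OWL.Class))

    # An object property linking characters to the powers they wield
    g.add((EX.hasPower, RDF.type, OWL.ObjectProperty))
    g.add((EX.hasPower, RDFS.domain, EX.Character))
    g.add((EX.hasPower, RDFS.range, EX.Power))

    # Example individuals (drawn from one of the universes named above)
    g.add((EX.Gandalf, RDF.type, EX.Character))
    g.add((EX.Gandalf, RDFS.label, Literal("Gandalf")))
    g.add((EX.Magic, RDF.type, EX.Power))
    g.add((EX.Gandalf, EX.hasPower, EX.Magic))

    print(g.serialize(format="turtle"))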
  6. MacFarlane, A.; Missaoui, S.; Frankowska-Takhari, S.: On machine learning and knowledge organization in multimedia information retrieval (2020) 0.01
    0.0068065543 = product of:
      0.013613109 = sum of:
        0.013613109 = product of:
          0.027226217 = sum of:
            0.027226217 = weight(_text_:systems in 5732) [ClassicSimilarity], result of:
              0.027226217 = score(doc=5732,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.1697705 = fieldWeight in 5732, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5732)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Recent technological developments have increased the use of machine learning to solve many problems, including many in information retrieval. Multimedia information retrieval as a problem represents a significant challenge to machine learning as a technological solution, but some problems can still be addressed by using appropriate AI techniques. We review the technological developments and provide a perspective on the use of machine learning in conjunction with knowledge organization to address multimedia IR needs. The semantic gap in multimedia IR remains a significant problem in the field, and solutions to it are many years off. However, new technological developments allow the use of knowledge organization and machine learning in multimedia search systems and services. Specifically, we argue that improved detection of some classes of low-level features in images, music, and video can be used in conjunction with knowledge organization to tag or label multimedia content for better retrieval performance. We provide an overview of the use of knowledge organization schemes in machine learning and make recommendations to information professionals on the use of this technology with knowledge organization techniques to solve multimedia IR problems. We introduce a five-step process model that extracts features from multimedia objects (Step 1), drawing on both knowledge organization (Step 1a) and machine learning (Step 1b), merges them (Step 2), and creates an index of those multimedia objects (Step 3). We also outline further steps for creating an application that utilizes the multimedia objects (Step 4) and for maintaining and updating the database of features on those objects (Step 5).
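    As a rough sketch of the five-step model summarized above (function names, dictionary keys, and feature formats are illustrative assumptions, not the authors' specification):

    # Hypothetical sketch of the five-step process model described in the abstract.
    def extract_ko_features(obj):   # Step 1a: labels from a knowledge organization scheme
        return set(obj.get("thesaurus_terms", []))

    def extract_ml_features(obj):   # Step 1b: labels from a machine-learned classifier
        return set(obj.get("classifier_labels", []))

    def merge_features(obj):        # Step 2: merge both feature sets
        return extract_ko_features(obj) | extract_ml_features(obj)

    def build_index(collection):    # Step 3: index the multimedia objects by feature
        index = {}
        for obj in collection:
            for term in merge_features(obj):
                index.setdefault(term, []).append(obj["id"])
        return index

    def search(index, term):        # Step 4: an application that uses the index
        return index.get(term, [])

    def update_index(index, obj):   # Step 5: maintain/update the stored features
        for term in merge_features(obj):
            index.setdefault(term, []).append(obj["id"])

    # Example usage with toy objects
    docs = [{"id": 1, "thesaurus_terms": ["jazz"], "classifier_labels": ["saxophone"]},
            {"id": 2, "thesaurus_terms": ["news"], "classifier_labels": ["anchor", "studio"]}]
    idx = build_index(docs)
    print(search(idx, "jazz"))  # -> [1]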
  7. Iyengar, S.S.: Visual based retrieval systems and Web mining (2001) 0.01
    0.0054452433 = product of:
      0.010890487 = sum of:
        0.010890487 = product of:
          0.021780973 = sum of:
            0.021780973 = weight(_text_:systems in 6520) [ClassicSimilarity], result of:
              0.021780973 = score(doc=6520,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.1358164 = fieldWeight in 6520, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.03125 = fieldNorm(doc=6520)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    

Languages

  • English (e): 43
  • German (d): 2
  • French (f): 2