Search (35 results, page 1 of 2)

  • × language_ss:"e"
  • × type_ss:"a"
  • × type_ss:"el"
  • × year_i:[2010 TO 2020}
  1. Xu, G.; Cao, Y.; Ren, Y.; Li, X.; Feng, Z.: Network security situation awareness based on semantic ontology and user-defined rules for Internet of Things (2017) 0.01
    0.014032597 = product of:
      0.04209779 = sum of:
        0.04209779 = product of:
          0.12629336 = sum of:
            0.12629336 = weight(_text_:network in 306) [ClassicSimilarity], result of:
              0.12629336 = score(doc=306,freq=14.0), product of:
                0.19402927 = queryWeight, product of:
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.043569047 = queryNorm
                0.6508985 = fieldWeight in 306, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=306)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    The Internet of Things (IoT) brings the third development wave of the global information industry, making users, networks and perception devices cooperate more closely. However, if IoT has security problems, it may cause various kinds of damage and even threaten human lives and property. To improve the abilities of monitoring, providing emergency response and predicting the development trend of IoT security, a new paradigm called network security situation awareness (NSSA) is proposed. However, it is limited in its ability to mine and evaluate security situation elements from multi-source heterogeneous network security information. To solve this problem, this paper proposes an IoT network security situation awareness model using a situation reasoning method based on semantic ontology and user-defined rules. Ontology technology can provide a unified and formalized description to solve the problem of semantic heterogeneity in the IoT security domain. In this paper, four key sub-domains are proposed to reflect an IoT security situation: context, attack, vulnerability and network flow. Further, user-defined rules can compensate for the limited description ability of ontology and hence enhance the reasoning ability of our proposed ontology model. Examples in real IoT scenarios show that network security situation awareness adopting our situation reasoning method is more comprehensive and offers more powerful reasoning than traditional NSSA methods. [http://ieeexplore.ieee.org/abstract/document/7999187/]
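    The indented breakdown under each result is Lucene's ClassicSimilarity explain output. As a reading aid, the same computation can be written as one formula, shown here with the values reported for result 1 above (ClassicSimilarity uses tf(freq) = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)), so idf = 1 + ln(44218 / 1399) ≈ 4.4534):

      weight(_text_:network) = queryWeight × fieldWeight
                             = (idf × queryNorm) × (sqrt(freq) × idf × fieldNorm)
                             = (4.4533744 × 0.043569047) × (sqrt(14) × 4.4533744 × 0.0390625)
                             = 0.19402927 × 0.6508985
                             = 0.12629336
      score                  = weight × coord(1/3) × coord(1/3)
                             = 0.12629336 / 9
                             ≈ 0.0140326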
  2. Leskinen, P.; Hyvönen, E.: Extracting genealogical networks of linked data from biographical texts (2019) 0.01
    0.012861087 = product of:
      0.03858326 = sum of:
        0.03858326 = product of:
          0.11574978 = sum of:
            0.11574978 = weight(_text_:network in 5798) [ClassicSimilarity], result of:
              0.11574978 = score(doc=5798,freq=6.0), product of:
                0.19402927 = queryWeight, product of:
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.043569047 = queryNorm
                0.59655833 = fieldWeight in 5798, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5798)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper presents the idea and our work of extracting and reassembling a genealogical network automatically from a collection of biographies. The network can be used as a tool for network analysis of historical persons. The data has been published as Linked Data and as an interactive online service as part of the in-use data service and semantic portal BiographySampo - Finnish Biographies on the Semantic Web.
  3. Mao, J.; Xu, W.; Yang, Y.; Wang, J.; Yuille, A.L.: Explain images with multimodal recurrent neural networks (2014) 0.01
    0.01102379 = product of:
      0.03307137 = sum of:
        0.03307137 = product of:
          0.0992141 = sum of:
            0.0992141 = weight(_text_:network in 1557) [ClassicSimilarity], result of:
              0.0992141 = score(doc=1557,freq=6.0), product of:
                0.19402927 = queryWeight, product of:
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.043569047 = queryNorm
                0.51133573 = fieldWeight in 1557, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1557)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    In this paper, we present a multimodal Recurrent Neural Network (m-RNN) model for generating novel sentence descriptions to explain the content of images. It directly models the probability distribution of generating a word given previous words and the image. Image descriptions are generated by sampling from this distribution. The model consists of two sub-networks: a deep recurrent neural network for sentences and a deep convolutional network for images. These two sub-networks interact with each other in a multimodal layer to form the whole m-RNN model. The effectiveness of our model is validated on three benchmark datasets (IAPR TC-12 [8], Flickr 8K [28], and Flickr 30K [13]). Our model outperforms the state-of-the-art generative method. In addition, the m-RNN model can be applied to retrieval tasks for retrieving images or sentences, and achieves significant performance improvement over the state-of-the-art methods which directly optimize the ranking objective function for retrieval.
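    As stated in this abstract, the model "directly models the probability distribution of generating a word given previous words and the image"; for a sentence w_1, ..., w_T and an image I this amounts to the factorization

      P(w_1, ..., w_T | I) = ∏_{t=1}^{T} P(w_t | w_1, ..., w_{t-1}, I)

    and image descriptions are obtained by sampling words from this distribution one at a time.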
  4. Momeni, F.; Mayr, P.: Analyzing the research output presented at European Networked Knowledge Organization Systems workshops (2000-2015) (2016) 0.01
    0.010607645 = product of:
      0.031822935 = sum of:
        0.031822935 = product of:
          0.095468804 = sum of:
            0.095468804 = weight(_text_:network in 3106) [ClassicSimilarity], result of:
              0.095468804 = score(doc=3106,freq=8.0), product of:
                0.19402927 = queryWeight, product of:
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.043569047 = queryNorm
                0.492033 = fieldWeight in 3106, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3106)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    In this paper we analyze a major part of the research output of the Networked Knowledge Organization Systems (NKOS) community in the period 2000 to 2015 from a network analytical perspective. We focus on the paper output presented at the European NKOS workshops in the last 15 years. Our open dataset, the "NKOS bibliography", includes 14 workshop agendas (ECDL 2000-2010, TPDL 2011-2015) and 4 special issues on NKOS (2001, 2004, 2006 and 2015), which cover 171 papers with 218 distinct authors in total. A focus of the analysis is the visualization of co-authorship networks in this interdisciplinary field. We used standard network analytic measures like degree and betweenness centrality to describe the co-authorship distribution in our NKOS dataset. We can see in our dataset that 15% of authors (with degree=0) had no co-authorship with others and 53% of them had a maximum of 3 cooperations with other authors; 32% had at least 4 co-authors for all of their papers. The NKOS co-author network in the "NKOS bibliography" is a typical co-authorship network with one relatively large component, many smaller components and many isolated co-authorships or triples.
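    As an illustration of the network measures named in this abstract (degree and betweenness centrality, connected components), not the authors' actual pipeline, a minimal Python sketch using networkx on an invented toy co-authorship graph:

      import networkx as nx

      # Toy co-authorship graph: an edge means two people co-authored at least one paper.
      # Names and links are invented for illustration only.
      G = nx.Graph()
      G.add_edges_from([
          ("Author A", "Author B"),
          ("Author A", "Author C"),
          ("Author B", "Author C"),
          ("Author D", "Author E"),   # a small separate component
      ])
      G.add_node("Author F")          # an isolated author (degree = 0)

      degree = nx.degree_centrality(G)              # normalized degree per author
      betweenness = nx.betweenness_centrality(G)    # how often an author lies on shortest paths
      components = list(nx.connected_components(G))

      print(degree)
      print(betweenness)
      print(len(components), "components, largest:", max(len(c) for c in components))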
  5. Gutierres Castanha, R.C.; Hilário, C.M.; Araújo, P.C. de; Cabrini Grácio, M.C.: Citation analysis of North American Symposium on Knowledge Organization (NASKO) Proceedings (2007-2015) (2017) 0.01
    0.007500738 = product of:
      0.022502214 = sum of:
        0.022502214 = product of:
          0.06750664 = sum of:
            0.06750664 = weight(_text_:network in 3863) [ClassicSimilarity], result of:
              0.06750664 = score(doc=3863,freq=4.0), product of:
                0.19402927 = queryWeight, product of:
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.043569047 = queryNorm
                0.34791988 = fieldWeight in 3863, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3863)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Knowledge Organization (KO) theoretical foundations are still being developed in a continuous process of epistemological, theoretical and methodological consolidation. The remarkable growth of scientific records has stimulated the analysis of this production, and the creation of instruments to evaluate the behavior of science has become indispensable. We propose a domain analysis of KO in North America through citation analysis of the North American Symposium on Knowledge Organization (NASKO) proceedings (2007-2015). We present citation, co-citation and bibliographic coupling analyses to visualize and recognize the researchers who influence scholarly communication in this domain. The most prolific authors across the NASKO conferences are Smiraglia, Tennis, Green, Dousa, Grant Campbell, Pimentel, Beak, La Barre, Kipp and Fox. Regarding their theoretical references, Hjørland, Olson, Smiraglia and Ranganathan are the authors who most inspired the event's studies. The co-citation network shows that the highest frequency is between Olson and Mai, followed by Hjørland and Mai and by Beghtol and Mai, consolidating Mai and Hjørland as the central authors of the theoretical references in NASKO. The strongest theoretical proximity in the author bibliographic coupling network occurs between Fox and Tennis, Dousa and Tennis, Tennis and Smiraglia, Dousa and Beak, and Pimentel and Tennis, highlighting Tennis as the central author who interconnects the others in relation to KO theoretical references in NASKO. The North American chapter has demonstrated strong scientific production as well as a high level of concern with theoretical and epistemological questions, gathering researchers from different countries, universities and knowledge areas.
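    For readers unfamiliar with the two measures used above, a minimal sketch (with invented reference lists, not the NASKO data) of how co-citation and bibliographic coupling counts are derived from citing papers' reference lists:

      from itertools import combinations
      from collections import Counter

      # Hypothetical citing papers mapped to the authors they cite.
      references = {
          "paper1": {"Hjorland", "Olson", "Mai"},
          "paper2": {"Olson", "Mai", "Smiraglia"},
          "paper3": {"Hjorland", "Mai"},
      }

      # Co-citation: two cited authors appear together in the same reference list.
      cocitation = Counter()
      for refs in references.values():
          for pair in combinations(sorted(refs), 2):
              cocitation[pair] += 1

      # Bibliographic coupling: two citing papers share cited authors.
      coupling = Counter()
      for (p1, r1), (p2, r2) in combinations(references.items(), 2):
          if r1 & r2:
              coupling[(p1, p2)] = len(r1 & r2)

      print(cocitation.most_common(3))
      print(coupling.most_common(3))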
  6. Szostak, R.; Smiraglia, R.P.: Comparative approaches to interdisciplinary KOSs : use cases of converting UDC to BCC (2017) 0.01
    0.007425352 = product of:
      0.022276055 = sum of:
        0.022276055 = product of:
          0.06682816 = sum of:
            0.06682816 = weight(_text_:network in 3874) [ClassicSimilarity], result of:
              0.06682816 = score(doc=3874,freq=2.0), product of:
                0.19402927 = queryWeight, product of:
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.043569047 = queryNorm
                0.3444231 = fieldWeight in 3874, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3874)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    We take a small sample of works and compare how they are classified within both the Universal Decimal Classification (UDC) and the Basic Concepts Classification (BCC). We examine notational length, expressivity, network effects, and the number of subject strings. One key finding is that BCC typically synthesizes many more terms than UDC in classifying a particular document, but the length of classificatory notations is roughly equivalent for the two KOSs. BCC captures documents with fewer subject strings (generally one), but these are more complex.
  7. Guidi, F.; Sacerdoti Coen, C.: ¬A survey on retrieval of mathematical knowledge (2015) 0.01
    0.006558894 = product of:
      0.019676682 = sum of:
        0.019676682 = product of:
          0.059030045 = sum of:
            0.059030045 = weight(_text_:22 in 5865) [ClassicSimilarity], result of:
              0.059030045 = score(doc=5865,freq=2.0), product of:
                0.15257138 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043569047 = queryNorm
                0.38690117 = fieldWeight in 5865, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5865)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Date
    22. 2.2017 12:51:57
  8. Sojka, P.; Liska, M.: ¬The art of mathematics retrieval (2011) 0.01
    0.0064929742 = product of:
      0.019478923 = sum of:
        0.019478923 = product of:
          0.058436766 = sum of:
            0.058436766 = weight(_text_:22 in 3450) [ClassicSimilarity], result of:
              0.058436766 = score(doc=3450,freq=4.0), product of:
                0.15257138 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043569047 = queryNorm
                0.38301262 = fieldWeight in 3450, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3450)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Content
    Cf.: DocEng2011, September 19-22, 2011, Mountain View, California, USA. Copyright 2011 ACM 978-1-4503-0863-2/11/09
    Date
    22. 2.2017 13:00:42
  9. Karpathy, A.; Fei-Fei, L.: Deep visual-semantic alignments for generating image descriptions (2015) 0.01
    0.0063645868 = product of:
      0.01909376 = sum of:
        0.01909376 = product of:
          0.057281278 = sum of:
            0.057281278 = weight(_text_:network in 1868) [ClassicSimilarity], result of:
              0.057281278 = score(doc=1868,freq=2.0), product of:
                0.19402927 = queryWeight, product of:
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.043569047 = queryNorm
                0.29521978 = fieldWeight in 1868, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1868)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    We present a model that generates free-form natural language descriptions of image regions. Our model leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between text and visual data. Our approach is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate the effectiveness of our alignment model with ranking experiments on Flickr8K, Flickr30K and COCO datasets, where we substantially improve on the state of the art. We then show that the sentences created by our generative model outperform retrieval baselines on the three aforementioned datasets and a new dataset of region-level annotations.
  10. Kiros, R.; Salakhutdinov, R.; Zemel, R.S.: Unifying visual-semantic embeddings with multimodal neural language models (2014) 0.01
    0.0063645868 = product of:
      0.01909376 = sum of:
        0.01909376 = product of:
          0.057281278 = sum of:
            0.057281278 = weight(_text_:network in 1871) [ClassicSimilarity], result of:
              0.057281278 = score(doc=1871,freq=2.0), product of:
                0.19402927 = queryWeight, product of:
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.043569047 = queryNorm
                0.29521978 = fieldWeight in 1871, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1871)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Inspired by recent advances in multimodal learning and machine translation, we introduce an encoder-decoder pipeline that learns (a): a multimodal joint embedding space with images and text and (b): a novel language model for decoding distributed representations from our space. Our pipeline effectively unifies joint image-text embedding models with multimodal neural language models. We introduce the structure-content neural language model that disentangles the structure of a sentence from its content, conditioned on representations produced by the encoder. The encoder allows one to rank images and sentences while the decoder can generate novel descriptions from scratch. Using LSTM to encode sentences, we match the state-of-the-art performance on Flickr8K and Flickr30K without using object detections. We also set new best results when using the 19-layer Oxford convolutional network. Furthermore we show that with linear encoders, the learned embedding space captures multimodal regularities in terms of vector space arithmetic, e.g. *image of a blue car* - "blue" + "red" is near images of red cars. Sample captions generated for 800 images are made available for comparison.
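    The vector arithmetic mentioned at the end of this abstract can be illustrated with a short sketch; the toy vectors below are constructed so that the regularity holds by design and have nothing to do with the paper's trained model:

      import numpy as np

      rng = np.random.default_rng(0)
      dim = 16

      # Invented word vectors; the image embeddings are placed near word compositions
      # so that the "blue car" - "blue" + "red" regularity holds by construction.
      car, blue, red = rng.normal(size=(3, dim))
      embeddings = {
          "image_blue_car": car + blue,
          "image_red_car": car + red,
          "blue": blue,
          "red": red,
      }

      def cosine(a, b):
          return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

      query = embeddings["image_blue_car"] - embeddings["blue"] + embeddings["red"]
      best = max(embeddings, key=lambda k: cosine(query, embeddings[k]))
      print(best)   # image_red_car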
  11. Gore, E.; Bitta, M.D.; Cohen, D.: ¬The Digital Public Library of America and the National Digital Platform (2017) 0.01
    0.0063645868 = product of:
      0.01909376 = sum of:
        0.01909376 = product of:
          0.057281278 = sum of:
            0.057281278 = weight(_text_:network in 3655) [ClassicSimilarity], result of:
              0.057281278 = score(doc=3655,freq=2.0), product of:
                0.19402927 = queryWeight, product of:
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.043569047 = queryNorm
                0.29521978 = fieldWeight in 3655, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3655)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    The Digital Public Library of America brings together the riches of America's libraries, archives, and museums, and makes them freely available to the world. In order to do this, DPLA has had to build elements of the national digital platform to connect to those institutions and to serve their digitized materials to audiences. In this article, we detail the construction of two critical elements of our work: the decentralized national network of "hubs," which operate in states across the country; and a version of the Hydra repository software that is tailored to the needs of our community. This technology and the organizations that make use of it serve as the foundation of the future of DPLA and other projects that seek to take advantage of the national digital platform.
  12. Wu, Y.; Bai, R.: ¬An event relationship model for knowledge organization and visualization (2017) 0.01
    0.0063645868 = product of:
      0.01909376 = sum of:
        0.01909376 = product of:
          0.057281278 = sum of:
            0.057281278 = weight(_text_:network in 3867) [ClassicSimilarity], result of:
              0.057281278 = score(doc=3867,freq=2.0), product of:
                0.19402927 = queryWeight, product of:
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.043569047 = queryNorm
                0.29521978 = fieldWeight in 3867, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3867)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    An event is a specific occurrence involving participants, which is a typed, n-ary association of entities or other events, each identified as a participant in a specific semantic role in the event (Pyysalo et al. 2012; Linguistic Data Consortium 2005). Event types may vary across domains. Representing relationships between events can facilitate the understanding of knowledge in complex systems (such as economic systems, human body, social systems). In the simplest form, an event can be represented as Entity A <Relation> Entity B. This paper evaluates several knowledge organization and visualization models and tools, such as concept maps (Cmap), topic maps (Ontopia), network analysis models (Gephi), and ontology (Protégé), then proposes an event relationship model that aims to integrate the strengths of these models, and can represent complex knowledge expressed in events and their relationships.
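    In the spirit of the "Entity A <Relation> Entity B" form used above, a minimal sketch of one possible representation of typed, n-ary events and relationships between them (the types and the example are invented, not taken from the paper):

      from dataclasses import dataclass, field

      @dataclass
      class Event:
          event_type: str
          # role name -> participant (an entity name or another Event)
          participants: dict = field(default_factory=dict)

      @dataclass
      class EventRelation:
          source: Event
          relation: str           # e.g. "causes", "precedes"
          target: Event

      # Hypothetical example: an interest-rate rise causing a drop in borrowing.
      rate_rise = Event("RateChange", {"agent": "Central Bank", "direction": "up"})
      borrow_drop = Event("BorrowingChange", {"population": "Households", "direction": "down"})
      link = EventRelation(rate_rise, "causes", borrow_drop)

      print(f"{link.source.event_type} <{link.relation}> {link.target.event_type}")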
  13. Wei, W.; Ram, S.: Utilizing social bookmarking tag space for Web content discovery : a social network analysis approach (2010) 0.01
    0.0060005905 = product of:
      0.01800177 = sum of:
        0.01800177 = product of:
          0.05400531 = sum of:
            0.05400531 = weight(_text_:network in 1) [ClassicSimilarity], result of:
              0.05400531 = score(doc=1,freq=4.0), product of:
                0.19402927 = queryWeight, product of:
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.043569047 = queryNorm
                0.2783359 = fieldWeight in 1, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Social bookmarking has gained popularity since the advent of Web 2.0. Keywords known as tags are created to annotate web content, and the resulting tag space composed of the tags, the resources, and the users arises as a new platform for web content discovery. Useful and interesting web resources can be located through searching and browsing based on tags, as well as following the user-user connections formed in the social bookmarking community. However, the effectiveness of tag-based search is limited due to the lack of explicitly represented semantics in the tag space. In addition, social connections between users are underused for web content discovery because of the inadequate social functions. In this research, we propose a comprehensive framework to reorganize the flat tag space into a hierarchical faceted model. We also studied the structure and properties of various networks emerging from the tag space for the purpose of more efficient web content discovery. The major research approach used in this research is social network analysis (SNA), together with methodologies employed in design science research. The contribution of our research includes: (i) a faceted model to categorize social bookmarking tags; (ii) a relationship ontology to represent the semantics of relationships between tags; (iii) heuristics to reorganize the flat tag space into a hierarchical faceted model using analysis of tag-tag co-occurrence networks; (iv) an implemented prototype system as proof-of-concept to validate the feasibility of the reorganization approach; (v) a set of evaluations of the social functions of the current networking features of social bookmarking and a series of recommendations as to how to improve the social functions to facilitate web content discovery.
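    As a small illustration of the tag-tag co-occurrence networks analysed in (iii), a sketch that builds a weighted co-occurrence graph from a few invented bookmarks (again using networkx; this is not the authors' heuristics or data):

      from itertools import combinations
      import networkx as nx

      # Hypothetical bookmarks, each annotated with a set of tags.
      bookmarks = [
          {"python", "tutorial", "programming"},
          {"python", "data", "programming"},
          {"travel", "photography"},
      ]

      # Edge weight = number of bookmarks in which both tags co-occur.
      G = nx.Graph()
      for tags in bookmarks:
          for a, b in combinations(sorted(tags), 2):
              if G.has_edge(a, b):
                  G[a][b]["weight"] += 1
              else:
                  G.add_edge(a, b, weight=1)

      for a, b, data in G.edges(data=True):
          print(a, "--", b, "weight =", data["weight"])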
  14. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2012) 0.01
    0.0055654063 = product of:
      0.016696218 = sum of:
        0.016696218 = product of:
          0.050088655 = sum of:
            0.050088655 = weight(_text_:22 in 1967) [ClassicSimilarity], result of:
              0.050088655 = score(doc=1967,freq=4.0), product of:
                0.15257138 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043569047 = queryNorm
                0.32829654 = fieldWeight in 1967, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1967)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper reports on the second part of an initiative of the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine if classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The paper discusses the relationships between entities (same type or different types) in the context of classification systems that involve multiple translations and /or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the DDC (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) Chinese Library Classification. The use cases of conceptual models in practice are also discussed.
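    A minimal sketch of the FRSAD-style separation between a thema (here, one classification class) and the nomens by which it is known across translations; the notation and captions are placeholders, not values from DDC 22 or the Chinese Library Classification:

      from dataclasses import dataclass, field

      @dataclass
      class Nomen:
          value: str       # the label or notation string
          language: str    # e.g. "en", "sv", "zxx" (no linguistic content)
          kind: str        # e.g. "notation" or "caption"

      @dataclass
      class Thema:
          identifier: str
          nomens: list = field(default_factory=list)

      # One class (thema) known under several nomens across translations - placeholders only.
      cls = Thema("class-001")
      cls.nomens.append(Nomen("001", "zxx", "notation"))
      cls.nomens.append(Nomen("English caption", "en", "caption"))
      cls.nomens.append(Nomen("Swedish caption", "sv", "caption"))

      for n in cls.nomens:
          print(cls.identifier, "->", n.value, f"({n.language}, {n.kind})")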
  15. Donahue, J.; Hendricks, L.A.; Guadarrama, S.; Rohrbach, M.; Venugopalan, S.; Saenko, K.; Darrell, T.: Long-term recurrent convolutional networks for visual recognition and description (2014) 0.01
    0.0053038225 = product of:
      0.015911467 = sum of:
        0.015911467 = product of:
          0.047734402 = sum of:
            0.047734402 = weight(_text_:network in 1873) [ClassicSimilarity], result of:
              0.047734402 = score(doc=1873,freq=2.0), product of:
                0.19402927 = queryWeight, product of:
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.043569047 = queryNorm
                0.2460165 = fieldWeight in 1873, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1873)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or "temporally deep", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are "doubly deep" in that they can be compositional in spatial and temporal "layers". Such models may have advantages when target concepts are complex and/or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they directly can map variable-length inputs (e.g., video frames) to variable length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and/or optimized.
  16. Aslam, S.; Sonkar, S.K.: Semantic Web : an overview (2019) 0.01
    0.0052947453 = product of:
      0.015884236 = sum of:
        0.015884236 = product of:
          0.047652703 = sum of:
            0.047652703 = weight(_text_:29 in 54) [ClassicSimilarity], result of:
              0.047652703 = score(doc=54,freq=2.0), product of:
                0.15326229 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.043569047 = queryNorm
                0.31092256 = fieldWeight in 54, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=54)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Date
    10.12.2020 9:29:12
  17. Bensman, S.J.: Eugene Garfield, Francis Narin, and PageRank : the theoretical bases of the Google search engine (2013) 0.01
    0.005247115 = product of:
      0.015741345 = sum of:
        0.015741345 = product of:
          0.047224034 = sum of:
            0.047224034 = weight(_text_:22 in 1149) [ClassicSimilarity], result of:
              0.047224034 = score(doc=1149,freq=2.0), product of:
                0.15257138 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043569047 = queryNorm
                0.30952093 = fieldWeight in 1149, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1149)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Date
    17.12.2013 11:02:22
  18. Bates, M.J.: ¬The nature of browsing (2019) 0.00
    0.0046329014 = product of:
      0.013898704 = sum of:
        0.013898704 = product of:
          0.041696113 = sum of:
            0.041696113 = weight(_text_:29 in 2265) [ClassicSimilarity], result of:
              0.041696113 = score(doc=2265,freq=2.0), product of:
                0.15326229 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.043569047 = queryNorm
                0.27205724 = fieldWeight in 2265, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2265)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Date
    25. 6.2019 11:13:29
  19. Stoykova, V.; Petkova, E.: Automatic extraction of mathematical terms for precalculus (2012) 0.00
    0.0046329014 = product of:
      0.013898704 = sum of:
        0.013898704 = product of:
          0.041696113 = sum of:
            0.041696113 = weight(_text_:29 in 156) [ClassicSimilarity], result of:
              0.041696113 = score(doc=156,freq=2.0), product of:
                0.15326229 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.043569047 = queryNorm
                0.27205724 = fieldWeight in 156, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=156)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Date
    29. 5.2012 10:17:08
  20. Godfrey, B.; Johnson, J.: ¬The geospatial metadata manager's toolbox : three techniques for maintaining records (2015) 0.00
    0.0046329014 = product of:
      0.013898704 = sum of:
        0.013898704 = product of:
          0.041696113 = sum of:
            0.041696113 = weight(_text_:29 in 2275) [ClassicSimilarity], result of:
              0.041696113 = score(doc=2275,freq=2.0), product of:
                0.15326229 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.043569047 = queryNorm
                0.27205724 = fieldWeight in 2275, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2275)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Source
    Code4Lib journal. Issue 29(2015), [http://journal.code4lib.org/issues/issues/issue29]