Machine Learning

  • Compounding the Performance Improvements of Assembled Techniques in a Convolutional Neural Network
    by /u/Mic_Pie on January 27, 2020 at 5:46 am

  • [Research] An Explicit Local and Global Representation Disentanglement Framework with Applications in Deep Clustering and Unsupervised Object Detection
    by /u/51616 on January 27, 2020 at 5:38 am

    I just submitted a paper about disentangled representation learning. It's an extension to VAE models. Hope you find it useful. If you have any questions, don't hesitate to ask in the comments 🙂

    Abstract: There are several benefits from learning disentangled representations, including interpretability, compositionality and generalisation to new tasks. Disentanglement can be achieved by imposing an inductive bias based on prior knowledge, and different priors make different structural assumptions about the representations. For instance, priors with different granularities can lead to representations that describe data at different scales. A notable example is the visual domain, where variation in the data occurs at multiple scales, so learning representations at different scales is useful for different tasks. In this work, we propose a framework, called SPLIT, which allows us to disentangle local and global information into two separate sets of latent variables within the variational autoencoder (VAE) framework. Our framework adds an extra generative assumption to the VAE by requiring a subset of the latent variables to generate an auxiliary set of observable data. This set of data, which contains only local information, can be obtained via a transformation of the original data that removes global information. Three different flavours of VAE with different generative assumptions were examined in our experiments. We show that the framework can be effectively used to disentangle local and global information within these models, and we demonstrate its benefits through multiple downstream representation learning problems. The framework can unlock the potential of these VAE models in the tasks of style transfer, deep clustering and unsupervised object detection with a simple modification to existing VAE models. Finally, we review cognitive neuroscience literature regarding disentanglement in human visual perception. The code for our experiments can be found at this https URL.

    https://arxiv.org/abs/2001.08957
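
    To make the extra generative assumption concrete, here is a minimal sketch (not the authors' code) of the SPLIT idea in PyTorch: the latent vector is split into local and global parts, and the local part alone must also reconstruct an auxiliary view of the input from which global information has been removed. The patch-shuffling transform, the architecture and the loss weights below are all illustrative assumptions.

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def remove_global_info(x, patch=7):
        # Assumed example transform: shuffle image patches so that only
        # local statistics (texture/style) survive, not global layout.
        b, c, h, w = x.shape
        p = x.unfold(2, patch, patch).unfold(3, patch, patch)  # B,C,gh,gw,p,p
        gh, gw = p.shape[2], p.shape[3]
        p = p.reshape(b, c, gh * gw, patch, patch)
        p = p[:, :, torch.randperm(gh * gw, device=x.device)]
        p = p.reshape(b, c, gh, gw, patch, patch).permute(0, 1, 2, 4, 3, 5)
        return p.reshape(b, c, h, w)

    class SplitVAE(nn.Module):
        def __init__(self, x_dim=784, z_loc=8, z_glob=8, hidden=256):
            super().__init__()
            self.z_loc = z_loc
            self.enc = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU())
            self.mu = nn.Linear(hidden, z_loc + z_glob)
            self.logvar = nn.Linear(hidden, z_loc + z_glob)
            # The full decoder sees all latents; the auxiliary decoder sees
            # only the local subset and must explain the shuffled view.
            self.dec = nn.Sequential(nn.Linear(z_loc + z_glob, hidden),
                                     nn.ReLU(), nn.Linear(hidden, x_dim))
            self.dec_aux = nn.Sequential(nn.Linear(z_loc, hidden),
                                         nn.ReLU(), nn.Linear(hidden, x_dim))

        def forward(self, x, x_aux):
            h = self.enc(x)
            mu, logvar = self.mu(h), self.logvar(h)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
            kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(1).mean()
            rec = F.mse_loss(self.dec(z), x)
            rec_aux = F.mse_loss(self.dec_aux(z[:, :self.z_loc]), x_aux)
            return rec + rec_aux + 1e-3 * kl  # loss weights are assumptions

    x = torch.rand(16, 1, 28, 28)  # e.g. MNIST-shaped batch
    loss = SplitVAE()(x.flatten(1), remove_global_info(x).flatten(1))
    ```

    In this sketch the auxiliary reconstruction path can only explain local statistics, which is what pressures the remaining latents to carry the global information.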

  • [P] Best way to increase priority of accepted answers
    by /u/legit0ne on January 27, 2020 at 5:12 am

    I am working on a dataset that contains various questions and their answers. I have to build a recommendation system that suggests a list of appropriate answers for the question a user asks. I am using nltk and gensim to compute document similarity among the questions and then recommending the matching answers, which works fine. But I also want to increase the priority of an answer according to whether or not it has been accepted by users for a particular question. Any suggestions would be helpful.
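
    One simple approach (a sketch, not the only option): rerank the gensim candidates by blending the similarity score with a smoothed acceptance rate. The field names, weights and smoothing constant below are assumptions to tune against your data.

    ```python
    def rerank(candidates, alpha=0.7, prior=1.0):
        """candidates: list of dicts with assumed keys
        'answer', 'similarity' (gensim score, scaled to [0, 1]),
        'accepted' (times users accepted this answer),
        'shown' (times it was suggested)."""
        scored = []
        for c in candidates:
            # Laplace-smoothed acceptance rate, so answers with little
            # feedback are not pushed to zero.
            accept_rate = (c["accepted"] + prior) / (c["shown"] + 2 * prior)
            score = alpha * c["similarity"] + (1 - alpha) * accept_rate
            scored.append((score, c["answer"]))
        return [a for _, a in sorted(scored, reverse=True)]

    suggestions = rerank([
        {"answer": "Restart the service.", "similarity": 0.82,
         "accepted": 14, "shown": 20},
        {"answer": "Check the config file.", "similarity": 0.90,
         "accepted": 1, "shown": 30},
    ])
    ```

    Here alpha controls how much user acceptance can override pure text similarity, and the smoothing prior keeps new answers competitive until they have accumulated feedback.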

  • [D] Untrained deep prior, but for discrete data?
    by /u/tsauri on January 27, 2020 at 4:56 am

    A deep prior is a randomly initialized NN that can do unsupervised learning on continuous data (images, audio, video), with tasks such as image inpainting, audio denoising, audio separation, sparse map completion, etc. Is there such a thing as an untrained prior for discrete data such as text? Can we get something like "fine-tune a GPT-2", but with a randomly initialized NN?
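
    For reference, here is a compact sketch of the deep-prior idea on continuous data (in the spirit of Deep Image Prior, not any particular codebase): fit a randomly initialized CNN to a single corrupted image, compute the loss only on the known pixels, and rely on the network's inductive bias plus early stopping to fill in the rest. The architecture and hyperparameters are illustrative assumptions.

    ```python
    import torch
    import torch.nn as nn

    def inpaint(corrupted, mask, steps=2000, lr=1e-3):
        """corrupted: (1, C, H, W) image in [0, 1]; mask: same shape,
        1 = known pixel, 0 = missing pixel."""
        net = nn.Sequential(  # small untrained CNN; never pretrained
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, corrupted.shape[1], 3, padding=1), nn.Sigmoid(),
        )
        z = torch.randn(1, 32, *corrupted.shape[2:])  # fixed random input code
        opt = torch.optim.Adam(net.parameters(), lr=lr)
        for _ in range(steps):  # stopping early acts as the regularizer
            opt.zero_grad()
            out = net(z)
            # Loss only on observed pixels; the architecture's bias
            # fills in the masked-out region.
            loss = ((out - corrupted).pow(2) * mask).mean()
            loss.backward()
            opt.step()
        return net(z).detach()
    ```

    The open question in the post is whether a randomly initialized sequence model could play the same role for discrete data, where there is no smooth output space with an obvious analogue of this pixel-wise reconstruction loss.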

  • [D] Benchmark Environments for testing RL Algorithms
    by /u/theneuralbeing on January 27, 2020 at 4:34 am

    Hi, I am starting to do research in deep RL and wanted to know whether there are any benchmark environments for testing whether an RL algorithm is performing well, or whether only environment-specific leaderboards exist, like those for StarCraft.
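
    For context, the most widely shared benchmark interface is OpenAI Gym, which classic-control, Atari and MuJoCo tasks all expose. A minimal evaluation loop using the classic Gym API (reset returns the observation; step returns a 4-tuple); CartPole-v1 and the random policy are placeholders for your own environment and agent:

    ```python
    import gym

    env = gym.make("CartPole-v1")
    returns = []
    for episode in range(10):
        obs = env.reset()
        done, total = False, 0.0
        while not done:
            action = env.action_space.sample()  # replace with agent(obs)
            obs, reward, done, info = env.step(action)
            total += reward
        returns.append(total)

    # Mean episodic return is the usual quantity reported on benchmarks.
    print(f"mean return over {len(returns)} episodes: "
          f"{sum(returns) / len(returns):.1f}")
    ```

    Because every environment exposes this same interface, comparing an algorithm across a suite of tasks is mostly a matter of swapping the environment name.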