deep_learning

Paper Summary: Biologically Plausible Networks

Biologically-Plausible Learning Algorithms Can Scale to Large Datasets, by Will Xiao [1], Honglin Chen [2], Qianli Liao [2], and Tomaso Poggio [2]

[1] Department of Molecular and Cellular Biology, Harvard University
[2] Center for Brains, Minds, and Machines, MIT

Background

Consider a layer in a feedforward neural network. Let $x_i$ denote the input to the $i$-th neuron in the layer and $y_j$ the output of the $j$-th neuron. Let $W$ denote the feedforward weight matrix and $W_{ij}$ the connection between input $x_i$ and output $y_j$.
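To make the setup concrete, here is a minimal NumPy sketch of one such layer (illustrative only, not code from the paper); the layer sizes and the ReLU activation are assumptions, and $W$ is stored so that row $j$ holds the weights feeding output $y_j$:

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 4, 3                   # layer sizes (arbitrary, for illustration)
W = rng.normal(size=(n_out, n_in))   # W[j, i] plays the role of W_ij: x_i -> y_j
x = rng.normal(size=n_in)            # inputs x_i to the layer

# Feedforward pass: each output y_j is a weighted sum of the inputs x_i,
# passed through a nonlinearity (ReLU here, as a stand-in activation).
y = np.maximum(0.0, W @ x)
print(y)  # one output y_j per neuron in the layer
```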

KotlinSyft

KotlinSyft makes it easy for you to **train and execute PySyft models on Android devices**. This allows you to utilize training data located directly on the device itself, bypassing the need to send a user's data to a central server. [The first use case for our library is here](https://blog.openmined.org/apheris-openmined-pytorch-announcement)

Adversarial Corruption

This work aims to derive a non-trivial breakdown point for an algorithm for training a single hidden-layer neural network. In pursuit of this goal, we propose two algorithms for training a network with ReLU activations. The first approach utilizes the partitioning property of the ReLU function, while the second utilizes the convexity of the activation function.
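Both properties are easy to state concretely. The sketch below (a hedged illustration of the two properties, not the paper's algorithms) checks, at sample points, that a ReLU unit is linear once the sign of its pre-activation is fixed, and that ReLU satisfies the convexity inequality:

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

rng = np.random.default_rng(1)
w = rng.normal(size=5)   # weights of one hidden unit (arbitrary, for illustration)
x = rng.normal(size=5)   # one input point

# Partitioning property: once the sign of the pre-activation w.x is fixed,
# ReLU(w.x) is either the identity or zero -- i.e. linear -- so the sign
# patterns partition the input space into regions of linear behavior.
pre = w @ x
assert relu(pre) == (pre if pre > 0 else 0.0)

# Convexity: relu(t*a + (1-t)*b) <= t*relu(a) + (1-t)*relu(b)
# for any a, b and any t in [0, 1]; checked here at one sample point.
a, b, t = -2.0, 3.0, 0.3
assert relu(t * a + (1 - t) * b) <= t * relu(a) + (1 - t) * relu(b)
```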

Word Problem Solver

There is a large disparity in access to proper education at the school level. We seek to build a problem-aware online tutoring system for students to help improve the situation. In this project, we have built a complete solution generator that, given an arithmetic word problem at the level of classes 6-8, extracts the relevant information from the question and solves the problem step by step. Currently, we handle only basic speed, time, and distance problems.
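As a rough sketch of the extract-and-solve idea (the project's actual pipeline is not shown in the summary; the function name, regex, and supported units below are assumptions for illustration):

```python
import re

def solve_std(problem: str) -> str:
    """Solve a basic speed/time/distance word problem step by step."""
    # Extract "<number> <unit>" pairs, e.g. "60 km/h" or "2 hours".
    pairs = re.findall(r"(\d+(?:\.\d+)?)\s*(km/h|km|hours?|h)\b", problem)
    known = {}
    for num, unit in pairs:
        if unit == "km/h":
            known["speed"] = float(num)
        elif unit == "km":
            known["distance"] = float(num)
        else:
            known["time"] = float(num)

    # Apply distance = speed * time, solving for the missing quantity.
    if "distance" not in known:
        d = known["speed"] * known["time"]
        return (f"Step 1: speed = {known['speed']} km/h, time = {known['time']} h\n"
                f"Step 2: distance = speed * time = {d} km")
    if "time" not in known:
        t = known["distance"] / known["speed"]
        return (f"Step 1: distance = {known['distance']} km, speed = {known['speed']} km/h\n"
                f"Step 2: time = distance / speed = {t} h")
    s = known["distance"] / known["time"]
    return (f"Step 1: distance = {known['distance']} km, time = {known['time']} h\n"
            f"Step 2: speed = distance / time = {s} km/h")

print(solve_std("A train travels at 60 km/h for 2 hours. How far does it go?"))
```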