Trained as an aerospace engineer at one of the best institutes in India, the Indian Institute of Technology, Madras.
Graduated with a minor in industrial engineering and a thesis on "coordinated control of satellites".
2018-2019
Transition to the world of coding
Started Master's at Stanford and decided to explore a bit.
First machine learning project presented at
Baylan 2019.
Decided to take more courses, ended up doing everything AI related and loving it.
Summer 2019
Upgrade to research
Joined
Stanford machine learning group and worked under
Prof. Andrew Ng. Took a deep dive into semi-supervised learning:
minimizing hand labeling. Developed methods that reach 50% accuracy using only 10% of the labels on benchmark datasets.
2019-2020
Phase Two Expansion
Mastered other domains: big data, parallel computing
To be
continued
...
Experience
I'm smart and quick to adapt
Built a comprehensive database of solar energy production from
satellite imagery. This will be integrated with energy sources like wind
to create a unified database of global energy production.
Explored semi-supervised learning methods to handle large unlabeled
datasets with minimal hand labeling. By utilizing unlabeled data, achieved 250% higher accuracy
than purely supervised methods on benchmark datasets.
Received an international pre-placement offer for a junior analyst position
Performed market-entry research on the healthcare industry in a South-East Asian country for a Japan-based MNC.
Assisted in long-term strategy development and performed competitive analysis for the artificial intelligence division
Modeled Particle Image Velocimetry (PIV) and Background Oriented Schlieren (BOS) techniques in MATLAB
Performed analysis on missile store separation drop tests for military aircraft
Analyzed the flow field around weapon bay cavities and studied the impact of a new design surface
Projects
All the projects I've enjoyed doing
Animated videos from image
Generative models, Machine learning
Autonomous delivery bot
Robotic software development
Texting Application
Software development, Socket Programming
Contact Me
Would be happy to connect and discuss opportunities
You could also email me at: neethur@stanford.edu
Video generation from single image
Generative models, Machine learning
Input image
Stylized image
Animated video
Date: March-May, 2019
Course: Convolutional Neural Networks for Visual Recognition
Starting from a single image of a person, we created a 3-second animated video in which the main subject moves,
similar to the 'Live Photo' feature on iPhones.
We trained our models on 100 'Cartwheel' videos from the HMDB database.
The model consists of a generative block and a style block.
GANs and convolutional LSTMs were used for the generative block, and
fast neural style transfer for stylization.
Details:
Generative block:
A GAN has two neural networks: a generator
and a discriminator. The generator takes an input and generates a sample output;
the discriminator tries to differentiate the generated
outputs from real data. Together they train each other.
The generator architecture
has 3D downsampling convolutions with batchnorm, followed by
transposed 3D convolutions.
The discriminator consists of 3D convolutional layers with batchnorm
and LeakyReLU between them.
The output of the
generator is fed back into it 31 times to get the 32 frames
of the video.
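The architecture above can be sketched in PyTorch. This is a minimal illustration, not the original code: the channel counts, kernel sizes, and input resolution are assumptions.

```python
import torch
import torch.nn as nn

# Sketch of the generator: 3D downsampling convolutions with batchnorm,
# followed by transposed 3D convolutions back up to the input size.
# Channel counts and kernel sizes are illustrative assumptions.
class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=4, stride=2, padding=1),  # downsample
            nn.BatchNorm3d(32),
            nn.ReLU(),
            nn.Conv3d(32, 64, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm3d(64),
            nn.ReLU(),
        )
        self.up = nn.Sequential(
            nn.ConvTranspose3d(64, 32, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm3d(32),
            nn.ReLU(),
            nn.ConvTranspose3d(32, 3, kernel_size=4, stride=2, padding=1),
            nn.Sigmoid(),  # final sigmoid; outputs are scaled by 255 for pixels
        )

    def forward(self, x):
        return self.up(self.down(x))

# Sketch of the discriminator: 3D convolutions with batchnorm and
# LeakyReLU between them, ending in a single real/fake score.
class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm3d(32),
            nn.LeakyReLU(0.2),
            nn.Conv3d(32, 64, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm3d(64),
            nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
            nn.Linear(64, 1),
        )

    def forward(self, x):
        return self.net(x)

# A batch of two clips: (batch, channels, frames, height, width).
clip = torch.rand(2, 3, 32, 64, 64)
gen, disc = Generator(), Discriminator()
fake = gen(clip)      # same shape as the input clip
score = disc(fake)    # one real/fake score per clip
```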
The convolutional LSTM model is similar to the generator, with an
extra LSTM layer after the last downsampling.
The final layer is a sigmoid whose output is multiplied
by 255 to get the image.
The model is used as a next-frame predictor. During training it takes in 32 frames and outputs 32 frames of the same size.
At test time, a single image is fed in and the output is fed back into the model to get each next frame.
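The test-time rollout described above can be sketched as follows. The one-layer predictor here is a placeholder assumption standing in for the full generator; only the feedback loop itself matches the description.

```python
import torch
import torch.nn as nn

# Placeholder next-frame predictor (a single conv layer), standing in for
# the real generator/conv-LSTM model. Sizes are illustrative assumptions.
predictor = nn.Sequential(
    nn.Conv2d(3, 3, kernel_size=3, padding=1),
    nn.Sigmoid(),  # sigmoid output, later scaled by 255 for pixel values
)

def rollout(first_frame, num_frames=32):
    # Start from a single frame and feed each output back in
    # 31 times to accumulate the 32 frames of the video.
    frames = [first_frame]
    frame = first_frame
    for _ in range(num_frames - 1):
        frame = predictor(frame)
        frames.append(frame)
    # Stack along a new time axis: (batch, channels, frames, H, W).
    return torch.stack(frames, dim=2)

start = torch.rand(1, 3, 64, 64)  # single input image
video = rollout(start)            # 32-frame clip
```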
Style block
Takes two images, a content image and a style image, and blends them together so the output image looks like the content image,
but "painted" in the style of the style image.
Classic optimization-based style transfer takes around half an hour to generate a stylized image for a content image.
Instead, a transfer network is trained using perceptual loss functions.
I used a VGG-16 model to train the transfer network.
Instance normalization was replaced with conditional
instance normalization to train the transfer network
with multiple styles at the same time, instead of training
a different network for each style.
Training takes a couple of hours and stylized image generation takes seconds.
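Conditional instance normalization can be sketched as below: one transfer network learns several styles by keeping a separate scale/shift pair per style and selecting it by style index. The layer sizes and style count are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CondInstanceNorm(nn.Module):
    """Instance norm with per-style affine parameters, so a single
    transfer network can be trained on multiple styles at once."""

    def __init__(self, channels, num_styles):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels, affine=False)
        # One (gamma, beta) row per style.
        self.gamma = nn.Parameter(torch.ones(num_styles, channels))
        self.beta = nn.Parameter(torch.zeros(num_styles, channels))

    def forward(self, x, style_idx):
        x = self.norm(x)
        # Select the scale/shift for each item's style and broadcast
        # over the spatial dimensions.
        g = self.gamma[style_idx].view(-1, x.size(1), 1, 1)
        b = self.beta[style_idx].view(-1, x.size(1), 1, 1)
        return g * x + b

cin = CondInstanceNorm(channels=16, num_styles=4)
feats = torch.rand(2, 16, 32, 32)
# A different style index per batch item.
styled = cin(feats, style_idx=torch.tensor([0, 3]))
```

Switching styles at inference then only means passing a different index, rather than loading a different network.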