Variational autoencoders and GANs have been two of the most interesting developments in deep learning and machine learning in recent years.
Yann LeCun, a deep learning pioneer, has said that the most important development in recent years has been adversarial training, referring to GANs.
GAN stands for generative adversarial network, in which two neural networks compete with each other.
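To make the idea concrete, here is a minimal sketch of the adversarial setup. It assumes PyTorch purely for illustration (not necessarily what the course uses) and fits a toy 2D Gaussian instead of a real dataset: the discriminator learns to separate real samples from fakes, and the generator learns to fool it.

```python
# Minimal GAN sketch (illustrative only; assumes PyTorch and toy 2D data).
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 2

# Generator: noise -> fake sample. Discriminator: sample -> probability it is real.
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(G.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, data_dim) * 0.5 + 2.0   # stand-in for real data
    fake = G(torch.randn(64, latent_dim))          # generator's samples

    # Discriminator step: push real toward 1, fake toward 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: try to make the discriminator call the fakes real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

After enough steps, samples from the generator start to resemble the "real" distribution; that two-player back-and-forth is the competition referred to above.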
What is unsupervised learning?
Unsupervised learning means we’re not trying to map input data to targets, we’re just trying to learn the structure of that input data.
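As a tiny illustration of what that means in code, here is a sketch of a plain autoencoder, again assuming PyTorch purely for illustration: it is trained to reconstruct its own input, so the "targets" are just the inputs themselves and no labels ever appear.

```python
# Minimal unsupervised example (illustrative only): an autoencoder learns a
# compressed representation of unlabeled data by reconstructing its input.
import torch
import torch.nn as nn

X = torch.randn(500, 20)  # unlabeled data (random here; a real dataset in practice)

encoder = nn.Sequential(nn.Linear(20, 4), nn.ReLU())  # compress to a 4-D code
decoder = nn.Linear(4, 20)                            # map the code back to input space
model = nn.Sequential(encoder, decoder)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(200):
    loss = nn.functional.mse_loss(model(X), X)  # the target is the input itself
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    codes = encoder(X)  # low-dimensional codes capturing the data's structure
```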
Once we’ve learned that structure, we can do some pretty cool things.
One example is generating poetry - we’ve done examples of this in the past.
But poetry is a very specific thing. How about writing in general?
If we can learn the structure of language, we can generate any kind of text. In fact, big companies are investing serious money in research on having machines write the news.
But what if we go back to poetry and take away the words?
Well then we get art, in general.
By learning the structure of art, we can create more art.
How about art as sound?
If we learn the structure of music, we can create new music.
Imagine if the top 40 hits you hear on the radio were songs written by robots rather than humans.
The possibilities are endless!
You might be wondering, "how is this course different from the first unsupervised deep learning course?"
In that first course, we were also trying to learn the structure of data, but for different reasons.
We wanted to learn the structure of data in order to improve supervised training, which we demonstrated was possible.
In this new course, we want to learn the structure of data in order to produce more stuff that resembles the original data.
This by itself is really cool, but we'll also be incorporating ideas from Bayesian Machine Learning, Reinforcement Learning, and Game Theory. That makes it even cooler!
Thanks for reading and I’ll see you in class. =)
NOTES:
All the code for this course can be downloaded from my github:
https://github.com/lazyprogrammer/machine_learning_examples
In the directory: unsupervised_class3
Make sure you always "git pull" so you have the latest version!
HARD PREREQUISITES / KNOWLEDGE YOU ARE ASSUMED TO HAVE:
TIPS (for getting through the course):
USEFUL COURSE ORDERING:
I am a data scientist, big data engineer, and full stack software engineer.
I received my master's degree in computer engineering with a specialization in machine learning and pattern recognition.
My experience includes online advertising and digital media, as both a data scientist (optimizing click and conversion rates) and a big data engineer (building data processing pipelines). Some big data technologies I use frequently are Hadoop, Pig, Hive, MapReduce, and Spark.
I've created deep learning models to predict click-through rates and user behavior, as well as for image and signal processing and text modeling.
My work in recommendation systems has applied Reinforcement Learning and Collaborative Filtering, and we validated the results using A/B testing.
I have taught undergraduate and graduate students in data science, statistics, machine learning, algorithms, calculus, computer graphics, and physics at universities such as Columbia University, NYU, Hunter College, and The New School.
Multiple businesses have benefited from my web programming expertise. I do all the backend (server), frontend (HTML/JS/CSS), and operations/deployment work. Some of the technologies I've used are: Python, Ruby/Rails, PHP, Bootstrap, jQuery (JavaScript), Backbone, and Angular. For storage/databases I've used MySQL, Postgres, Redis, MongoDB, and more.