
In this video, we'll take a look at how you can use TensorFlow to implement the collaborative filtering algorithm. You might be used to thinking of TensorFlow as a tool for building neural networks, and it is; it's a great tool for building neural networks. But it turns out that TensorFlow can also be very helpful for building other types of learning algorithms, like the collaborative filtering algorithm. One of the reasons I like using TensorFlow for tasks like these is that, for many applications, in order to implement gradient descent you need to find the derivatives of the cost function, and TensorFlow can automatically figure out the derivatives of the cost function for you. All you have to do is implement the cost function, and without needing to know any calculus, without needing to take derivatives yourself, with just a few lines of code you can get TensorFlow to compute that derivative term, which can then be used to optimize the cost function. Let's take a look at how all this works.

You might remember this diagram here on the right from course one. This is exactly the diagram we looked at when we talked about optimizing w, as we were working through our first linear regression example. At that time we had set b = 0, so the model was just predicting f(x) = wx, and we wanted to find the value of w that minimizes the cost function J. The way we did that was via a gradient descent update, which looked like this: w gets repeatedly updated as w minus the learning rate alpha times the derivative term, that is, w := w - alpha * dJ/dw. If you are updating b as well, there is an analogous update expression for b. But if you set b = 0, you just forgo the second update and keep on performing this gradient descent update until convergence.

Sometimes computing this derivative or partial derivative term can be difficult, and it turns out that TensorFlow can help with that. Let's see how. I'm going to use a very simple cost function, J = (wx - 1)^2, where wx is our simplified f_w(x) and 1 is the value of y. So this would be the cost function if we had f(x) = wx and y = 1 for the one training example that we have, and if we were not optimizing with respect to b. The gradient descent algorithm will repeat this update until convergence. It turns out that if you implement the cost function J, TensorFlow can automatically compute for you this derivative term and thereby get gradient descent to work. I'm going to set x = 1.0, y = 1.0, and the learning rate alpha to be equal to 0.01, and let's run gradient descent for 30 iterations. In this code, we'll still do for iter in range(iterations), so for 30 iterations; a minimal sketch of what that loop might look like is shown below.
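Here is a minimal sketch of that loop, using tf.Variable and tf.GradientTape, which is TensorFlow's mechanism for recording a computation so it can differentiate it automatically. The starting value w = 3.0 is an assumption chosen for illustration; any initial value would work.

    import tensorflow as tf

    w = tf.Variable(3.0)   # the parameter to optimize (initial value assumed)
    x = 1.0
    y = 1.0
    alpha = 0.01
    iterations = 30

    for iter in range(iterations):
        # Record the steps used to compute the cost J inside a
        # GradientTape, so TensorFlow can differentiate it for us.
        with tf.GradientTape() as tape:
            fwb = w * x
            costJ = (fwb - y) ** 2

        # Ask the tape for the derivative of the cost with respect to w.
        [dJdw] = tape.gradient(costJ, [w])

        # One step of gradient descent. A tf.Variable must be updated
        # with assign_add rather than ordinary Python assignment.
        w.assign_add(-alpha * dJdw)

    print(w.numpy())  # should end up close to 1.0, the minimizer of (w*1 - 1)^2

Notice that nowhere in this code did we take a derivative by hand; the tape.gradient call computes dJ/dw for us, which is the whole point of automatic differentiation.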
You can also use a more powerful optimization algorithm, like the Adam optimizer. The dataset you use in the practice lab is a real dataset comprising actual movies rated by actual people. This is the MovieLens dataset, and it's due to Harper and Konstan. I hope you enjoy running this algorithm on a real dataset of movies and ratings, and seeing for yourself the results that this algorithm can get.

So that's it. That's how you can implement the collaborative filtering algorithm in TensorFlow. If you're wondering why we have to do it this way, why we couldn't just use a Dense layer and then model.compile and model.fit: the reason we couldn't use that old recipe is that the collaborative filtering algorithm and cost function don't neatly fit into the Dense layer or the other standard neural network layer types of TensorFlow. That's why we had to implement it this other way, where we implement the cost function ourselves, but then use TensorFlow's tools for automatic differentiation, also called autodiff, and use TensorFlow's implementation of the Adam optimization algorithm to let it do a lot of the work of optimizing the cost function for us. A sketch of this custom training loop pattern appears at the end of this section. If the model you have is a sequence of dense neural network layers or other types of layers supported by TensorFlow, then the old implementation recipe of model.compile and model.fit works. But even when it isn't, these tools in TensorFlow give you a very effective way to implement other learning algorithms as well.

And so I hope you enjoy playing more with the collaborative filtering exercise in this week's practice lab. If it looks like there's a lot of code and lots of syntax, don't worry about it; you'll have what you need to complete that exercise successfully. In the next video, I'd like to move on to discuss more of the nuances of collaborative filtering, and specifically the question of how you find related items: given one movie, what other movies are similar to it. Let's go on to the next video.
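For reference, here is a minimal, self-contained sketch of the custom training loop pattern described above: a hand-written collaborative filtering cost function optimized with Adam via GradientTape. The tiny problem sizes, the random toy data standing in for the MovieLens matrices, and the names X, W, b, Y, R, lambda_ are illustrative assumptions, not the exact code from the practice lab.

    import tensorflow as tf

    # Toy problem sizes (assumed for illustration).
    num_movies, num_users, num_features = 5, 4, 3
    lambda_ = 1.0  # regularization strength

    # Y holds the ratings; R[i, j] = 1 if user j rated movie i, else 0.
    Y = tf.random.uniform((num_movies, num_users), minval=0, maxval=5)
    R = tf.cast(tf.random.uniform((num_movies, num_users)) > 0.5, tf.float32)

    # Parameters to learn: per-movie features X, per-user parameters W and b.
    X = tf.Variable(tf.random.normal((num_movies, num_features)))
    W = tf.Variable(tf.random.normal((num_users, num_features)))
    b = tf.Variable(tf.random.normal((1, num_users)))

    def cofi_cost(X, W, b, Y, R, lambda_):
        # Squared error over the rated entries only, plus regularization.
        err = (tf.matmul(X, tf.transpose(W)) + b - Y) * R
        return (0.5 * tf.reduce_sum(err ** 2)
                + (lambda_ / 2) * (tf.reduce_sum(X ** 2) + tf.reduce_sum(W ** 2)))

    optimizer = tf.keras.optimizers.Adam(learning_rate=0.1)

    for step in range(200):
        # Record the cost computation so TensorFlow can differentiate it.
        with tf.GradientTape() as tape:
            cost = cofi_cost(X, W, b, Y, R, lambda_)
        # Autodiff gives the gradients; Adam applies the parameter updates.
        grads = tape.gradient(cost, [X, W, b])
        optimizer.apply_gradients(zip(grads, [X, W, b]))

Compared with the plain gradient descent sketch earlier, the only real changes are the optimizer object and the apply_gradients call; the cost function itself is still just ordinary TensorFlow code that we wrote by hand.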
