Complete Playlist of Unsupervised Machine Learning https://www.youtube.com/playlist?list=PLfQLfkzgFi7azUjaXuU0jTqg03kD-ZbUz

Today's recommender systems will sometimes need to pick a handful of items to recommend from a catalog of thousands, millions, tens of millions, or even more items. How do you do this efficiently, computationally? Let's take a look. Here's the neural network we've been using to make predictions about how a user might rate an item. Today, a large movie streaming site may have thousands of movies, a system that is trying to decide what ad to show may have a catalog of millions of ads to choose from, a music streaming site may have tens of millions of songs to choose from, and large online shopping sites can have millions or even tens of millions of products to choose from. When a user shows up on your website, they have some feature vector x_u. But if you need to feed thousands or even millions of items through this neural network in order to compute the inner product and figure out which products to recommend, then having to run neural network inference thousands or millions of times every time a user shows up on your website becomes computationally infeasible. Many large-scale recommender systems are therefore implemented in two steps, called the retrieval and ranking steps. The idea is that during the retrieval step you generate a large list of plausible item candidates that tries to cover a lot of possible things you might recommend to the user. It's okay if the retrieval step includes a lot of items the user is not likely to like; during the ranking step you then fine-tune the list and pick the best items to recommend to the user.

Here's an example. During the retrieval step you might do something like the following: for each of the last 10 movies the user has watched, find the 10 most similar movies. This means, for example, that if a user has watched movie i with vector v_m^(i), you can find the movies k with vectors v_m^(k) that are similar to it. And as you saw in the last video, the movies most similar to a given movie can be precomputed, so you can just pull up the results from a lookup table. This gives you an initial set of somewhat plausible movies to recommend to the user who just showed up on your website. Additionally, you might add candidates based on whatever are the three most-viewed genres of the user. Say the user has watched a lot of romance movies, a lot of comedy movies, and a lot of historical dramas; then you would add to the list of candidates the top 10 movies in each of these three genres. And maybe you would also add the top 20 movies in the country of the user. This retrieval step can be done very quickly, and you may end up with a list of 100 or maybe several hundred plausible movies to recommend to the user. Hopefully this list includes some good options, but it's also okay if it includes some options that the user won't like at all; the goal of the retrieval step is to ensure broad coverage, to have enough movies that there are at least many good ones in there. Finally, you take all the items found during the retrieval step and combine them into a single list, removing duplicates and removing items that the user has already watched or already purchased and that you may not want to recommend to them again.
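A minimal sketch of this retrieval step in Python, assuming the similar-movie, per-genre, and per-country lists have all been precomputed offline; the names retrieve_candidates, similar_movies, top_by_genre, and top_by_country are illustrative, not from any particular library:

```python
# Minimal sketch of the retrieval step. The lookup tables (similar_movies,
# top_by_genre, top_by_country) are assumed to be precomputed offline;
# all names here are illustrative only.

def retrieve_candidates(user_history, user_top_genres, user_country,
                        similar_movies, top_by_genre, top_by_country,
                        already_seen):
    """Return a deduplicated list of plausible candidate movie ids."""
    candidates = []

    # 1. For each of the user's last 10 watched movies,
    #    add the 10 most similar movies (precomputed lookup).
    for movie_id in user_history[-10:]:
        candidates.extend(similar_movies[movie_id][:10])

    # 2. Add the top 10 movies in each of the user's 3 most-viewed genres.
    for genre in user_top_genres[:3]:
        candidates.extend(top_by_genre[genre][:10])

    # 3. Add the top 20 movies in the user's country.
    candidates.extend(top_by_country[user_country][:20])

    # 4. Combine into one list: remove duplicates and items the user has
    #    already watched or purchased.
    seen = set(already_seen)
    deduped = []
    for movie_id in candidates:
        if movie_id not in seen:
            seen.add(movie_id)
            deduped.append(movie_id)
    return deduped
```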
The second step is then the ranking step. During the ranking step you take the list retrieved during the retrieval step, which may be just hundreds of possible movies, and rank them using the learned model. What that means is that you feed the user feature vector x_u and the movie feature vector x_m into this neural network and, for each of the user-movie pairs, compute the predicted rating. Based on this, you can rank these 100-plus movies by which ones the user is most likely to give a high rating to, and then display the ranked list of items to the user, starting with what you predict they will rate highest. One additional optimization: if you have computed the vector v_m for all the movies in advance, then all you need to do is run inference on the user part of the neural network a single time to compute v_u, and then take the inner product between that v_u, just computed for the user on your website right now, and v_m for each of the movies you retrieved during the retrieval step. This computation can be done relatively quickly.
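Here is a similarly hedged sketch of the ranking step, assuming the item vectors v_m are stored in a precomputed dictionary and that user_network is the user half of the trained two-tower model; these names are assumptions for illustration:

```python
import numpy as np

def rank_candidates(x_u, candidate_ids, item_vectors, user_network):
    """Score the retrieved candidates and return them sorted by predicted rating."""
    # Run the user tower only once per request to get v_u.
    v_u = user_network(x_u)                                   # shape (d,)

    # Look up the precomputed v_m for each retrieved candidate.
    V_m = np.stack([item_vectors[m] for m in candidate_ids])  # shape (n, d)

    # The predicted score for each candidate is the inner product v_u . v_m.
    scores = V_m @ v_u                                        # shape (n,)

    # Return (movie_id, score) pairs from highest to lowest predicted score.
    order = np.argsort(-scores)
    return [(candidate_ids[i], float(scores[i])) for i in order]
```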

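To see how the two steps fit together, here is a toy end-to-end run of the two sketches above, using randomly generated placeholder lookup tables and a stand-in user tower; in a real system these would come from your catalog and your trained model rather than from random numbers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy placeholders standing in for precomputed tables and the trained model.
item_vectors   = {m: rng.random(32) for m in range(100)}                  # fake v_m
similar_movies = {m: list(rng.choice(100, size=10, replace=False)) for m in range(100)}
top_by_genre   = {"romance": list(range(10)), "comedy": list(range(10, 20)),
                  "historical drama": list(range(20, 30))}
top_by_country = {"US": list(range(30, 50))}
user_network   = lambda x_u: rng.random(32)                               # fake user tower

# Retrieval: build a broad candidate list quickly from precomputed lookups.
candidates = retrieve_candidates(
    user_history=[1, 2, 3],
    user_top_genres=["romance", "comedy", "historical drama"],
    user_country="US",
    similar_movies=similar_movies,
    top_by_genre=top_by_genre,
    top_by_country=top_by_country,
    already_seen=[1, 2, 3])

# Ranking: run the user tower once, score candidates by inner product, sort.
ranked = rank_candidates(x_u=rng.random(8), candidate_ids=candidates,
                         item_vectors=item_vectors, user_network=user_network)
print(ranked[:10])   # the ten highest-scoring recommendations for this user
```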