I prepared a “W11_VectorEmbeddings.ipynb” file containing this week’s content. The first two “parts” are presented in the videos (but note that there is additional content that is only available in the Jupyter Notebook).
This week I had planned to speak a little about Language Models. I felt, however, that everything I had to say had already been said, and what was “left” would only be a lot of math that I didn’t think would be useful to you. So I changed my mind and instead created this content, which explores “Distributional Semantics” a little further. The goals are the following:
Because this class is so “unorthodox”, almost none of it will appear in the exam. There are only two things from this class that I want you to know (and that may appear in the exam):
I know I said there would be exercises for this class. Unfortunately, I was not able to come up with good questions. Instead, my plan for this class is to go through the notebook together with you.
(Also… do note that the notebook is slightly different from the videos, because I made changes to it after recording them.)
That is all for this course. I hope you found it useful. Thank you for your participation. Next week there will only be a Q&A and feedback session, and the week after that we have the exam.
The techniques we explored this week are quite new, and still relatively “hot” in the NLP literature. If you are interested in them, you should take a look at the Stanford course on Natural Language Processing with Deep Learning, which discusses them in more detail.
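If you have not yet opened the notebook, here is a minimal sketch of the core idea behind distributional semantics: words are represented as vectors, words that occur in similar contexts end up with similar vectors, and we compare vectors with cosine similarity. (This snippet is not taken from the notebook; the toy vectors and the `cosine_similarity` helper are made up purely for illustration.)

```python
import numpy as np

# Toy 4-dimensional "embeddings" (made-up values, for illustration only).
vectors = {
    "cat":   np.array([0.80, 0.15, 0.05, 0.05]),
    "dog":   np.array([0.75, 0.20, 0.05, 0.05]),
    "piano": np.array([0.05, 0.10, 0.80, 0.05]),
}

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: close to 1.0 means 'similar'."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Words from similar contexts ("cat"/"dog") score higher than unrelated ones.
print(cosine_similarity(vectors["cat"], vectors["dog"]))    # high similarity
print(cosine_similarity(vectors["cat"], vectors["piano"]))  # low similarity
```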