About the Natural Language Processing with Probabilistic Models course
In the second course of the Natural Language Processing specialization, you will: a) Create a simple autocorrect algorithm using minimum edit distance and dynamic programming; b) Apply the Viterbi algorithm to part-of-speech (POS) tagging, an important task in computational linguistics; c) Write a better autocomplete algorithm using an N-gram language model; and d) Write your own Word2Vec model that uses a neural network to compute word embeddings using a continuous bag-of-words (CBOW) architecture.
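To give a flavor of the first project, here is a minimal sketch of minimum edit distance computed with dynamic programming. The cost values (insert = 1, delete = 1, replace = 2) are an illustrative assumption, not something prescribed by this course description.

```python
def min_edit_distance(source, target, ins_cost=1, del_cost=1, rep_cost=2):
    """Minimum cost of editing `source` into `target`.

    Costs are assumptions for illustration: insert=1, delete=1, replace=2.
    """
    m, n = len(source), len(target)
    # D[i][j] = cost of editing source[:i] into target[:j]
    D = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):          # deleting all of source[:i]
        D[i][0] = D[i - 1][0] + del_cost
    for j in range(1, n + 1):          # inserting all of target[:j]
        D[0][j] = D[0][j - 1] + ins_cost
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            # replacement is free when the characters already match
            r = 0 if source[i - 1] == target[j - 1] else rep_cost
            D[i][j] = min(D[i - 1][j] + del_cost,    # delete from source
                          D[i][j - 1] + ins_cost,    # insert into source
                          D[i - 1][j - 1] + r)       # replace (or match)
    return D[m][n]
```

A simple autocorrect system would compute this distance between a misspelled word and each candidate in a vocabulary, then suggest the lowest-cost candidates.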
By the end of this specialization, you will have developed NLP applications that perform question answering and sentiment analysis, created tools for language translation and text summarization, and even built a chatbot! This specialization is designed and taught by two experts in NLP, machine learning, and deep learning. Younes Bensouda Mourri is an instructor in artificial intelligence at Stanford University who also helped create the Deep Learning specialization. Lukasz Kaiser is a staff research scientist at Google Brain and a co-author of TensorFlow, the Tensor2Tensor and Trax libraries, and the Transformer paper.