Understanding how neural networks learn through activation functions, forward propagation and backpropagation.

In Part 1, you learned what a neural network is made of. You saw neurons, weights, biases and layers working together to process information. That structure is the blueprint.
Now comes the important question.
How does this system actually learn?
A neural network does not wake up knowing how to recognize spam, faces or language. At the beginning, every prediction is just a guess. What turns those guesses into accurate predictions is a learning process built on feedback, error correction and repetition.
In this part, we will explore that learning loop. You will see how activation functions shape decisions, how data moves forward through the network and how backpropagation allows the system to fix its own mistakes. Once you understand this cycle, neural networks stop feeling mysterious and start feeling logical.
Weights and biases compute a raw value from the data. The activation function decides what to do with that value. This component is what lets the network capture complex patterns.
Without this step, the network cannot learn nuance: all the layers collapse into a single linear transformation. The activation function lets the system model curves and edges in the data instead of just straight lines.
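The collapse into a single linear step is easy to verify directly. Here is a minimal sketch in plain Python (the weights are arbitrary, illustrative numbers): composing two linear layers yields just another linear function, and inserting a ReLU between them breaks that equivalence.

```python
import math

# Two "layers" that are purely linear: f(x) = w*x + b.
# All numbers here are arbitrary, chosen only for illustration.
def layer1(x): return 2.0 * x + 1.0
def layer2(x): return -3.0 * x + 0.5

# Stacking them is still one linear function:
# layer2(layer1(x)) = -3*(2x + 1) + 0.5 = -6x - 2.5
def collapsed(x): return -6.0 * x - 2.5

for x in (-1.0, 0.0, 2.5):
    assert math.isclose(layer2(layer1(x)), collapsed(x))

# Insert a ReLU between the layers and the collapse no longer holds:
def relu(z): return max(0.0, z)
def with_activation(x): return layer2(relu(layer1(x)))

# For x = -1.0 the ReLU clips the negative pre-activation to zero,
# so with_activation(-1.0) = 0.5 while collapsed(-1.0) = 3.5.
```

This is why, in practice, every hidden layer is followed by a non-linear activation.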
Common Activation Types
You will usually see one of three standard options inside a network: ReLU, which passes positive values through and clips negatives to zero (the usual default for hidden layers); sigmoid, which squashes any value into the 0-to-1 range; and tanh, which squashes values into the -1-to-1 range, centered at zero.
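Assuming the standard trio of ReLU, sigmoid, and tanh, each can be sketched in a few lines of plain Python:

```python
import math

def relu(z):
    """Pass positive values through, clip negatives to zero."""
    return max(0.0, z)

def sigmoid(z):
    """Squash any real number into the (0, 1) range."""
    return 1.0 / (1.0 + math.exp(-z))

def tanh(z):
    """Squash any real number into the (-1, 1) range, centered at 0."""
    return math.tanh(z)

print(relu(-2.0), relu(3.0))   # prints: 0.0 3.0
print(sigmoid(0.0))            # prints: 0.5
print(tanh(0.0))               # prints: 0.0
```

ReLU is popular because it is cheap to compute; sigmoid and tanh are useful when the output needs a bounded range, such as a probability.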
Forward Propagation is simply the data flowing from start to finish.
The Process in Three Steps
Input: The input layer receives the raw feature values.
Transform: Each hidden layer computes a weighted sum plus a bias for its neurons, then applies the activation function.
Output: The final layer produces the network's prediction.
The Result:
For sorting categories (like cats vs. dogs), a Softmax function turns numbers into probabilities. For scoring (like predicting house prices), a single neuron outputs a raw number.
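The Softmax step can be sketched as follows (plain Python, with hypothetical class scores):

```python
import math

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    # Subtract the max score first for numerical stability.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores for three classes, e.g. cat / dog / bird:
probs = softmax([2.0, 1.0, 0.1])
print(probs)  # three values in (0, 1) that sum to 1.0
```

The highest raw score always gets the highest probability; softmax only rescales the scores so they can be read as a probability distribution.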
Example: Spam Filter
To spot spam, you feed the network features such as word counts and link counts. The layers crunch the numbers, and the final output gives you a probability: 0.87 spam. You pick the class with the highest probability.
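A toy version of that forward pass for a single output neuron, with made-up feature values and weights:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical inputs: count of suspicious words, number of links.
features = [4.0, 2.0]

# Hypothetical learned weights and bias for one output neuron.
weights = [0.6, 0.4]
bias = -1.2

# Forward pass: weighted sum plus bias, then sigmoid for a probability.
z = sum(w * x for w, x in zip(weights, features)) + bias
p_spam = sigmoid(z)

print(f"P(spam) = {p_spam:.2f}")  # prints: P(spam) = 0.88
label = "spam" if p_spam >= 0.5 else "not spam"
```

A real filter would have many more features and hidden layers, but the arithmetic at each neuron is exactly this.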

“Learning” in AI is actually just error correction. The goal is to adjust the weights and biases until the system’s prediction matches the correct answer.
The network runs a continuous five-step loop to correct itself:
Forward Pass: The network makes a prediction from the current weights.
Measure Error: A loss function compares the prediction to the correct answer.
Backpropagate: The error is traced backward through the layers to find how much each weight contributed.
Update: Each weight and bias is nudged in the direction that reduces the error.
Repeat: The loop runs again on the next examples, gradually improving accuracy.
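The loop can be shown end to end on a toy problem: a single weight learning the rule y = 2x by gradient descent. Everything here is illustrative, but the steps of the cycle are all present.

```python
# Toy version of the learning loop: one weight learning y = 2x.
w = 0.0                       # start with a bad guess
lr = 0.1                      # learning rate: size of each nudge
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

for epoch in range(100):                  # repeat the loop
    for x, y_true in data:
        y_pred = w * x                    # forward pass: predict
        error = y_pred - y_true          # measure the error
        grad = 2 * error * x              # backpropagate: d(loss)/dw
        w -= lr * grad                    # update the weight

print(round(w, 3))  # close to 2.0: the rule has been learned
```

With more weights and layers, backpropagation computes one gradient per parameter, but the update step is the same nudge in the error-reducing direction.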

Building a working neural network is a step-by-step process that ensures the model learns effectively and avoids bias.
Collect Data: You gather examples that are clearly labeled. Example: Emails labeled “spam” or “not spam.”
Split Data: You divide the data into three separate groups: the Training Set, the Validation Set, and the Test Set.
Normalize Features: You scale the numbers in the data so that the training process is stable and faster.
Train the Model: You run many passes over the Training Set to minimize the loss (error).
Validate: You check the model’s performance on the Validation Set. This step is crucial for catching overfitting, when the model memorizes the training data but fails on new data.
Test: You report the accuracy using the final Test Set. This provides an unbiased estimate of how the model will perform in the real world.
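The split-and-normalize steps can be sketched in plain Python (the dataset and split ratios here are hypothetical; the key detail is that normalization statistics come from the training set only):

```python
import random

# Hypothetical dataset: (feature_vector, label) pairs.
data = [([float(i), float(i % 7)], i % 2) for i in range(100)]

# Shuffle, then split 70 / 15 / 15 into train / validation / test.
random.seed(0)
random.shuffle(data)
train = data[:70]
val = data[70:85]
test = data[85:]

# Normalize features using statistics from the TRAINING set only,
# so no information leaks in from validation or test data.
n_features = len(train[0][0])
means = [sum(x[f] for x, _ in train) / len(train) for f in range(n_features)]
stds = [
    (sum((x[f] - means[f]) ** 2 for x, _ in train) / len(train)) ** 0.5
    for f in range(n_features)
]

def normalize(x):
    return [(x[f] - means[f]) / stds[f] for f in range(n_features)]

train_norm = [(normalize(x), y) for x, y in train]
```

The same `normalize` function (with the same training-set means and stds) is then applied to the validation and test sets before evaluation.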
Real-Life Application:
When a support team wants quicker ticket triaging, they label past tickets by category. A small network learns from those labels and then routes new tickets to the right queue instantly, with the workflow above guiding the whole setup.
Neural networks do not memorize raw data. They learn internal features that allow them to make smart decisions.
These features are hierarchical. Simple features combine to create a complex understanding.
The Layered Vision
The learning process is broken down by the depth of the layers:
Early Layers: Detect simple features, such as edges and colors in an image.
Middle Layers: Combine those into shapes, textures and parts.
Deep Layers: Assemble the parts into whole objects or abstract concepts.
This hierarchical design is why deep networks (those with many layers) can master complex tasks like computer vision and language translation.
You now understand how a neural network learns. You have seen how data moves forward through the layers, how predictions are made and how mistakes are measured and corrected through backpropagation. This learning loop is what turns a static structure into a system that improves with experience.
Once training is complete, the network is no longer guessing. It has captured patterns from the data and can apply them to new situations with speed and consistency. At this stage, the core mechanics are in place, but one important question remains.
What kind of neural network should be used for different problems, and how do we control how these systems learn in real-world settings? Part 3 will answer that question. We will look at common neural network types, the tasks they are best suited for and the training controls that shape their behavior.
This final piece connects the learning process to the real AI systems you interact with every day.