Perceptron

A perceptron is the simplest possible neural network: a single artificial neuron. It is the basic building block of all modern deep learning.

The Big Idea

A perceptron takes multiple inputs, weighs how important each one is, and makes a yes/no decision.

Think of it like deciding whether to go to a party:

| Factor | Your Input | Weight (Importance) |
| --- | --- | --- |
| Friends going? | Yes (1) | Very important (0.7) |
| Good music? | No (0) | Somewhat important (0.4) |
| Close by? | Yes (1) | Less important (0.2) |

The perceptron multiplies each input by its weight, adds them up, and decides: go or don’t go.

How It Works

```mermaid
graph LR
    X1[Input 1] -->|weight 1| S((Sum))
    X2[Input 2] -->|weight 2| S
    X3[Input 3] -->|weight 3| S
    S --> A{Threshold}
    A -->|Above| Y1[Yes / 1]
    A -->|Below| Y0[No / 0]
```

Step by Step

  1. Multiply each input by its weight
  2. Add all the weighted inputs together
  3. Add the bias (a threshold adjustment)
  4. Decide: if the sum is above 0, output 1 (yes); otherwise output 0 (no)

Party Example

(Friends going × 0.7) + (Good music × 0.4) + (Close by × 0.2) + bias
= (1 × 0.7) + (0 × 0.4) + (1 × 0.2) + (-0.5)
= 0.7 + 0 + 0.2 - 0.5
= 0.4

0.4 > 0 → Output: 1 (Go to the party!)
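
Here is that same forward pass as a minimal Python sketch; the function name `perceptron` and the party numbers are purely illustrative:

```python
def perceptron(inputs, weights, bias):
    """Weighted sum plus bias, then a hard 0/1 threshold at zero."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# Party example: friends going (1), good music (0), close by (1)
inputs = [1, 0, 1]
weights = [0.7, 0.4, 0.2]
bias = -0.5

print(perceptron(inputs, weights, bias))  # 1 -> go to the party
```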

The Math (Simple Version)

$$
\text{output} =
\begin{cases}
1 & \text{if } \sum(\text{inputs} \times \text{weights}) + \text{bias} > 0 \\
0 & \text{otherwise}
\end{cases}
$$

Or written out:

$$
\text{output} =
\begin{cases}
1 & \text{if } (x_1 \cdot w_1 + x_2 \cdot w_2 + \dots + \text{bias}) > 0 \\
0 & \text{otherwise}
\end{cases}
$$

What’s the Bias?

The bias shifts the decision boundary. It’s like setting your baseline mood:

  • Positive bias: More likely to say yes (optimistic)
  • Negative bias: More likely to say no (cautious)

Without bias, the perceptron can only draw decision lines through the origin.
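
A small sketch of how the bias alone can flip the decision, using the same illustrative weights as the party example:

```python
def perceptron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

inputs, weights = [1, 0, 1], [0.7, 0.4, 0.2]  # weighted sum of evidence = 0.9

print(perceptron(inputs, weights, bias=-0.5))  # 1: a mild threshold lets the evidence win
print(perceptron(inputs, weights, bias=-1.0))  # 0: a more cautious baseline says no to the same evidence
```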

How It Learns

A perceptron learns by adjusting its weights based on mistakes:

  1. Make a prediction
  2. Check if it was right or wrong
  3. If wrong, nudge the weights in the right direction
  4. Repeat with more examples

Learning Rule

```
If prediction was wrong:
    new weight = old weight + (learning rate × error × input)
```

The learning rate controls how big each adjustment is — too big and it overshoots, too small and it learns slowly.
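
Putting the rule together, below is a self-contained sketch of the classic perceptron training loop, shown learning the AND gate from the next section. It assumes the bias is updated by the same rule, treated as a weight whose input is always 1; the function names are illustrative.

```python
def predict(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

def train(data, learning_rate=0.1, epochs=20):
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in data:
            error = target - predict(inputs, weights, bias)  # 0 if right, +1 or -1 if wrong
            if error != 0:
                # Nudge each weight toward the correct answer
                weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
                # The bias gets the same nudge, with an implicit input of 1
                bias += learning_rate * error
    return weights, bias

# AND gate: output 1 only when both inputs are 1
and_data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train(and_data)

for inputs, target in and_data:
    print(inputs, predict(inputs, weights, bias), "expected", target)
```

Because the AND gate is linearly separable, this loop settles on a working set of weights after only a few passes over the four examples.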

What Can a Perceptron Do?

A single perceptron can solve problems where you can draw a straight line to separate the answers:

✅ Can Solve: AND Gate

| Input A | Input B | Output |
| --- | --- | --- |
| 0 | 0 | 0 |
| 0 | 1 | 0 |
| 1 | 0 | 0 |
| 1 | 1 | 1 |

You can draw a single straight line that separates the three 0 outputs from the one 1 output.

✅ Can Solve: OR Gate

| Input A | Input B | Output |
| --- | --- | --- |
| 0 | 0 | 0 |
| 0 | 1 | 1 |
| 1 | 0 | 1 |
| 1 | 1 | 1 |

❌ Cannot Solve: XOR Gate

| Input A | Input B | Output |
| --- | --- | --- |
| 0 | 0 | 0 |
| 0 | 1 | 1 |
| 1 | 0 | 1 |
| 1 | 1 | 0 |

No single straight line can separate these — you need multiple perceptrons (a neural network).
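
One way to see this concretely is a brute-force check: the sketch below scans a grid of candidate weights and biases and counts how many settings reproduce each truth table. Many settings solve AND; none in the grid solve XOR. The grid and helper names are just for illustration, and the deeper reason is geometric: no weights at all can separate the XOR outputs with one line.

```python
# Brute-force sketch: try many (w1, w2, bias) settings and see which gates they can solve.
def predict(a, b, w1, w2, bias):
    return 1 if a * w1 + b * w2 + bias > 0 else 0

def count_solutions(truth_table, grid):
    """How many (w1, w2, bias) combinations reproduce the whole truth table?"""
    hits = 0
    for w1 in grid:
        for w2 in grid:
            for bias in grid:
                if all(predict(a, b, w1, w2, bias) == out for (a, b), out in truth_table):
                    hits += 1
    return hits

grid = [x / 10 for x in range(-10, 11)]  # -1.0 to 1.0 in steps of 0.1
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

print("settings that solve AND:", count_solutions(AND, grid))  # many
print("settings that solve XOR:", count_solutions(XOR, grid))  # zero
```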

Inspired by Biology

The perceptron was inspired by how real neurons work:

| Biological Neuron | Perceptron |
| --- | --- |
| Dendrites receive signals | Inputs |
| Synapses have different strengths | Weights |
| Cell body sums signals | Weighted sum |
| Fires if threshold reached | Activation function |
| Axon sends output | Output |

Historical Significance

  • 1958: Frank Rosenblatt invented the perceptron
  • 1969: Minsky & Papert showed its limitations (XOR problem)
  • This contributed to the first “AI winter”, a period of reduced funding and interest in neural network research
  • 1980s: Multi-layer perceptrons (neural networks) solved the XOR problem and revived the field

From Perceptron to Neural Networks

Modern neural networks are just many perceptrons connected together:

  • Single perceptron → Simple yes/no decisions
  • Multi-layer perceptron (MLP) → Can learn complex patterns
  • Deep neural networks → Many layers; can approximate almost any pattern, given enough data

The key insight: stack enough simple decision-makers together, and you can solve incredibly complex problems.
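
As one concrete illustration of this stacking, three perceptrons are enough to compute XOR, the very function a single perceptron cannot. The weights below are hand-wired for clarity, not learned:

```python
# Hand-wired example: three stacked perceptrons computing XOR.
def step(total):
    return 1 if total > 0 else 0

def xor(a, b):
    h_or = step(1.0 * a + 1.0 * b - 0.5)      # fires if a OR b
    h_nand = step(-1.0 * a - 1.0 * b + 1.5)   # fires unless both a AND b
    return step(1.0 * h_or + 1.0 * h_nand - 1.5)  # fires if both hidden neurons fire

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))  # 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0
```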

Key Takeaways

  1. A perceptron is a single artificial neuron
  2. It multiplies inputs by weights, sums them, and makes a binary decision
  3. It learns by adjusting weights when it makes mistakes
  4. One perceptron can only solve “linearly separable” problems
  5. Neural networks are just many perceptrons working together
