Version: v1.6.0 Engine (Bundle v2.0.0)
Last Updated: 2026-04-15
Status: Maintained
Goal: Create and run your first GNN model in 15 minutes, no prior Active Inference knowledge required.
A simple navigation agent that learns to find a goal location in a 2x2 grid world.
[Start] [ ]
[ ] [Goal]
- Basic programming knowledge (any language)
- Python 3.11+ installed
- 15 minutes of focused time
💡 No Active Inference background needed! This tutorial explains concepts as we go.
# Clone the repository
git clone https://github.com/ActiveInferenceInstitute/GeneralizedNotationNotation.git
cd GeneralizedNotationNotation
# Install UV package manager (if not already installed)
curl -LsSf https://astral.sh/uv/install.sh | sh
# Install dependencies using UV (recommended)
uv sync
# Test the installation
python src/main.py --help

# Create a folder for your first model
mkdir my_first_gnn_model
cd my_first_gnn_model

Simple version: Active Inference is a mathematical framework where agents:
- Have beliefs about the world (hidden states)
- Make observations about what they can see
- Take actions to achieve their preferences
- States: What the agent needs to track (e.g., position)
- Observations: What the agent can see (e.g., visual input)
- Actions: What the agent can do (e.g., move)
- Preferences: What the agent wants (e.g., reach goal)
- States: 4 positions (Top-Left, Top-Right, Bottom-Left, Bottom-Right)
- Observations: Current position (can see where it is)
- Actions: 4 movements (Up, Down, Left, Right)
- Preference: Be at the goal (Bottom-Right)
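To make the state and action encoding concrete, here is a small Python sketch of the same 2x2 world. It is purely illustrative (the `move` helper is not part of GNN or the pipeline); it just shows how positions map to indices and how walls work:

```python
# Illustrative encoding of the 2x2 grid world (not generated by the pipeline).
# States: 0=TopLeft, 1=TopRight, 2=BottomLeft, 3=BottomRight (goal)
# Actions: 0=Up, 1=Down, 2=Left, 3=Right

STATES = ["TopLeft", "TopRight", "BottomLeft", "BottomRight"]
ACTIONS = ["Up", "Down", "Left", "Right"]

def move(pos: int, action: int) -> int:
    """Deterministic movement; bumping into a wall leaves the position unchanged."""
    row, col = divmod(pos, 2)          # 2x2 grid: pos = 2*row + col
    if action == 0:                    # Up
        row = max(row - 1, 0)
    elif action == 1:                  # Down
        row = min(row + 1, 1)
    elif action == 2:                  # Left
        col = max(col - 1, 0)
    elif action == 3:                  # Right
        col = min(col + 1, 1)
    return 2 * row + col

# From the start, Right then Down reaches the goal:
path = [0]
for a in (3, 1):                       # Right, Down
    path.append(move(path[-1], a))
print([STATES[p] for p in path])       # ['TopLeft', 'TopRight', 'BottomRight']
```

These index conventions are exactly the ones used by the matrices in the model file below.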
Create a file called grid_agent.gnn:
## GNNVersionAndFlags
GNN v1
## ModelName
Simple Grid Navigation Agent v1.0
## ModelAnnotation
A 2x2 grid navigation agent that learns to reach a goal.
The agent can observe its current position and can move in 4 directions.
Goal is to reach the bottom-right corner.
## StateSpaceBlock
# Hidden State Factor: Agent's position
s_f0[4,1,type=int] # Position (0:TopLeft, 1:TopRight, 2:BottomLeft, 3:BottomRight/Goal)
# Observation: What the agent sees (its current position)
o_m0[4,1,type=int] # Observed position (same as true position)
# Control: Agent's actions
pi_c0[4,type=float] # Policy over movement actions
u_c0[1,type=int] # Chosen action (0:Up, 1:Down, 2:Left, 3:Right)
# Model matrices
A_m0[4,4,type=float] # Likelihood: P(observation | position)
B_f0[4,4,4,type=float] # Transition: P(next_position | current_position, action)
C_m0[4,type=float] # Preferences over observations
D_f0[4,type=float] # Prior beliefs about starting position
# Expected Free Energy and time
G[1,type=float] # Expected Free Energy for action selection
t[1,type=int] # Time step
## Connections
# Prior influences initial state
(D_f0) -> (s_f0)
# Position determines what agent observes
(s_f0) -> (A_m0)
(A_m0) -> (o_m0)
# Position and action determine next position
(s_f0, u_c0) -> (B_f0)
(B_f0) -> s_f0_next
# Preferences and expected outcomes influence action selection
(C_m0, A_m0, B_f0, s_f0) -> G
G -> pi_c0
(pi_c0) -> u_c0
## InitialParameterization
# A_m0: Agent can perfectly observe its position (identity matrix)
A_m0={
((1.0, 0.0, 0.0, 0.0), # If at TopLeft(0), observe TopLeft
(0.0, 1.0, 0.0, 0.0), # If at TopRight(1), observe TopRight
(0.0, 0.0, 1.0, 0.0), # If at BottomLeft(2), observe BottomLeft
(0.0, 0.0, 0.0, 1.0)) # If at BottomRight(3), observe BottomRight
}
# B_f0: Movement transitions [next_pos, current_pos, action]
# Actions: 0:Up, 1:Down, 2:Left, 3:Right
# Moves that would leave the grid keep the agent in place.
B_f0={
# next_position = TopLeft(0)
(((1.0, 0.0, 1.0, 0.0), # From positions 0,1,2,3 with action Up(0)
(0.0, 0.0, 0.0, 0.0), # From positions 0,1,2,3 with action Down(1)
(1.0, 1.0, 0.0, 0.0), # From positions 0,1,2,3 with action Left(2)
(0.0, 0.0, 0.0, 0.0))), # From positions 0,1,2,3 with action Right(3)
# next_position = TopRight(1)
(((0.0, 1.0, 0.0, 1.0), # From positions 0,1,2,3 with action Up(0)
(0.0, 0.0, 0.0, 0.0), # From positions 0,1,2,3 with action Down(1)
(0.0, 0.0, 0.0, 0.0), # From positions 0,1,2,3 with action Left(2)
(1.0, 1.0, 0.0, 0.0))), # From positions 0,1,2,3 with action Right(3)
# next_position = BottomLeft(2)
(((0.0, 0.0, 0.0, 0.0), # From positions 0,1,2,3 with action Up(0)
(1.0, 0.0, 1.0, 0.0), # From positions 0,1,2,3 with action Down(1)
(0.0, 0.0, 1.0, 1.0), # From positions 0,1,2,3 with action Left(2)
(0.0, 0.0, 0.0, 0.0))), # From positions 0,1,2,3 with action Right(3)
# next_position = BottomRight(3) - GOAL
(((0.0, 0.0, 0.0, 0.0), # From positions 0,1,2,3 with action Up(0)
(0.0, 1.0, 0.0, 1.0), # From positions 0,1,2,3 with action Down(1)
(0.0, 0.0, 0.0, 0.0), # From positions 0,1,2,3 with action Left(2)
(0.0, 0.0, 1.0, 1.0))) # From positions 0,1,2,3 with action Right(3)
}
# C_m0: Preferences (higher values = more preferred)
C_m0={(-1.0, -1.0, -1.0, 2.0)} # Strongly prefer goal position (BottomRight)
# D_f0: Start at TopLeft with certainty
D_f0={(1.0, 0.0, 0.0, 0.0)}
## Equations
# Standard Active Inference equations for policy selection:
# G(π) = E_q(o,s|π)[ ln q(s|π) - ln P(s|o,π) ] - E_q(o|π)[ ln C(o) ]
#        (epistemic value: resolve uncertainty)  (pragmatic value: satisfy preferences)
# P(π) = softmax(-G(π))
## Time
Dynamic
DiscreteTime=t
ModelTimeHorizon=5
## ActInfOntologyAnnotation
s_f0=HiddenStatePosition
o_m0=ObservationPosition
pi_c0=PolicyMovement
u_c0=ActionMovement
A_m0=LikelihoodMatrixPosition
B_f0=TransitionMatrixMovement
C_m0=PreferenceVector
D_f0=PriorBelief
G=ExpectedFreeEnergy
t=TimeStep
## Footer
Simple Grid Navigation Agent v1.0
## Signature
Creator: GNN Tutorial
Date: 2024
Status: Tutorial Example
Save this as grid_agent.gnn in your my_first_gnn_model folder.
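Before running the type checker, you can sanity-check the parameterization yourself. This numpy sketch (illustrative only, not part of the GNN toolchain) rebuilds A_m0, B_f0, and D_f0 from the movement rules and verifies the normalization constraints that the checker also enforces:

```python
import numpy as np

# A_m0: identity likelihood -> the agent observes its true position
A = np.eye(4)

# B_f0[next, current, action]; actions 0=Up, 1=Down, 2=Left, 3=Right.
# Columns are one-hot because movement is deterministic; moves into a wall
# keep the agent where it is.
B = np.zeros((4, 4, 4))
B[:, :, 0] = [[1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 0, 0], [0, 0, 0, 0]]  # Up
B[:, :, 1] = [[0, 0, 0, 0], [0, 0, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1]]  # Down
B[:, :, 2] = [[1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 1, 1], [0, 0, 0, 0]]  # Left
B[:, :, 3] = [[0, 0, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 1, 1]]  # Right

D = np.array([1.0, 0.0, 0.0, 0.0])  # certain start at TopLeft

# Every (current, action) column must be a probability distribution
assert np.allclose(A.sum(axis=0), 1.0)
assert np.allclose(B.sum(axis=0), 1.0)
assert np.isclose(D.sum(), 1.0)
print("A_m0, B_f0, D_f0 pass the normalization checks")
```

If an assertion fires after you edit the model, the offending column tells you which (current position, action) pair is mis-specified.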
Check if your model is correct:
# Run the GNN type checker (Step 5)
python src/5_type_checker.py --target-dir my_first_gnn_model/ --verbose
# If successful, you should see:
# ✅ grid_agent.gnn: Valid GNN model
# 📊 Resource estimation: [details]

If you see errors: Check the Common Errors Guide or compare with the template above.
For more information on the type checker, see src/type_checker/AGENTS.md.
Convert your GNN model to executable Python code:
# Generate PyMDP code (Steps 3, 11, 12)
python src/main.py --only-steps "3,11,12" --target-dir my_first_gnn_model/ --output-dir output/my_first_model/ --verbose
# This creates several outputs:
# - output/11_render_output/ (executable code for PyMDP, RxInfer, etc.)
# - output/8_visualization_output/ (model diagrams)
# - output/7_export_output/ (JSON, XML formats)

For more details on code generation, see:
- src/render/AGENTS.md: Code rendering module documentation
- src/execute/AGENTS.md: Execution module documentation
# Navigate to rendered output
cd output/11_render_output/
# Run the PyMDP simulation
python grid_agent_pymdp.py
# You should see the agent's behavior:
# Time 0: Position=TopLeft, Action=Right
# Time 1: Position=TopRight, Action=Down
# Time 2: Position=BottomRight, Action=Stay (GOAL REACHED!)

You've just:
- ✅ Written your first GNN model
- ✅ Validated it with the type checker
- ✅ Generated executable code
- ✅ Run a working Active Inference agent
Your agent:
- Started with belief it's at TopLeft
- Observed its true position
- Planned actions to reach the goal (BottomRight)
- Selected actions based on expected free energy minimization
- Navigated optimally by planning over its generative model (no trial-and-error learning was needed)
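The trajectory above can be reproduced offline with a brute-force preference-seeking rollout over the transition model. This is a hand-rolled sketch, not the code PyMDP generates, and scoring policies by summed preference is a simplification of expected free energy (with deterministic transitions and perfect observation, the epistemic term vanishes, so only the pragmatic term matters):

```python
import itertools
import numpy as np

# Transition tensor B[next, current, action] for the 2x2 grid, written out
# as a (current, action) -> next lookup; actions: 0=Up, 1=Down, 2=Left, 3=Right.
# Moves into a wall keep the agent in place.
B = np.zeros((4, 4, 4))
moves = {
    (0, 0): 0, (0, 1): 2, (0, 2): 0, (0, 3): 1,
    (1, 0): 1, (1, 1): 3, (1, 2): 0, (1, 3): 1,
    (2, 0): 0, (2, 1): 2, (2, 2): 2, (2, 3): 3,
    (3, 0): 1, (3, 1): 3, (3, 2): 2, (3, 3): 3,
}
for (cur, act), nxt in moves.items():
    B[nxt, cur, act] = 1.0

C = np.array([-1.0, -1.0, -1.0, 2.0])   # preferences over positions
STATES = ["TopLeft", "TopRight", "BottomLeft", "BottomRight"]

def rollout(policy, start=0):
    """Follow a sequence of actions through the deterministic transitions."""
    pos, path = start, [start]
    for a in policy:
        pos = int(B[:, pos, a].argmax())  # one-hot column -> next position
        path.append(pos)
    return path

# Score every 2-step policy by summed preference over visited states
best = max(itertools.product(range(4), repeat=2),
           key=lambda pi: C[rollout(pi)[1:]].sum())
print([STATES[p] for p in rollout(best)])  # ends at the BottomRight goal
```

Both optimal two-step routes (Right then Down, or Down then Right) score the same, so either may be selected; each ends at the goal.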
- Change the goal: Modify C_m0 to prefer TopRight instead
- Add uncertainty: Make observations noisy by modifying A_m0
- Bigger world: Extend to a 3x3 grid (requires updating all matrices)
- Understand the math: Read Active Inference basics
- Try examples: Explore more complex models
- Different domains: Navigation → Perception → Decision making
- Advanced features: Multi-agent, learning, hierarchical models
- Pipeline architecture: See src/AGENTS.md for complete module documentation
- Pipeline safety: Read src/README.md for architecture patterns
- Start with the template: Use templates/basic_gnn_template.md
- Model your domain: What states, observations, and actions make sense?
- Get help: Check the FAQ and community discussions
- Process with the pipeline: Use src/main.py to run the complete workflow
| Concept | What It Does | In Our Example |
|---|---|---|
| Hidden States (s_f0) | What the agent tracks internally | 4 grid positions |
| Observations (o_m0) | What the agent can perceive | Current position |
| Actions (u_c0) | What the agent can do | 4 movement directions |
| Likelihood (A_m0) | How states relate to observations | Perfect position sensing |
| Transitions (B_f0) | How actions change states | Movement rules |
| Preferences (C_m0) | What the agent wants | Reach bottom-right |
| Expected Free Energy (G) | How the agent chooses actions | Minimize surprise, maximize reward |
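The action-selection rule P(π) = softmax(−G(π)) can be tried directly. Here is a tiny numpy illustration; the G values are made up for the four one-step policies, purely to show the mechanics:

```python
import numpy as np

# P(pi) = softmax(-G(pi)): lower expected free energy -> higher probability
def softmax(x):
    e = np.exp(x - x.max())           # subtract max for numerical stability
    return e / e.sum()

G = np.array([2.0, 0.5, 2.0, 0.1])    # hypothetical EFE for Up, Down, Left, Right
P = softmax(-G)

assert np.isclose(P.sum(), 1.0)       # a proper probability distribution
assert P.argmax() == G.argmin()       # the lowest-G action is most probable
print(P.round(3))
```

Note the negation: the agent assigns the highest probability to the policy with the lowest expected free energy.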
- Check section headers: Use ## StateSpaceBlock, not ## StateSpace
- Check variable names: Use s_f0, not s f0 (underscores, not spaces)
- Ensure matrix sizes match variable definitions: A_m0[4,4] means 4 observations × 4 states
- Each column in B_f0 must sum to 1.0
- Each column in A_m0 must sum to 1.0
- Check C_m0: Higher values should be at preferred states
- Check B_f0: Ensure the movement logic is correct
- Try ModelTimeHorizon=10 for longer planning
- Pipeline Documentation:
  - src/AGENTS.md: Complete module registry
  - src/README.md: Pipeline architecture and safety
- Documentation: Full GNN guide and GNN Overview
- Examples: Model gallery
- Help:
- Community: GitHub Discussions
🎓 You're now a GNN practitioner! Ready to model complex cognitive agents and contribute to Active Inference research.
Time taken: ~15 minutes
Achievement unlocked: First working GNN model ✨