# Lab 1: Getting Started with Ollama - Installation and Basic Usage

## Objective

In this lab, you will learn how to install Ollama, download your first model, and interact with it using the command-line interface. By the end of this lab, you'll understand the basics of running and chatting with local large language models.

## Prerequisites

- A computer running macOS, Windows, or Linux
- At least 8GB of RAM (16GB recommended)
- At least 10GB of free disk space
- Basic familiarity with the command line/terminal

## Estimated Time

30-45 minutes

## Part 1: Installing Ollama

### Step 1: Download and Install Ollama

1. Visit [https://ollama.com/download](https://ollama.com/download)
2. Download the installer for your operating system:
   - **macOS**: Download the `.dmg` file
   - **Windows**: Download the `.exe` installer
   - **Linux**: Use the installation script (see below)

#### For Linux users:

```bash
curl -fsSL https://ollama.com/install.sh | sh
```

### Step 2: Verify Installation

After installation, open a terminal/command prompt and verify Ollama is installed:

```bash
ollama --version
```

You should see the version number displayed.

### Step 3: Check if Ollama is Running

Ollama typically starts automatically. To verify:

```bash
ollama ps
```

If Ollama isn't running, start it with:

```bash
ollama serve
```

## Part 2: Running Your First Model

### Step 1: Download a Model

Let's start with a smaller model that's great for learning. Download the `gemma3` model:

```bash
ollama pull gemma3
```

**Note**: This will download several gigabytes of data. The download time depends on your internet connection.

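If the default download is too large for your connection or disk, many models on ollama.com also publish smaller tagged variants. The tag below is only an example; check the model's page to see which tags actually exist:

```bash
# Pull a smaller tagged variant instead of the default tag.
# `gemma3:1b` is an example tag; confirm available tags on the model's page.
ollama pull gemma3:1b
```
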
### Step 2: List Available Models

After the download completes, verify the model is available:

```bash
ollama ls
```

You should see `gemma3` in the list of available models.

## Part 3: Interacting with the Model

### Step 1: Start a Chat Session

Run the model in interactive mode:

```bash
ollama run gemma3
```

### Step 2: Have a Conversation

Try these prompts:

1. **Basic greeting**:
   ```
   Hello! Can you introduce yourself?
   ```

2. **Ask a question**:
   ```
   What is the capital of France?
   ```

3. **Request an explanation**:
   ```
   Explain how photosynthesis works in simple terms.
   ```

4. **Creative task**:
   ```
   Write a haiku about artificial intelligence.
   ```

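You don't have to open an interactive session for quick checks: `ollama run` also accepts a prompt as a command-line argument, prints a single response, and exits. For example:

```bash
# One-shot mode: pass the prompt on the command line instead of chatting.
ollama run gemma3 "What is the capital of France?"
```
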
### Step 3: Exit the Chat

To exit the interactive session, type:

```
/bye
```

Or press `Ctrl+D` (macOS/Linux) or `Ctrl+C` (Windows).

## Part 4: Multiline Input

### Step 1: Using Multiline Mode

Start another chat session:

```bash
ollama run gemma3
```

Try a multiline input using triple quotes:

```
"""Write a short story about a robot
learning to paint. Make it
heartwarming and inspirational."""
```

## Part 5: Managing Models

### Step 1: View Model Information

Get detailed information about your model:

```bash
ollama show gemma3
```

### Step 2: Check Running Models

See which models are currently loaded in memory:

```bash
ollama ps
```

### Step 3: Stop a Model

To unload a model from memory:

```bash
ollama stop gemma3
```

### Step 4: Remove a Model (Optional)

If you want to free up disk space, you can remove a model:

```bash
ollama rm gemma3
```

**Note**: Don't do this if you want to continue with the remaining labs!

## Exercises

### Exercise 1: Model Comparison

1. Pull another small model: `ollama pull llama3.2`
2. Ask both models the same question and compare their responses
3. Document the differences in style, accuracy, and response time

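One way to make the comparison repeatable is a small loop that sends the same prompt to each model in one-shot mode and times the run. This is just a sketch; the prompt is arbitrary, and `time` reports wall-clock duration rather than tokens per second:

```bash
# Ask both models the same question and measure each response time.
PROMPT="Explain recursion in one paragraph."
for MODEL in gemma3 llama3.2; do
  echo "=== $MODEL ==="
  time ollama run "$MODEL" "$PROMPT"
done
```
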
### Exercise 2: Use Cases

For each of the following tasks, interact with the model and evaluate its performance:

1. Code explanation (paste a simple Python function and ask it to explain)
2. Language translation (translate a sentence to another language)
3. Math problem solving (give it a word problem)
4. Creative writing (ask for a poem or story)

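For the code-explanation task, you can splice a source file into the prompt with shell command substitution instead of pasting it by hand. `example.py` below is a placeholder; substitute any small script you have on hand:

```bash
# Embed a source file in the prompt via command substitution.
# `example.py` is a placeholder file name, not part of this lab's materials.
ollama run gemma3 "Explain what this Python function does: $(cat example.py)"
```
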
### Exercise 3: Model Management

1. Check how much disk space your models are using (hint: check `ollama ls`)
2. Experiment with the `ollama ps` command while a model is running
3. Stop and restart a model, observing the load time

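Besides the SIZE column in `ollama ls`, you can ask the filesystem directly. On Linux and macOS the downloaded model data lives under `~/.ollama/models` by default (an assumption; Windows and custom `OLLAMA_MODELS` setups use a different location):

```bash
# Total disk space used by the default model store (Linux/macOS path assumed).
du -sh "$HOME/.ollama/models" 2>/dev/null || echo "model directory not found"
```
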
## Lab Questions

Answer these questions based on your experience:

1. What is the size of the `gemma3` model you downloaded?
2. How long did it take for the model to load the first time you ran it?
3. What happens when you ask the model a question about current events?
4. What are the advantages of running models locally versus using cloud-based APIs?
5. What limitations did you notice when interacting with the model?

## Troubleshooting

### Issue: "ollama: command not found"

- **Solution**: Restart your terminal or add Ollama to your PATH manually

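Adding Ollama to your PATH manually might look like the sketch below. `/usr/local/bin` is the usual install location on macOS and Linux (an assumption); adjust the path if your installer placed the binary elsewhere:

```bash
# Append the typical install directory to PATH, then re-check for the binary.
export PATH="$PATH:/usr/local/bin"
command -v ollama || echo "still not found; re-run the installer"
```
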
### Issue: Model download is very slow

- **Solution**: Check your internet connection; large models can take time to download

### Issue: "Out of memory" errors

- **Solution**: Try a smaller model or close other applications to free up RAM

### Issue: Model responses are very slow

- **Solution**: This is normal for larger models on systems without GPUs. Consider using a smaller model.

## Summary

In this lab, you learned how to:

- Install Ollama on your system
- Download and manage language models
- Interact with models using the command-line interface
- Use multiline input for complex prompts
- Manage model resources (loading, stopping, removing)

## Next Steps

Continue to **Lab 2: Working with the CLI** to learn advanced command-line features and model management techniques.