
Commit 965c177

Fix docs errors
Signed-off-by: Michael Yuan <[email protected]>
1 parent 5cb1ae8 commit 965c177

File tree

1 file changed: +60 -56 lines


README.md

Lines changed: 60 additions & 56 deletions
@@ -25,7 +25,6 @@ Or, if you want to run the services directly on your own computer:
 
 - **Python 3.8+** 🐍
 - **Rust Compiler and cargo tools** 🦀
-- **Rust Compiler and cargo tools** 🦀
 
 ---
 
@@ -366,8 +365,6 @@ Rust_coder_lfx/
 │ ├── llm_tools.py # Tools for LLM interactions
 │ ├── load_data.py # Data loading utilities
 │ ├── main.py # FastAPI application & endpoints
-│ ├── mcp_server.py # MCP server implementation
-│ ├── mcp_service.py # Model-Compiler-Processor service
 │ ├── mcp_tools.py # MCP-specific tools
 │ ├── prompt_generator.py # LLM prompt generation
 │ ├── response_parser.py # Parse LLM responses into files
@@ -378,15 +375,6 @@ Rust_coder_lfx/
 │ └── project_examples/ # Project examples for vector search
 ├── docker-compose.yml # Docker Compose configuration
 ├── Dockerfile # Docker configuration
-├── examples/ # Example scripts for using the API
-│ ├── compile_endpoint.txt # Example for compile endpoint
-│ ├── compile_and_fix_endpoint.txt # Example for compile-and-fix endpoint
-│ ├── mcp_client_example.py # Example MCP client usage
-│ └── run_mcp_server.py # Example for running MCP server
-├── templates/ # Prompt templates
-│ └── project_prompts.txt # Templates for project generation
-├── mcp-proxy-config.json # MCP proxy configuration
-├── parse_and_save_qna.py # Q&A parsing utility
 ├── requirements.txt # Python dependencies
 └── .env # Environment variables
 ```
@@ -404,7 +392,7 @@ Compilation Feedback Loop: Automatically compiles, detects errors, and fixes the
 
 File Parsing: Converts LLM responses into project files with `response_parser.py`.
 
-#### Architecture
+### Architecture
 
 REST API Interface (app/main.py): FastAPI application exposing HTTP endpoints for project generation, compilation, and error fixing.
 
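To make the REST interface concrete, here is a minimal client-side sketch. The route name `/generate-project` and the payload fields are hypothetical placeholders rather than confirmed endpoints; the actual routes are defined in `app/main.py`.

```python
# Illustrative only: the endpoint path and payload fields below are hypothetical
# placeholders; check app/main.py for the routes the service actually exposes.
import requests

payload = {
    "description": "A CLI tool that counts words in a text file",
    "requirements": "Use clap for argument parsing",
}

# Assumes the FastAPI app is running locally on port 8000 (a common uvicorn default).
resp = requests.post("http://localhost:8000/generate-project", json=payload, timeout=600)
resp.raise_for_status()
print(resp.json())
```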
@@ -416,60 +404,64 @@ LLM Integration (app/llm_client.py): Communicates with LLM APIs (like Gaia nodes
 
 Compilation Pipeline (app/compiler.py): Handles Rust code compilation, error detection, and provides feedback for fixing.
 
-#### Process Flow
+### Process Flow
 
 Project Generation:
 
-User provides a description and requirements
-System creates a prompt using templates (templates/project_prompts.txt)
-LLM generates a complete Rust project
-Response is parsed into individual files (app/response_parser.py)
-Project is compiled to verify correctness
+* User provides a description and requirements
+* System creates a prompt using templates
+* LLM generates a complete Rust project
+* Response is parsed into individual files (`app/response_parser.py`)
+* Project is compiled to verify correctness
 
 Error Fixing:
 
-System attempts to compile the provided code
-If errors occur, they're extracted and analyzed
-Vector search may find similar past errors
-LLM receives the errors and original code to generate fixes
-Process repeats until successful or max attempts reached
+* System attempts to compile the provided code
+* If errors occur, they're extracted and analyzed
+* Vector search may find similar past errors
+* LLM receives the errors and original code to generate fixes
+* Process repeats until successful or max attempts reached
 
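A minimal sketch of the error-fixing loop described above. This is not the repository's implementation; the real logic lives in `app/compiler.py` and `app/llm_client.py`, and `ask_llm_for_fix` below is a stand-in for the actual LLM call.

```python
# Minimal sketch of the compile-and-fix feedback loop described above.
# Not the repository's implementation: ask_llm_for_fix is a stand-in for the LLM call.
import subprocess
from pathlib import Path
from typing import Callable, Dict, Tuple

def compile_project(project_dir: Path) -> Tuple[bool, str]:
    """Run cargo build and return (success, compiler stderr)."""
    result = subprocess.run(
        ["cargo", "build"], cwd=project_dir, capture_output=True, text=True
    )
    return result.returncode == 0, result.stderr

def fix_until_success(
    project_dir: Path,
    ask_llm_for_fix: Callable[[str, Path], Dict[str, str]],
    max_attempts: int = 3,
) -> bool:
    for _ in range(max_attempts):
        ok, errors = compile_project(project_dir)
        if ok:
            return True
        # Hand the compiler errors (and the current sources) to the LLM,
        # then write whatever fixed files it returns back to disk.
        fixed_files = ask_llm_for_fix(errors, project_dir)
        for rel_path, content in fixed_files.items():
            (project_dir / rel_path).write_text(content)
    return False
```

In the pipeline described above, similar past errors found via vector search can also be included in the prompt before asking for a fix.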
 ---
 
-## 📊 Adding to the Vector Database
+## 📊 Enhancing Performance with Vector Search
 
 The system uses vector embeddings to find similar projects and error examples, which helps improve code generation quality. Here's how to add your own examples:
 
 ### 🔧 Creating Vector Collections
 
-First, you need to create the necessary collections in Qdrant using these curl commands:
+First, you need to create the necessary collections in Qdrant using these `curl` commands:
 
 ```bash
-# Create project_examples collection with 1536 dimensions (default)
+# Create project_examples collection with 768 dimensions (default)
 curl -X PUT "http://localhost:6333/collections/project_examples" \
 -H "Content-Type: application/json" \
 -d '{
 "vectors": {
-"size": 1536,
+"size": 768,
 "distance": "Cosine"
 }
 }'
 
-# Create error_examples collection with 1536 dimensions (default)
+# Create error_examples collection with 768 dimensions (default)
 curl -X PUT "http://localhost:6333/collections/error_examples" \
 -H "Content-Type: application/json" \
 -d '{
 "vectors": {
-"size": 1536,
+"size": 768,
 "distance": "Cosine"
 }
 }'
 ```
-Note: If you've configured a different embedding size via ```LLM_EMBED_SIZE``` environment variable, replace 1536 with that value.
 
-### Method 1: Using Python API Directly
+> Note: If you've configured a different embedding size via `LLM_EMBED_SIZE` environment variable, replace 768 with that value.
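The same two collections can also be created from Python rather than `curl`. The sketch below simply mirrors the HTTP requests shown above and reads the embedding size from `LLM_EMBED_SIZE`, falling back to 768, so the collection dimension matches the configured embedding model.

```python
# Sketch: create the two Qdrant collections from Python instead of curl.
# Mirrors the HTTP requests shown above; falls back to 768 if LLM_EMBED_SIZE is unset.
import os
import requests

QDRANT_URL = "http://localhost:6333"
embed_size = int(os.getenv("LLM_EMBED_SIZE", "768"))

for name in ("project_examples", "error_examples"):
    resp = requests.put(
        f"{QDRANT_URL}/collections/{name}",
        json={"vectors": {"size": embed_size, "distance": "Cosine"}},
        timeout=30,
    )
    resp.raise_for_status()
    print(f"Created collection {name} with {embed_size} dimensions")
```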
+
+### 🗂️ Adding Data to Vector Collections
+
+#### Method 1: Using Python API Directly
+
+For Project Examples
 
-#### For Project Examples
 ```python
 from app.llm_client import LlamaEdgeClient
 from app.vector_store import QdrantStore
@@ -503,6 +495,7 @@ vector_store.add_item(
 ```
 
 For Error Examples:
+
 ```python
 from app.llm_client import LlamaEdgeClient
 from app.vector_store import QdrantStore
@@ -533,12 +526,15 @@ vector_store.add_item(
 )
 ```
 
-### Method 2: Adding Multiple Examples from JSON Files
+#### Method 2: Adding Multiple Examples from JSON Files
+
 Place JSON files in the appropriate directories:
 
-Project examples: ```project_examples```
-Error examples: ```error_examples```
-Format for project examples (with optional project_files field):
+* Project examples: `data/project_examples`
+* Error examples: `data/error_examples`
+
+Format for project examples (with optional `project_files` field):
+
 ```json
 {
 "query": "Description of the project",
@@ -549,7 +545,9 @@ Format for project examples (with optional project_files field):
 }
 }
 ```
+
 Format for error examples:
+
 ```
 {
 "error": "Rust compiler error message",
@@ -558,46 +556,51 @@ Format for error examples:
 "example": "// Code example showing the fix (optional)"
 }
 ```
+
 Then run the data loading script:
+
 ```
 python -c "from app.load_data import load_project_examples, load_error_examples; load_project_examples(); load_error_examples()"
 ```
 
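To make Method 2 concrete, the sketch below writes a single project example into `data/project_examples` before the loading script runs. Only the `query` and optional `project_files` fields shown above are used; the file name `hello_cli.json` and the sample contents are illustrative.

```python
# Sketch for Method 2: write one project example in the documented JSON format
# into data/project_examples, then run the load_data helpers shown above.
# The file name is arbitrary; only the directory and field names matter.
import json
from pathlib import Path

example = {
    "query": "A CLI tool that prints Hello, world!",
    "project_files": {
        "Cargo.toml": '[package]\nname = "hello_cli"\nversion = "0.1.0"\nedition = "2021"\n',
        "src/main.rs": 'fn main() {\n    println!("Hello, world!");\n}\n',
    },
}

out_dir = Path("data/project_examples")
out_dir.mkdir(parents=True, exist_ok=True)
(out_dir / "hello_cli.json").write_text(json.dumps(example, indent=2))
```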
-### Method 3: Using the ```parse_and_save_qna.py``` Script
+#### Method 3: Using the `parse_and_save_qna.py` Script
+
 For bulk importing from a Q&A format text file:
 
-Place your Q&A pairs in a text file with format similar to ```QnA_pair.txt```
-Modify the ```parse_and_save_qna.py``` script to point to your file
+Place your Q&A pairs in a text file with format similar to `QnA_pair.txt`
+Modify the `parse_and_save_qna.py` script to point to your file.
 Run the script:
+
 ```
 python parse_and_save_qna.py
 ```
 
 ## ⚙️ Environment Variables for Vector Search
-The SKIP_VECTOR_SEARCH environment variable controls whether the system uses vector search:
 
-```SKIP_VECTOR_SEARCH```=true - Disables vector search functionality
-```SKIP_VECTOR_SEARCH```=false (or not set) - Enables vector search
-In your current .env file, you have:
-```
-SKIP_VECTOR_SEARCH=true
-```
-This means vector search is currently disabled. To enable it:
-- Change this value to false or remove the line completely
+The `SKIP_VECTOR_SEARCH` environment variable controls whether the system uses vector search:
+
+* `SKIP_VECTOR_SEARCH=true` - Disables vector search functionality
+* `SKIP_VECTOR_SEARCH=false` (or not set) - Enables vector search
+
+By default, vector search is disabled. To enable it:
+
+- Change to `SKIP_VECTOR_SEARCH=false` in your `.env` file
 - Ensure you have a running Qdrant instance (via Docker Compose or standalone)
 - Create the collections as shown above
 
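As a rough guide to how such a flag is usually consumed, the sketch below shows one plausible way the application could read `SKIP_VECTOR_SEARCH` at startup; the project's actual parsing may differ.

```python
# Sketch: one plausible way to read the SKIP_VECTOR_SEARCH flag.
# The app's own startup code may parse it differently; the point is that
# any value other than "true" leaves vector search enabled.
import os

skip_vector_search = os.getenv("SKIP_VECTOR_SEARCH", "false").strip().lower() == "true"

if skip_vector_search:
    print("Vector search disabled: skipping Qdrant lookups")
else:
    print("Vector search enabled: querying Qdrant for similar examples")
```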
 ## 🤝 Contributing
+
 Contributions are welcome! This project uses the Developer Certificate of Origin (DCO) to certify that contributors have the right to submit their code. Follow these steps:
 
-Fork the repository
-Create your feature branch (git checkout -b feature/amazing-feature)
-Make your changes
-Commit your changes with a sign-off (git commit -s -m 'Add some amazing feature')
-Push to the branch (git push origin feature/amazing-feature)
-Open a Pull Request
+* Fork the repository
+* Create your feature branch `git checkout -b feature/amazing-feature`
+* Make your changes
+* Commit your changes with a sign-off `git commit -s -m 'Add some amazing feature'`
+* Push to the branch `git push origin feature/amazing-feature`
+* Open a Pull Request
+
+The `-s` flag will automatically add a signed-off-by line to your commit message:
 
-The -s flag will automatically add a signed-off-by line to your commit message:
 ```
 Signed-off-by: Your Name <[email protected]>
 ```
@@ -607,6 +610,7 @@ This certifies that you wrote or have the right to submit the code you're contri
 ---
 
 ## 📜 License
+
 Licensed under [GPLv3](https://www.gnu.org/licenses/gpl-3.0.en.html).
 
 