Data Extraction & Contextual Inference for MCNP Analysis
The first open-source framework combining LLMs and Knowledge Graphs for analyzing MCNP Particle Track Output (PTRAC) files
DECIMA transforms how nuclear engineers and researchers interact with MCNP simulation data. Instead of writing complex analysis scripts, simply ask questions in natural language:
Built on MCNPToolsPro: DECIMA uses MCNPToolsPro, an enhanced fork of MCNPTools by Los Alamos National Laboratory. We extend our deepest gratitude to LANL's MCNPTools team for their foundational C++/Python library.
Key Enhancements:
- ✅ MCNP 6.2/6.3 Filter Support: Complete support for PTRAC filters (tally=, filter=, event=, type=) and their combinations
- ✅ Multiple PTRAC Formats: ASCII, Binary, and HDF5 format support
- ✅ Bug Fixes: Resolved critical parsing issues with filtered PTRAC files from original MCNPTools
See mcnptoolspro/README.md for technical details on filter support improvements.
"Display collision positions and energies deposited for the first 20 particle histories" "Plot the z-axis direction cosine (W) distribution of emitted source particles" "How many secondary photons are emitted and what is their process of termination?"
DECIMA's AI assistant OTACON will generate the Python code, execute it, and provide you with results and visualizations.
Ask questions in natural language - DECIMA generates and executes analysis code automatically. The interface shows example queries, model selection, and the friendly OTACON character ready to assist.
- 🗣️ Natural Language Queries - No complex scripting required
- 🧠 AI-Powered Analysis - Leverages OpenAI LLMs (gpt-4o, gpt-4o-mini)
- 📊 Automated Visualization - Generates plots and tables automatically
- 🕸️ Knowledge Graph Integration - Uses MCNP domain knowledge for accurate context
- 🌐 Web Interface - User-friendly Flask-based web app
- 🐍 Python API - Programmatic access for integration and automation
- 🔍 Verbose Debug Mode - Inspect full LLM prompts and workflow
- 🎯 Demo Mode - Test without API key
DECIMA uses a modular multi-agent architecture:
| Agent | Role | Technology |
|---|---|---|
| 🤫 QUIET | Query interpretation & focus detection | Rule-based NLP |
| 🧠 EMMA | Knowledge Graph context extraction | Neo4j |
| 👨‍💻 OTACON | LLM reasoning & code generation | OpenAI API |
| ⚡ EVA | Secure Python code execution sandbox | RestrictedPython |
| 📡 CAMPBELL | System orchestration & workflow | LangGraph |
Workflow:
User Query → QUIET (focus detection) → EMMA (KG context) → OTACON (code generation) → EVA (execution) → Results
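As an illustration of how such a linear pipeline maps onto LangGraph (the orchestration library listed under Technology Stack), here is a minimal sketch with placeholder node functions and an assumed dict-style state. It is not DECIMA's actual CAMPBELL implementation; see modules/campbell.py for that.

```python
# Illustrative sketch of the QUIET -> EMMA -> OTACON -> EVA pipeline with LangGraph.
# Node functions and the state schema are placeholders, not DECIMA's real code.
from typing import TypedDict
from langgraph.graph import StateGraph, END

class PipelineState(TypedDict, total=False):
    query: str              # user question (input)
    focus: str              # QUIET: detected analysis focus
    kg_context: str         # EMMA: Knowledge Graph context
    code: str               # OTACON: generated Python code
    execution_result: str   # EVA: sandboxed execution output

def quiet(state: PipelineState) -> PipelineState:
    return {"focus": "collision_energy"}           # placeholder focus detection

def emma(state: PipelineState) -> PipelineState:
    return {"kg_context": "PTRAC COL events ..."}  # placeholder KG lookup

def otacon(state: PipelineState) -> PipelineState:
    return {"code": "print('generated code')"}     # placeholder LLM call

def eva(state: PipelineState) -> PipelineState:
    return {"execution_result": "ok"}              # placeholder sandboxed run

graph = StateGraph(PipelineState)
for name, fn in [("quiet", quiet), ("emma", emma), ("otacon", otacon), ("eva", eva)]:
    graph.add_node(name, fn)
graph.set_entry_point("quiet")
graph.add_edge("quiet", "emma")
graph.add_edge("emma", "otacon")
graph.add_edge("otacon", "eva")
graph.add_edge("eva", END)

app = graph.compile()
print(app.invoke({"query": "Plot energy distribution of neutrons"}))
```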
DECIMA can be used in two ways: with Docker, or as a local Python package.

Docker:
git clone https://github.com/quentinducasse/decima.git
cd decima
cp .env.docker.example .env.docker
# Edit .env.docker to add your OpenAI API key
docker compose up -d
# Wait ~15 seconds for Neo4j to start
docker compose exec app python kg/loader/neo4j_loader.py

Access: http://localhost:5050
Python package:
git clone https://github.com/quentinducasse/decima.git
cd decima
cp .env.docker.example .env.docker
# Edit .env.docker to add your OpenAI API key
python install_dev.py # Compiles mcnptoolspro automatically
docker compose up -d neo4j # Start Neo4j only
python examples/demo_mode_standalone.py

Usage:
from modules.campbell import CampbellOrchestrator
orchestrator = CampbellOrchestrator()
result = orchestrator.process_query(
ptrac_path='data/ptrac_samples/basic_ptrac_example_decima_ascii.ptrac',
query='Plot energy distribution of neutrons',
use_context=True
)
print(result['response']) # Natural language explanation
print(result['code']) # Generated Python code
print(result['execution_result']) # Execution output

📖 Full installation guide: See INSTALL.md
To use DECIMA's full capabilities, you need an OpenAI API key:
- Get one here: OpenAI Platform
- Supported models: gpt-4o-mini (default), gpt-4o
- Cost: ~10 queries for $0.01 with gpt-4o-mini
- Demo mode: Available without API key (returns fixed example)
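For reference, the two settings this README refers to look roughly like this in .env.docker (a sketch only; the values are placeholders, and .env.docker.example is the authoritative template):

```
# Relevant entries in .env.docker (sketch; see .env.docker.example for the full template)
OPENAI_API_KEY=sk-...   # your OpenAI API key
DEMO_MODE=false         # set to true to test without API calls (see Demo Mode below)
```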
- Event Filtering: Source (SRC), Collision (COL), Bank (BNK), Surface (SUR), Termination (TER)
- Particle Data: Position (X,Y,Z), Energy, Time, Direction (U,V,W), Weight
- Particle Types: Neutrons, photons, electrons, and more
- Visualizations: Histograms or printed results
- Statistics: Counts, averages
- Advanced: Cell tracking, surface crossings, termination analysis
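Under the hood, the generated code reads these quantities through MCNPToolsPro. As a minimal sketch, assuming mcnptoolspro keeps the upstream MCNPTools Python interface (Ptrac, ReadHistories, event accessors), direct access to source-event data looks roughly like this:

```python
# Minimal sketch of direct PTRAC access, assuming mcnptoolspro exposes the
# upstream MCNPTools Python API (Ptrac, ReadHistories, event accessors).
from mcnptools import Ptrac

ptrac = Ptrac("data/ptrac_samples/basic_ptrac_example_decima_ascii.ptrac", Ptrac.ASC_PTRAC)

for history in ptrac.ReadHistories(10):          # first 10 particle histories
    for i in range(history.GetNumEvents()):
        event = history.GetEvent(i)
        if event.Type() == Ptrac.SRC:            # keep source (SRC) events only
            print("position:", event.Get(Ptrac.X), event.Get(Ptrac.Y), event.Get(Ptrac.Z),
                  "energy (MeV):", event.Get(Ptrac.ENERGY))
```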
Basic Analysis:
Show the first 10 source particles with their positions and energies
Visualization:
Plot the energy distribution of collision events
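For a query like the one above, the code OTACON produces typically amounts to a few lines of mcnptoolspro plus matplotlib. The following is a hand-written sketch of that kind of code, not actual OTACON output, and again assumes the upstream MCNPTools Ptrac interface:

```python
# Hand-written sketch of the kind of code such a query could produce (not actual
# OTACON output); assumes the upstream MCNPTools Ptrac interface.
import matplotlib.pyplot as plt
from mcnptools import Ptrac

ptrac = Ptrac("data/ptrac_samples/basic_ptrac_example_decima_ascii.ptrac", Ptrac.ASC_PTRAC)

# Collect the energy of every collision (COL) event in the first 10,000 histories.
energies = []
for history in ptrac.ReadHistories(10000):
    for i in range(history.GetNumEvents()):
        event = history.GetEvent(i)
        if event.Type() == Ptrac.COL:
            energies.append(event.Get(Ptrac.ENERGY))

plt.hist(energies, bins=50)
plt.xlabel("Energy (MeV)")
plt.ylabel("Number of collision events")
plt.title("Energy distribution of collision events")
plt.show()
```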
decima/
├── modules/ # Core agents
│ ├── quiet.py # Query interpretation
│ ├── emma.py # Knowledge Graph manager
│ ├── otacon.py # LLM engine
│ ├── eva.py # Code execution sandbox
│ └── campbell.py # Workflow orchestrator
├── kg/ # Knowledge Graph
│ ├── triplets/ # MCNP domain knowledge (RDF)
│ └── loader/ # Neo4j loader
├── frontend/ # Web interface (Flask)
├── examples/ # Usage examples
│ ├── demo_mode_standalone.py # Standalone demo
│ ├── full_api_mode.py # Full API with Neo4j
│ ├── test_mcnptools_direct.py # Test MCNPToolsPro compilation
│ ├── test_decima_with_mcnptools.py # Test DECIMA with MCNPToolsPro
│ └── README.md # Examples documentation
├── tests/ # Test suite
├── data/ # Sample PTRAC files
├── mcnptoolspro/ # MCNPToolsPro library (compiled during install)
├── doc/ # Documentation
│ ├── architecture_decima.md # Architecture details (FR)
│ ├── DECIMA Project Technical Documentation.md # Technical guide (EN)
│ ├── Documentation Technique du Projet DECIMA.md # Technical guide (FR)
│ ├── DECIMA Project User Documentation.md # User guide (EN)
│ └── Documentation Utilisateur du Projet DECIMA.md # User guide (FR)
├── pyproject.toml # Python package configuration
├── setup.py # Installation script
├── install_dev.py # Development installation script
├── docker-compose.yml # Docker deployment
├── app.py # Web app entry point
├── INSTALL.md # Installation guide
├── paper.md # Research paper (JOSS submission)
├── CONTRIBUTING.md # Contribution guidelines
└── README.md # This file
- Start DECIMA with Docker (see Quick Start)
- Open http://localhost:5050
- Click "Load PTRAC File" or use the sample file
- Enter your query in natural language (English or French)
- Choose your LLM model (gpt-4o-mini or gpt-4o)
- Toggle "Add context" to use Knowledge Graph
- Submit and view generated code + results
See examples/full_api_mode.py for a complete example:
from modules.campbell import CampbellOrchestrator
# Initialize
orchestrator = CampbellOrchestrator()
# Analyze
result = orchestrator.process_query(
ptrac_path='path/to/file.ptrac',
query='Your natural language question',
use_context=True # Use Knowledge Graph context
)
# Access results
print(result['response']) # Natural language explanation
print(result['code']) # Generated Python code
print(result['execution_result']) # Execution output and plots
print(result['logs']) # Workflow logs

See detailed workflow execution:
# Docker
docker compose run --rm --service-ports app python app.py -v
# Python package (examples include verbose output)
python examples/full_api_mode.py

Output shows:
- QUIET focus detection
- EMMA Knowledge Graph entities
- OTACON LLM prompt and response
- EVA execution results
Test DECIMA without an OpenAI API key:
Setup: Set DEMO_MODE=true in .env.docker
What it does:
- Runs without external API calls
- Returns pre-written collision analysis example
- Useful for testing and validation
Limitations:
- Ignores your actual query
- Returns fixed response only
For full functionality: Set a valid OPENAI_API_KEY and DEMO_MODE=false
- Python 3.10+
- OpenAI API (gpt-4o, gpt-4o-mini)
- Neo4j 5.19 (Knowledge Graph)
- MCNPToolsPro (Enhanced PTRAC parsing with filter support)
- LangGraph (Agent orchestration)
- Flask (Web interface)
- RestrictedPython (Secure code execution)
- mcnptoolspro - MCNP output file parsing (PTRAC, MCTAL, MESHTAL)
- openai - LLM interaction
- neo4j - Graph database driver
- langchain-core - Agent framework
- matplotlib - Visualization
- numpy - Numerical computing
- flask - Web framework
- python-dotenv - Environment config
- Installation Guide: INSTALL.md
- API Examples: examples/README.md
- User Documentation (English): doc/DECIMA Project User Documentation.md
- Technical Documentation (English): doc/DECIMA Project Technical Documentation.md
- JOSS Submission: paper.md
DECIMA includes evaluation scripts in the tests/ directory.
Python Package Mode (Method 1):
# Test individual components (locally)
python tests/test_quiet.py # Query interpretation - Works ✓
python tests/test_eva.py # Code execution sandbox - Works ✓
python tests/test_emma.py # Knowledge Graph (requires Neo4j)
python tests/test_otacon_api.py # LLM code generation (requires API key)
python tests/test_campbell_workflow.py # Full workflow - mcnptools unavailable

Docker Mode (Method 2) - RECOMMENDED:
# Run tests inside Docker container with full functionality
docker compose exec app python tests/test_quiet.py
docker compose exec app python tests/test_eva.py
docker compose exec app python tests/test_emma.py # Requires Neo4j + KG loaded
docker compose exec app python tests/test_otacon_api.py # Requires API key configured
docker compose exec app python tests/test_campbell_workflow.py # Full execution with mcnptools ✓

Why Docker mode is better for testing:
- All tests work automatically once services are running (docker compose up -d + Knowledge Graph loaded)
- Code execution works fully because mcnptools is available in the Docker container
- Neo4j and all dependencies are pre-configured
- Use the web interface at http://localhost:5050 for interactive testing with results
Recommendation: For full testing with code execution, use Docker mode with all services running.
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch (git checkout -b feature/amazing-feature)
- Commit your changes (git commit -m 'Add amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
If you use DECIMA in your research, please cite:
@software{decima2025,
title = {DECIMA: Data Extraction \& Contextual Inference for MCNP Analysis},
author = {Ducasse, Quentin and Almuhisen, Feda},
year = {2025},
url = {https://github.com/quentinducasse/decima},
doi = {10.5281/zenodo.17953846},
version = {1.3.2},
license = {Apache-2.0}
}

For academic papers and publications, use the following citation formats:
IEEE Style:
Q. Ducasse and F. Almuhisen, "DECIMA: Data Extraction & Contextual Inference for MCNP Analysis,"
Version 1.3.2, 2025. [Online]. Available: https://github.com/quentinducasse/decima. doi: 10.5281/zenodo.17953846
APA Style:
Ducasse, Q., & Almuhisen, F. (2025). DECIMA: Data Extraction & Contextual Inference for MCNP Analysis
(Version 1.3.2) [Computer software]. https://github.com/quentinducasse/decima. https://doi.org/10.5281/zenodo.17953846
Nature Style:
Ducasse, Q. & Almuhisen, F. DECIMA: Data Extraction & Contextual Inference for MCNP Analysis.
https://github.com/quentinducasse/decima (2025). https://doi.org/10.5281/zenodo.17953846
Quentin Ducasse
- LinkedIn: Quentin Ducasse
- GitHub: @quentinducasse
Feda Almuhisen
- LinkedIn: Feda Almuhisen
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
Key points:
- ✅ Free to use, modify, and distribute
- ✅ Commercial use allowed
- ✅ Patent rights granted
- ⚠️ Must include license and copyright notice
- ⚠️ Must state significant changes made
- MCNPTools by Los Alamos National Laboratory
- OpenAI for GPT models
- Neo4j for graph database technology
- The nuclear engineering and AI research communities
- MCNPTools: GitHub
- OpenAI Python SDK: GitHub
- LangGraph: Documentation
- Neo4j: Official Site
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Email: See author LinkedIn profiles
- ✅ Web interface with Flask
- ✅ Python package with automatic mcnptoolspro compilation
- ✅ Knowledge Graph integration
- ✅ Docker deployment
- ✅ Demo mode
- ✅ Improved documentation and examples
- 🔄 Support for additional LLM providers (Anthropic, local models)
- 🔄 Enhanced visualization capabilities
- 🔄 MCTAL file support
- 🔄 Batch analysis mode
- 🔄 Export to common formats (CSV, Excel, HDF5)
- 🔄 Plugin system for custom analysis
- 🔄 REST API for remote access
- PTRAC Format: Currently optimized for standard MCNP6 PTRAC output
- Memory: Large PTRAC files (>1GB) may require reading histories in batches (see the sketch after this list)
- LLM Accuracy: Generated code quality depends on query clarity
- Neo4j Required: Full functionality requires Neo4j running
- Windows: Some path handling may need adjustments
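For the memory limitation above, one mitigation is reading histories in fixed-size batches rather than all at once. A minimal sketch, assuming mcnptoolspro keeps the upstream MCNPTools ReadHistories interface:

```python
# Sketch of batched PTRAC reading to bound memory usage on large files.
# Assumes mcnptoolspro keeps the upstream MCNPTools ReadHistories interface.
from mcnptools import Ptrac

ptrac = Ptrac("path/to/large_file.ptrac", Ptrac.ASC_PTRAC)

n_collisions = 0
batch = ptrac.ReadHistories(10000)          # process 10,000 histories at a time
while batch:
    for history in batch:
        for i in range(history.GetNumEvents()):
            if history.GetEvent(i).Type() == Ptrac.COL:
                n_collisions += 1
    batch = ptrac.ReadHistories(10000)      # next batch; an empty list ends the loop

print(f"Total collision events: {n_collisions}")
```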
See GitHub Issues for current bugs and feature requests.
Made with ❤️ for researchers in nuclear physics and the nuclear engineering community
