diff --git a/INSTALLATION.md b/INSTALLATION.md
new file mode 100644
index 0000000..a648a19
--- /dev/null
+++ b/INSTALLATION.md
@@ -0,0 +1,313 @@
+# Installation and Setup Guide
+
+This guide covers the detailed installation and setup process for the Racktables to NetBox migration tool.
+
+## Prerequisites
+
+Before starting, ensure you have:
+
+1. Python 3.6 or higher installed
+2. Access to your Racktables MySQL/MariaDB database
+3. A running NetBox instance (version 4.2.6 or higher) with API access
+4. Administrative privileges on the NetBox instance to add custom fields
+
+## Automated Setup (Recommended)
+
+The tool includes a setup script that automates the installation process:
+
+```bash
+# Clone the repository
+git clone https://github.com/enoch85/racktables-to-netbox.git
+cd racktables-to-netbox
+
+# Make the setup script executable
+chmod +x setup_dev.sh
+
+# Run automated setup
+./setup_dev.sh
+```
+
+The `setup_dev.sh` script has several options:
+
+- `--netbox`: Sets up a complete NetBox Docker environment with proper configuration
+- `--gitclone`: Configures minimal requirements after a git clone (default if no options specified)
+- `--package`: Sets up for package distribution
+- `--help`: Displays help message
+
+For a complete setup with NetBox included:
+
+```bash
+./setup_dev.sh --netbox
+```
+
+This will:
+1. Set up a virtual environment
+2. Install all dependencies
+3. Create a NetBox Docker installation with proper configuration
+4. Generate secure credentials
+5. Configure NetBox with MAX_PAGE_SIZE set to 0
+6. Create symlinks for development
+7. Save configuration for easy use
+
+## Quick Manual Installation
+
+```bash
+# Clone the repository
+git clone https://github.com/enoch85/racktables-to-netbox.git
+cd racktables-to-netbox
+
+# Create and activate a virtual environment
+python3 -m venv venv
+source venv/bin/activate  # On Windows: venv\Scripts\activate
+
+# Install dependencies
+pip install -r requirements.txt
+
+# Configure connection settings in migration/config.py or use environment variables
+# (See Configuration section below)
+
+# Run the migration
+python migrate.py --site "YourSiteName"  # Optional site filtering
+```
+
+## Detailed Installation Steps
+
+### 1. Clone the Repository
+
+```bash
+git clone https://github.com/enoch85/racktables-to-netbox.git
+cd racktables-to-netbox
+```
+
+### 2. Set Up a Python Environment
+
+Modern Python distributions like Ubuntu 24.04 use externally managed environments (PEP 668) which prevent installing packages directly with pip. You have two options:
+
+#### Option A: Use a Virtual Environment (Recommended)
+
+```bash
+# Make sure you have the required packages
+sudo apt install python3-full python3-venv
+
+# Create and activate a virtual environment
+python3 -m venv venv
+source venv/bin/activate  # On Windows, use: venv\Scripts\activate
+```
+
+#### Option B: Use pipx
+
+If you prefer to use pipx (which manages isolated environments for applications):
+
+```bash
+# Install pipx if not already installed
+sudo apt install pipx
+pipx ensurepath
+
+# Clone the repository
+git clone https://github.com/enoch85/racktables-to-netbox.git
+cd racktables-to-netbox
+
+# Install the tool and its dependencies into an isolated pipx environment;
+# this also exposes the migrate-racktables command described under
+# "Package Installation" below
+pipx install .
+```
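+
+Whichever option you choose, it is worth confirming that the interpreter in the
+active environment actually meets the Python 3.6 prerequisite before continuing.
+A quick check:
+
+```bash
+# Exits non-zero and prints the running version if the interpreter is too old
+python3 -c 'import sys; assert sys.version_info >= (3, 6), sys.version'
+```
+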
+### 3. Install Dependencies
+
+With your virtual environment activated (if using Option A):
+
+```bash
+pip install -r requirements.txt
+```
+
+### 4. Configure NetBox MAX_PAGE_SIZE Setting
+
+This setting is required for the migration tool to properly fetch all objects in a single request.
+
+```bash
+# First, edit the script to set your NetBox Docker path
+nano scripts/max-page-size-check.sh
+
+# Make the script executable
+chmod +x scripts/max-page-size-check.sh
+
+# Run the script
+./scripts/max-page-size-check.sh
+```
+
+This will check if the MAX_PAGE_SIZE is already set to 0 and offer to update it if needed.
+
+### 5. Configure Database and API Connection
+
+Edit `migration/config.py` to set your connection parameters:
+
+```python
+from pymysql.cursors import DictCursor
+
+# NetBox API connection settings
+NB_HOST = 'localhost'
+NB_PORT = 8000
+NB_TOKEN = '0123456789abcdef0123456789abcdef01234567'
+NB_USE_SSL = False
+
+# Database connection parameters
+DB_CONFIG = {
+    'host': '10.248.48.4',
+    'port': 3306,
+    'user': 'root',
+    'password': 'secure-password',
+    'db': 'test1',
+    'charset': 'utf8mb4',
+    'cursorclass': DictCursor
+}
+```
+
+Alternatively, you can use environment variables:
+
+```bash
+# NetBox connection
+export NETBOX_HOST=localhost
+export NETBOX_PORT=8000
+export NETBOX_TOKEN=0123456789abcdef0123456789abcdef01234567
+export NETBOX_USE_SSL=False
+
+# Database connection
+export RACKTABLES_DB_HOST=10.248.48.4
+export RACKTABLES_DB_PORT=3306
+export RACKTABLES_DB_USER=root
+export RACKTABLES_DB_PASSWORD=secure-password
+export RACKTABLES_DB_NAME=test1
+```
+
+### 6. Run the Migration
+
+Basic usage with a specific site:
+```bash
+python migrate.py --site "YourSiteName"
+```
+
+Basic usage with a specific tenant:
+```bash
+python migrate.py --tenant "YourTenantName"
+```
+
+Combining site and tenant filters:
+```bash
+python migrate.py --site "YourSiteName" --tenant "YourTenantName"
+```
+
+Other migration options:
+```bash
+# Run only basic migration (no extended components)
+python migrate.py --basic-only
+
+# Run only extended migration components
+python migrate.py --extended-only
+
+# Skip setting up custom fields
+python migrate.py --skip-custom-fields
+
+# Use custom configuration file
+python migrate.py --config your_config.py
+```
+
+## Package Installation (Optional)
+
+Only necessary if you want the tool available system-wide:
+
+```bash
+# Install in development mode (editable)
+pip install -e .
+
+# Or install normally
+pip install .
+
+# Then run using the command
+migrate-racktables --site "YourSiteName"
+```
+
+## Troubleshooting
+
+### Common Issues
+
+1. **Database Connection Issues**
+
+   If you encounter database connection problems, check:
+   - Database credentials in `config.py`
+   - Network connectivity to the database server
+   - Database server is running and accessible
+   - Firewall rules allowing connections to the database port
+
+   Try connecting with a MySQL client to verify credentials.
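+
+   For example, using the sample values from the configuration section above
+   (substitute your real host, user, and database name):
+
+   ```bash
+   # A successful count confirms credentials, network path, and permissions
+   mysql -h 10.248.48.4 -P 3306 -u root -p -e "SELECT COUNT(*) FROM Object;" test1
+   ```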
+
+2. **NetBox API Connection Issues**
+
+   If you have problems connecting to NetBox:
+   - Verify the API token is valid and has appropriate permissions
+   - Check network connectivity to the NetBox server
+   - Ensure the API is enabled in NetBox settings
+   - Confirm your NetBox version is 4.2.6 or higher
+
+   Test with a simple API call:
+   ```bash
+   curl -H "Authorization: Token YOUR_TOKEN" http://your-netbox-host:port/api/
+   ```
+
+3. **Memory or Performance Issues**
+
+   If the script runs out of memory or is too slow:
+   - Try running parts of the migration by adjusting the boolean flags in `config.py`
+   - Increase your Python process memory limit if possible
+   - Run the script on a machine with more resources
+   - Consider filtering by site with the `--site` parameter
+   - Consider filtering by tenant with the `--tenant` parameter
+
+## Post-Migration Verification
+
+After migration completes, verify:
+
+1. Device counts match between Racktables and NetBox
+2. VLANs and IP prefixes are correctly defined
+3. Interfaces are properly connected
+4. IP addresses are correctly assigned
+5. Parent-child relationships are maintained
+6. Custom fields are populated with the right data
+7. Tenant associations are correct (if using tenant filtering)
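+
+For the first check, every NetBox list endpoint reports a total `count` field
+you can compare against Racktables. A sketch of one such query (assumes `jq`
+is installed; adjust host, port, and token):
+
+```bash
+# Total number of devices NetBox knows about after the migration
+curl -s -H "Authorization: Token YOUR_TOKEN" \
+  "http://your-netbox-host:port/api/dcim/devices/?limit=1" | jq .count
+```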
+
+## Setup Script Details
+
+The included `setup_dev.sh` script provides several useful features:
+
+### Setting up NetBox (`--netbox`)
+
+When run with the `--netbox` option, the script:
+- Generates secure credentials for NetBox and PostgreSQL
+- Clones the NetBox Docker repository
+- Creates a Docker Compose override with proper settings
+- Sets MAX_PAGE_SIZE=0 required for migration
+- Creates admin user and API token
+- Configures the migration tool to use the local NetBox
+
+### Basic Setup (`--gitclone`)
+
+This is the default mode and sets up:
+- Python virtual environment
+- Required dependencies
+- Symlinks for development
+- Package in development mode
+
+### Packaging (`--package`)
+
+Sets up the environment for creating distributable packages:
+- Builds Python package
+- Creates necessary packaging files
+- Prepares for distribution via PyPI
+
+## Getting Help
+
+If you encounter issues not covered in this guide:
+
+1. Check the error logs in the `errors` file created during migration
+2. Examine the NetBox logs for API-related issues
+3. Run the migration with increased verbosity
+4. Open an issue on the GitHub repository with details about your problem
diff --git a/README.md b/README.md
index d0cc2a2..1fedf2c 100644
--- a/README.md
+++ b/README.md
@@ -1,40 +1,273 @@
-# racktables-to-netbox
+# Racktables to NetBox Migration Tool
-Scripts to export Racktables data, accessible through a SQL connection, into a [Netbox](https://github.com/netbox-community/netbox/) instance, accessible at a URL. An easy way to test NB is with [netbox-docker](https://github.com/netbox-community/netbox-docker). Some benefits of Netbox are a strictly enforced naming and relationship hierarchy, custom scripts and reports, easy REST API with many wrappers [like this one](https://github.com/jagter/python-netbox). The `migrate.py` script will transfer:
-- Racks at sites
-- Device locations in racks and reservations
-- All unracked stuff, notably VMs and clusters
-- Parent child relationships like servers in chassises, patch panels in patch panels
-- IPs, networks, VLANs
-- Interfaces and their associated IP. Note that if an "OS interface" in "IP addresses" is same as "local name" in "ports and links," the interface is not duplicated
-- Connections between interfaces really the 'ports and links' catagory
-- Tags, labels, asset numbers
+A modular Python package for migrating data from Racktables to NetBox. This tool provides comprehensive migration of network infrastructure data with extended features for a complete data transfer experience.
-## Files:
-**migrate.py**
+## Features
-Migrate data from RT to NB. Meant to be run once without interuption, although some bools exist to skip steps.
-Steps that depend on others create cached data on disk, but the best procedure is to fully run once on an empty NB instance. For certain interfaces, names are capitalized or have string replacement. See comments for details or to turn off. If doing debugging and not running the script once, make sure to set `MAX_PAGE_SIZE=0` in `env/netbox.env` so that page fetch limits are disregarded.
+- **Comprehensive Migration**: Transfer all your Racktables data to NetBox
+- **Modular Architecture**: Maintainable and extensible codebase for easier updates
+- **Site Filtering**: Restrict migration to specific sites when needed
+- **Tenant Filtering**: Restrict migration to specific tenants and associate objects with tenants
+- **Component Selection**: Choose which components to migrate with flexible flags
+- **Custom Fields**: Automatic setup of required custom fields in NetBox
+- **Extended Data Support**:
+  - Available subnet detection and creation
+  - Patch cable connections
+  - File attachments
+  - Virtual services
+  - NAT mappings
+  - Load balancer configurations
+  - Monitoring data references
+  - IP ranges
-Python package requirements: `python3 -m pip install python-netbox python-slugify`
+## Prerequisites
-**custom_fields.yml**
+Before starting, ensure you have:
-The file to supply to the Netbox instance for custom fields. Thrse fields are expected by the migrate script and must be there.
+1. Python 3.6 or higher installed
+2. Access to your Racktables MySQL/MariaDB database
+3. A running NetBox instance (version 4.2.6 or higher) with API access
+4. Administrative privileges on the NetBox instance to add custom fields
-**vm.py**
+## Installation
-Update the uniquely named VMs in NB with memory, disk and cpu data from RHEVM instances. Because two VMs can be in separate clusters with the same name and there is no mapping between RT cluster names and RHEVM cluster names, any not uniquely named VM is ignored.
-Code is there to compare NICs and IPs as well.
+### Automated Setup (Recommended)
-Python package requirements `python3 -m pip install python-netbox bs4`
+The tool includes a setup script that automates the installation process:
-**free.py**
+```bash
+# Clone the repository
+git clone https://github.com/enoch85/racktables-to-netbox.git
+cd racktables-to-netbox
-List the number of free IP addresses in NB based on the tags on prefixes.
+# Make the setup script executable
+chmod +x setup_dev.sh
-Python package requirements `python3 -m pip install python-netbox`
+# Run automated setup
+./setup_dev.sh
+```
-## Notes on python-netbox:
-- As of July 2021 the pip code is not up to date to the Github repo, so you must manually update the `dcim.py` file's method `create_interface_connection` to match the up to date one on Github.
-- As of July 2021 [this PR](https://github.com/jagter/python-netbox/pull/49) hasn't been merged, so the `get_device_bays` method is not yet in `dcim.py` and must be added manually.
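+
+Before picking a mode, you can ask the script itself for its supported flags;
+the same list is summarized below:
+
+```bash
+./setup_dev.sh --help
+```
+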
+The `setup_dev.sh` script has several options:
+
+- `--netbox`: Sets up a complete NetBox Docker environment with proper configuration
+- `--gitclone`: Configures minimal requirements after a git clone (default if no options specified)
+- `--package`: Sets up for package distribution
+- `--help`: Displays help message
+
+For a complete setup with NetBox included:
+
+```bash
+./setup_dev.sh --netbox
+```
+
+### Manual Installation
+
+```bash
+# Clone the repository
+git clone https://github.com/enoch85/racktables-to-netbox.git
+cd racktables-to-netbox
+
+# Create and activate a virtual environment
+python3 -m venv venv
+source venv/bin/activate  # On Windows: venv\Scripts\activate
+
+# Install dependencies
+pip install -r requirements.txt
+```
+
+## Configuration
+
+Edit the configuration in `migration/config.py`:
+
+```python
+import os
+
+from pymysql.cursors import DictCursor
+
+# NetBox API connection settings - can be overridden with environment variables
+NB_HOST = os.environ.get('NETBOX_HOST', 'localhost')
+NB_PORT = int(os.environ.get('NETBOX_PORT', '8000'))
+NB_TOKEN = os.environ.get('NETBOX_TOKEN', 'your-api-token')
+NB_USE_SSL = os.environ.get('NETBOX_USE_SSL', 'False').lower() in ('true', '1', 'yes')
+
+# Database connection parameters - can be overridden with environment variables
+DB_CONFIG = {
+    'host': os.environ.get('RACKTABLES_DB_HOST', 'your-racktables-db-host'),
+    'port': int(os.environ.get('RACKTABLES_DB_PORT', '3306')),
+    'user': os.environ.get('RACKTABLES_DB_USER', 'your-db-username'),
+    'password': os.environ.get('RACKTABLES_DB_PASSWORD', 'your-db-password'),
+    'db': os.environ.get('RACKTABLES_DB_NAME', 'racktables-db-name'),
+    'charset': 'utf8mb4',
+    'cursorclass': DictCursor
+}
+
+# Migration flags - control which components are processed
+CREATE_VLAN_GROUPS = True
+CREATE_VLANS = True
+# ... additional flags
+```
+
+Alternatively, you can use environment variables:
+
+```bash
+# NetBox connection
+export NETBOX_HOST=localhost
+export NETBOX_PORT=8000
+export NETBOX_TOKEN=your-api-token
+export NETBOX_USE_SSL=False
+
+# Database connection
+export RACKTABLES_DB_HOST=your-racktables-db-host
+export RACKTABLES_DB_PORT=3306
+export RACKTABLES_DB_USER=your-db-username
+export RACKTABLES_DB_PASSWORD=your-db-password
+export RACKTABLES_DB_NAME=racktables-db-name
+```
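+
+Once the connection settings are in place, a quick sanity check against the
+NetBox API can save a failed migration run later. A sketch using `curl`
+(substitute your real host, port, and token):
+
+```bash
+# Returns a small JSON status document if the API is reachable and the token works
+curl -s -H "Authorization: Token your-api-token" "http://localhost:8000/api/status/"
+```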
+
+### Important: Configure NetBox MAX_PAGE_SIZE
+
+This setting is required for the migration tool to properly fetch all objects in a single request:
+
+```bash
+# Make the script executable
+chmod +x scripts/max-page-size-check.sh
+
+# Edit the script to set your NetBox Docker path
+nano scripts/max-page-size-check.sh
+
+# Run the script
+./scripts/max-page-size-check.sh
+```
+
+## Usage
+
+### Basic Migration
+
+```bash
+# Run setup to create custom fields (only needed once)
+python migration/set_custom_fields.py
+
+# Run the migration
+python migration/migrate.py
+```
+
+### Advanced Options
+
+```bash
+# Migrate data for a specific site only
+python migration/migrate.py --site "YourSiteName"
+
+# Migrate data with a specific tenant
+python migration/migrate.py --tenant "YourTenantName"
+
+# Migrate data for a specific site and tenant
+python migration/migrate.py --site "YourSiteName" --tenant "YourTenantName"
+
+# Run only basic migration (no extended components)
+python migration/migrate.py --basic-only
+
+# Run only extended migration components
+python migration/migrate.py --extended-only
+
+# Skip setting up custom fields
+python migration/migrate.py --skip-custom-fields
+
+# Use custom configuration file
+python migration/migrate.py --config your_config.py
+```
+
+## Project Structure
+
+```
+racktables-to-netbox/
+├── migration/                   # Main migration package
+│   ├── __init__.py              # Package initialization
+│   ├── config.py                # Global configuration settings
+│   ├── custom_netbox.py         # Compatibility wrapper for pynetbox
+│   ├── db.py                    # Database connection and query helpers
+│   ├── devices.py               # Device creation and management
+│   ├── interfaces.py            # Interface creation and management
+│   ├── ips.py                   # IP and network related functions
+│   ├── migrate.py               # Main migration script
+│   ├── set_custom_fields.py     # Custom fields setup
+│   ├── sites.py                 # Site and rack related functions
+│   ├── utils.py                 # Utility functions
+│   ├── vlans.py                 # VLAN management functions
+│   ├── vms.py                   # Virtual machine handling
+│   └── extended/                # Extended functionality modules
+│       ├── __init__.py
+│       ├── available_subnets.py # Available subnet detection
+│       ├── files.py             # File attachment migration
+│       ├── ip_ranges.py         # IP range generation
+│       ├── load_balancer.py     # Load balancing data
+│       ├── monitoring.py        # Monitoring data
+│       ├── nat.py               # NAT mappings
+│       ├── patch_cables.py      # Patch cable migration
+│       └── services.py          # Virtual services migration
+├── scripts/                     # Helper scripts
+├── setup_dev.sh                 # Development environment setup
+├── requirements.txt             # Python dependencies
+└── setup.py                     # Package setup script
+```
+
+## Key Features
+
+### Site and Tenant Filtering
+
+Restrict migration to a specific site and/or tenant:
+
+```bash
+python migration/migrate.py --site "DataCenter1" --tenant "CustomerA"
+```
+
+This will:
+1. Only migrate objects associated with the specified site
+2. Associate all created objects with the specified tenant
+3. Create the tenant if it doesn't exist in NetBox
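+
+To confirm the tenant association after a run, you can query the NetBox REST
+API for the tenant by name, using the same token-based `curl` pattern as in
+the troubleshooting examples (adjust host, port, token, and tenant name):
+
+```bash
+# The "count" field in the response should be 1 if the tenant was created
+curl -s -H "Authorization: Token your-api-token" \
+  "http://localhost:8000/api/tenancy/tenants/?name=CustomerA"
+```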
+
+### Available Subnet Detection
+
+The tool automatically:
+1. Identifies gaps in IP address space
+2. Creates available prefixes in those gaps
+3. Tags them with "Available" status for easy filtering
+
+### IP Range Generation
+
+The tool can create IP ranges based on:
+1. Available subnets that it detects
+2. Gaps between allocated IP addresses
+3. Empty prefixes with no allocated IPs
+
+### Extended Data Migration
+
+1. **Patch Cables**: Migrates physical cable connections between devices
+2. **Files**: Transfers file attachments from Racktables
+3. **Virtual Services**: Migrates service configurations
+4. **NAT**: Preserves Network Address Translation relationships
+5. **Load Balancing**: Migrates load balancer configs
+6. **Monitoring**: Transfers monitoring system references
+
+## Troubleshooting
+
+- Check the `errors` log file for detailed error messages
+- Ensure MAX_PAGE_SIZE=0 is set in your NetBox configuration
+- Verify database connectivity and permissions
+- Make sure custom fields are properly created
+
+### Common Issues
+
+1. **Database Connection Issues**
+   - Verify credentials in `config.py`
+   - Check network connectivity to database server
+   - Ensure database port is accessible
+
+2. **NetBox API Connection Issues**
+   - Verify API token has appropriate permissions
+   - Check network connectivity to NetBox server
+   - Confirm API is enabled in NetBox settings
+
+3. **Memory or Performance Issues**
+   - Try running parts of the migration by adjusting flags in `config.py`
+   - Increase Python process memory limit
+   - Consider filtering by site with the `--site` parameter
+
+## License
+
+GNU General Public License v3.0
diff --git a/__init__.py b/__init__.py
new file mode 100644
index 0000000..d06ca99
--- /dev/null
+++ b/__init__.py
@@ -0,0 +1,5 @@
+"""
+Racktables to NetBox Migration Tool
+
+Package initialization for development mode.
+""" diff --git a/custom_fields.yml b/custom_fields.yml deleted file mode 100644 index 219fab4..0000000 --- a/custom_fields.yml +++ /dev/null @@ -1,351 +0,0 @@ -VLAN_Domain_ID: - type: text - description: ID for VLAN Domain - required: true - weight: 0 - on_objects: - - ipam.models.VLANGroup -Prefix_Name: - type: text - description: Name for prefix - required: false - weight: 0 - on_objects: - - ipam.models.Prefix -Device_Label: - type: text - description: Label for device - required: false - weight: 0 - on_objects: - - dcim.models.Device -VM_Asset_No: - type: text - description: Asset number for VMs - required: false - weight: 0 - on_objects: - - virtualization.models.VirtualMachine -VM_Label: - type: text - description: Label for VMs - required: false - weight: 0 - on_objects: - - virtualization.models.VirtualMachine -VM_Interface_Type: - type: text - label: Custom type for VM interfaces - description: Enter type for VM interface - required: true - weight: 0 - on_objects: - - virtualization.models.VMInterface -Device_Interface_Type: - type: text - label: Custom type for interfaces - description: Enter type for interface - required: true - weight: 0 - on_objects: - - dcim.models.Interface -IP_Type: - type: text - label: Type - description: Type of ip - required: false - weight: 0 - on_objects: - - ipam.models.IPAddress -IP_Name: - type: text - label: Name - description: Name of ip - required: false - weight: 0 - on_objects: - - ipam.models.IPAddress -Interface_Name: - type: text - label: Interface Name - description: Name of interface for this IP - required: false - weight: 0 - on_objects: - - ipam.models.IPAddress -OEM_SN_1: - type: text - required: false - weight: 0 - on_objects: - - dcim.models.Device -HW_type: - type: text - required: false - weight: 0 - on_objects: - - dcim.models.Device -FQDN: - type: text - required: false - weight: 0 - on_objects: - - dcim.models.Device -SW_type: - type: text - required: false - weight: 0 - on_objects: - - dcim.models.Device -SW_version: - type: text - required: false - weight: 0 - on_objects: - - dcim.models.Device -number_of_ports: - type: integer - required: false - weight: 0 - on_objects: - - dcim.models.Device -max_current_Ampers: - type: integer - required: false - weight: 0 - on_objects: - - dcim.models.Device -power_load_percents: - type: integer - required: false - weight: 0 - on_objects: - - dcim.models.Device -max_power_Watts: - type: integer - required: false - weight: 0 - on_objects: - - dcim.models.Device -contact_person: - type: text - required: false - weight: 0 - on_objects: - - dcim.models.Device -flash_memory_MB: - type: integer - required: false - weight: 0 - on_objects: - - dcim.models.Device -DRAM_MB: - type: integer - required: false - weight: 0 - on_objects: - - dcim.models.Device -CPU_MHz: - type: integer - required: false - weight: 0 - on_objects: - - dcim.models.Device -OEM_SN_2: - type: text - required: false - weight: 0 - on_objects: - - dcim.models.Device -Support_Contract_Expiration: - type: text - required: false - weight: 0 - on_objects: - - dcim.models.Device -HW_warranty_expiration: - type: text - required: false - weight: 0 - on_objects: - - dcim.models.Device -SW_warranty_expiration: - type: text - required: false - weight: 0 - on_objects: - - dcim.models.Device -UUID: - type: text - required: false - weight: 0 - on_objects: - - dcim.models.Device -Hypervisor: - type: text - required: false - weight: 0 - on_objects: - - dcim.models.Device -Height_units: - type: integer - required: false - weight: 0 - 
on_objects: - - dcim.models.Device -Slot_number: - type: text - required: false - weight: 0 - on_objects: - - dcim.models.Device -Sort_order: - type: integer - required: false - weight: 0 - on_objects: - - dcim.models.Device -Mgmt_type: - type: text - required: false - weight: 0 - on_objects: - - dcim.models.Device -base_MAC_address: - type: text - required: false - weight: 0 - on_objects: - - dcim.models.Device -RAM_MB: - type: integer - required: false - weight: 0 - on_objects: - - dcim.models.Device -Processor: - type: text - required: false - weight: 0 - on_objects: - - dcim.models.Device -Total_Disk_GB: - type: integer - required: false - weight: 0 - on_objects: - - dcim.models.Device -Processor_Count: - type: integer - required: false - weight: 0 - on_objects: - - dcim.models.Device -Service_Tag: - type: text - required: false - weight: 0 - on_objects: - - dcim.models.Device -PDU: - type: text - required: false - weight: 0 - on_objects: - - dcim.models.Device -Circuit: - type: text - required: false - weight: 0 - on_objects: - - dcim.models.Device -Contract_Number: - type: text - required: false - weight: 0 - on_objects: - - dcim.models.Device -DSP_Slot_1_Serial: - type: text - required: false - weight: 0 - on_objects: - - dcim.models.Device -DSP_Slot_2_Serial: - type: text - required: false - weight: 0 - on_objects: - - dcim.models.Device -DSP_Slot_3_Serial: - type: text - required: false - weight: 0 - on_objects: - - dcim.models.Device -DSP_Slot_4_Serial: - type: text - required: false - weight: 0 - on_objects: - - dcim.models.Device -Chassis_Serial: - type: text - required: false - weight: 0 - on_objects: - - dcim.models.Device -SBC_PO: - type: text - required: false - weight: 0 - on_objects: - - dcim.models.Device -Chassis_Model: - type: text - required: false - weight: 0 - on_objects: - - dcim.models.Device -Application_SW_Version: - type: text - required: false - weight: 0 - on_objects: - - dcim.models.Device -RHVM_URL: - type: text - required: false - weight: 0 - on_objects: - - dcim.models.Device -TIPC_NETID: - type: text - required: false - weight: 0 - on_objects: - - dcim.models.Device -CE_IP_Active: - type: text - required: false - weight: 0 - on_objects: - - dcim.models.Device -CE_IP_Standby: - type: text - required: false - weight: 0 - on_objects: - - dcim.models.Device -GPU_Serial_Number_1: - type: text - required: false - weight: 0 - on_objects: - - dcim.models.Device -GPU_Serial_Number_2: - type: text - required: false - weight: 0 - on_objects: - - dcim.models.Device diff --git a/migrate.py b/migrate.py deleted file mode 100644 index bf3cc21..0000000 --- a/migrate.py +++ /dev/null @@ -1,1326 +0,0 @@ -from netbox import NetBox -import pymysql -from slugify import slugify -import pickle -import os -import time -import ipaddress -import random -import threading - -# Messy script to transfer Racktables SQL to NetBox -# Set "MAX_PAGE_SIZE=0" in "env/netbox.env" -# Add the printed custom_fields to initialization/custom_fields.yaml for all the fields from Racktables - -# Set all the bools to True and run once through for correct result, they were for debugging problems. 
Some info is cached with pickle, though - -CREATE_VLAN_GROUPS = True -CREATE_VLANS = True -# This also creates the clusters, which are needed for all devices -CREATE_MOUNTED_VMS = True -CREATE_UNMOUNTED_VMS = True -CREATE_RACKED_DEVICES = True -# Non racked devices depend on racked devices being created first -CREATE_NON_RACKED_DEVICES = True -# Interfaces rely on devices being created -CREATE_INTERFACES = True -# Interface connections depend on all interfaces created -CREATE_INTERFACE_CONNECTIONS = True -CREATE_IPV4 = True -CREATE_IPV6 = True -# IP space depends on interfaces being created -CREATE_IP_NETWORKS = True -CREATE_IP_ALLOCATED = True -CREATE_IP_NOT_ALLOCATED = True - - -# The length to exceed for a site to be considered a location (like an address) not a site -SITE_NAME_LENGTH_THRESHOLD = 10 - -# Each step may cache some data relevant to the next step. This will stop that from happening in the pickle load function -STORE_DATA = False - -rt_host = '127.0.0.1' -rt_port = 3306 -rt_user = 'root' -rt_db = 'test1' -connection = pymysql.connect(host=rt_host,user=rt_user,db=rt_db, port=rt_port) - -nb_host = '10.248.48.4' -nb_port = 8001 -nb_token = '0123456789abcdef0123456789abcdef01234567' - -netbox = NetBox(host=nb_host, port=nb_port, use_ssl=False, auth_token=nb_token) - -# This might not be all. Used for looking up non-racked items. Key names are for reference -objtype_id_names = { -1: "BlackBox", -2: "PDU", -3: "Shelf", -4: "Server", -5: "DiskArray", -7: "Router", -8: "Network Switch", -9: "Patch Panel", -10: "CableOrganizer", -11: "spacer", -12: "UPS", -13: "Modem", -15: "console", -447: "multiplexer", -798: "Network Security", -1502: "Server Chassis", -1398: "Power supply", -1503: "Network chassis", -1644: "serial console server", -1787: "Management interface", -50003: "Circuit", -50013: "SAN", -50044: "SBC", -50064: "GSX", -50065: "EMS", -50066: "PSX", -50067: "SGX", -50083: "SBC SWE", -# Don't create these with the unracked devices -# 1504: "VM", -# 1505: "VM Cluster", -# 1560: "Rack", -# 1561: "Row", -# 1562: "Location", -} - -# Manufacturer strings that exist in RT. 
Pulled out of "HW Type" to set as the manufacturer -racktables_manufacturers = {'Generic', 'Dell', 'MicroSoft', 'F5', 'ExtremeXOS', 'Netapp', 'Open Solaris', 'EMC', 'SlackWare', 'RH', 'FreeBSD', 'Edge-Core', 'SMC', 'Force10', 'Cyclades', 'IBM', 'Linksys', 'IronWare', 'Red', 'Promise', 'Extreme', 'QLogic', 'Marvell', 'SonicWall', 'Foundry', 'Juniper', 'APC', 'Raritan', 'Xen', 'NEC', 'Palo', 'OpenSUSE', 'Sun', 'noname/unknown', 'NetApp', 'VMware', 'Moxa', 'Tainet', 'SGI', 'Mellanox', 'Vyatta', 'Raisecom', 'Gentoo', 'Brocade', 'Enterasys', 'Dell/EMC', 'VMWare', 'Infortrend', 'OpenGear', 'Arista', 'Lantronix', 'Huawei', 'Avocent', 'SUSE', 'ALT_Linux', 'OpenBSD', 'Nortel', 'Univention', 'JunOS', 'MikroTik', 'NetBSD', 'Cronyx', 'Aten', 'Intel', 'PROXMOX', 'Ubuntu', 'Motorola', 'SciLin', 'Fujitsu', 'Fiberstore', '3Com', 'D-Link', 'Allied', 'Fortigate', 'Debian', 'HP', 'NETGEAR', 'Pica8', 'TPLink', 'Fortinet', 'RAD', 'NS-OS', 'Cisco', 'Alcatel-Lucent', 'CentOS', 'Hitachi'} - -# Pairs of parent objtype_id, then child objtype_id -parent_child_objtype_id_pairs = ( - (1502, 4),# Server inside a Server Chassis - (9, 9),# Patch Panel inside a Patch Panel -) - -# Some interfaces might have a name including "Eth", then have an IP with name "Ethernet" -# This dict will try to eliminate the difference to clean up the number of "Virtual" and "Other" type interfaces -# Convert the short name into the long name -# These only apply to objects of type "Router", 7, and "Network switch", 8 -interface_name_mappings = { - "Eth": "Ethernet", - "eth": "Ethernet", - "ethernet": "Ethernet", - - "Po": "Port-Channel", - "Port-channel": "Port-Channel", - - "BE": "Bundle-Ether", - "Lo": "Loopback", - "Loop": "Loopback", - "Vl": "VLAN", - "Vlan": "VLAN", - "Mg": "MgmtEth", - "Se": "Serial", - "Gi": "GigabitEthernet", - "Te": "TenGigE", - "Tw": "TwentyFiveGigE", - "Fo": "FortyGigE", - "Hu": "HundredGigE", -} - -parent_objtype_ids = [pair[0] for pair in parent_child_objtype_id_pairs] - -global_names = set() -global_tags = set() -global_devices = list() -global_device_roles = list() -global_manufacturers = list() -global_device_types = list() - -# When looking at all physical devices, store the SQL object_id and the to use in the Port table later -global_physical_object_ids = set() - -# Get the same info for non physical devices like VMs and Servers mounted in chassises to create their ports and linterfaces -# This is filled in during create_non_racked_devices function -global_non_physical_object_ids = set() - -# asset_no from racktables. 
Used to find the duplicates and add -1 -asset_tags = set() - -# object_id to "Chassis Serial" number if it exists -serials = dict() - -# Used for separating identical objects in different spots in the same rack -# Have not ca;lculated overflow yet, but 32-126 is a lot for one rack of 45/2 slots for items -first_ascii_character = " " - -# Turn the attr_id from table "Attribute" to a slugified string name for example 3 -> "FQDN" -slugified_attributes = dict() - -# Turn the uint_value for attr_id 2 in table "AttributeValue" into a string from the table "Dictionary" -hw_types = dict() - -def error_log(string): - with open("errors", "a") as error_file: - error_file.write(string + "\n") - -def pickleLoad(filename, default): - if os.path.exists(filename): - file = open(filename, 'rb') - data = pickle.load(file) - file.close() - return data - return default - -def pickleDump(filename, data): - if STORE_DATA: - file = open(filename, 'wb') - pickle.dump(data, file) - file.close() - -def getRackHeight(cursor, rackId): - cursor.execute("SELECT uint_value FROM AttributeValue WHERE object_id={} AND attr_id=27;".format(rackId)) - return cursor.fetchall()[0][0] - -# return the "HW Type" for the given racktables object -def get_hw_type(racktables_object_id): - global hw_types - cursor.execute("SELECT uint_value FROM AttributeValue WHERE object_id={} AND attr_id=2;".format(racktables_object_id)) - uint = cursor.fetchall() - return hw_types[uint[0][0]] if uint else None - -def getRowsAtSite(cursor, siteId): - rows = [] - cursor.execute("SELECT child_entity_id FROM EntityLink WHERE parent_entity_type='location' AND parent_entity_id=%s AND child_entity_type='row'",siteId) - rowIds = cursor.fetchall() - for rowId in rowIds: - cursor.execute("SELECT id,name,label,asset_no,comment FROM Object WHERE id=%s",rowId[0]) - rows += cursor.fetchall() - return rows - -def getRacksAtRow(cursor, rowId): - racks = [] - cursor.execute("SELECT child_entity_id FROM EntityLink WHERE parent_entity_type='row' AND parent_entity_id=%s AND child_entity_type='rack'",rowId) - rackIds = cursor.fetchall() - for rackId in rackIds: - cursor.execute("SELECT id,name,label,asset_no,comment FROM Object WHERE id=%s", rackId[0]) - racks += cursor.fetchall() - return racks - -def getAtomsAtRack(cursor, rackId): - cursor.execute("SELECT rack_id,unit_no,atom,state,object_id FROM RackSpace WHERE rack_id={};".format(rackId)) - return cursor.fetchall() - -def getTags(cursor, entity_realm, entity_id): - tags = [] - cursor.execute("SELECT tag_id FROM TagStorage WHERE entity_id={} AND entity_realm=\"{}\";".format(entity_id, entity_realm)) - for tag_id in [x[0] for x in cursor.fetchall()]: - cursor.execute("SELECT tag FROM TagTree WHERE id={};".format(tag_id)) - tags += cursor.fetchall() - return [{'name': tag[0]} for tag in tags] - -# Return a string -def getDeviceType(cursor, objtype_id): - cursor.execute("SELECT dict_key,dict_value FROM Dictionary WHERE dict_key={};".format(objtype_id)) - return cursor.fetchall()[0][1] - -def get_manufacturer_role_type(cursor, racktables_object_id, objtype_id, height, is_full_depth): - - global racktables_manufacturers - - original_device_type = getDeviceType(cursor, objtype_id) - manufacturer = original_device_type - - # Add the height to the type model, as well as the binary full_depth or not - hw_type = get_hw_type(racktables_object_id) - if hw_type: - # print("HW:", hw_type) - device_type = hw_type - - for racktables_manufacturer in racktables_manufacturers: - if device_type.startswith(racktables_manufacturer) or 
device_type.startswith(racktables_manufacturer+" "): - device_type = device_type.replace(racktables_manufacturer," ", 1).lstrip(" ") - manufacturer = racktables_manufacturer - else: - device_type = original_device_type - - device_type_model = "{}-{}U{}".format(device_type, height, "-full" if is_full_depth else "") - - return manufacturer, original_device_type, device_type_model - - -def create_global_tags(tags): - global global_tags - for tag in tags: - if tag not in global_tags: - try: - netbox.extras.create_tag(tag, slugify(tag)) - except: - print(tag) - global_tags.add(tag) - -def createDeviceAtLocationInRack(device_name, face, start_height, device_role, manufacturer, device_type_model, site_name, rack_name, asset_no, racktables_device_id): - global global_devices - global global_names - global global_device_roles - global global_manufacturers - global global_device_types - global asset_tags - - name_at_location = None - id_at_location = None - - for device in global_devices: - if face == device['face']['value'] and start_height == device['position'] and device_role == device['device_role']['name'] and manufacturer == device['device_type']['manufacturer']['name'] and device_type_model == device['device_type']['model'] and site_name == device['site']['name'] and rack_name == device['rack']['name']: - name_at_location = device['name'] - id_at_location = device['id'] - break - - if name_at_location == None: - # print(device_name, "being created at", rack_name, start_height, face) - name_at_location = device_name - - if device_name in global_names: - - name_counter = 1 - while True: - counter_name = device_name + ".{}".format(name_counter) - if counter_name not in global_names: - - name_at_location = counter_name - break - - else: - name_counter += 1 - - # Check if the device is in a VM cluster and if so add it to that when creating it in Netbox - device_in_vm_cluster, device_vm_cluster_name, parent_entity_ids = device_is_in_cluster(racktables_device_id) - custom_fields = get_custom_fields(cursor, racktables_device_id) - serial = serials[racktables_device_id] if racktables_device_id in serials else "" - - asset_no = asset_no.strip() if asset_no else None - if asset_no and asset_no in asset_tags: - asset_no = asset_no+ "-1" - - device = netbox.dcim.create_device(custom_fields=custom_fields,face=face,cluster={"name":device_vm_cluster_name} if device_in_vm_cluster else None,asset_tag=asset_no,serial=serial,position=start_height,name=name_at_location,device_role=device_role,manufacturer={"name":manufacturer},device_type=device_type_model,site_name=site_name,rack={"name":rack_name}) - asset_tags.add(asset_no) - - id_at_location = device['id'] - - global_names.add(name_at_location) - global_devices.append(device) - - else: - print(name_at_location, "exists at location") - - return name_at_location, id_at_location - -# Pass the list of atoms into this and have the devices built to the appropriate size -def createObjectsInRackFromAtoms(cursor, atoms, rack_name, rack_id): - - debug_splits = False - - global global_physical_object_ids - - # Put positions into dict based on Id - atoms_dict = {} - for atom in atoms: - key = str(atom[4]) - if key not in atoms_dict: - atoms_dict[key] = [atom] - else: - atoms_dict[key].append(atom) - - # Some of the same devices might exist, but not be attached: - # For example: [(1373, 18, 'rear', 'T', 1071), (1373, 19, 'rear', 'T', 1071), (1373, 35, 'front', 'T', 1071), (1373, 36, 'front', 'T', 1071)] - # Should be two separate items because they do not touch - # 
Iterate over the list and separate it at points where the objects do not meet. - # Because the original was dict, add a dummy value to the end of the Id key and disregard that for gettign the real id - - added_atom_objects = {} - separated_Ids = False - - for Id in atoms_dict: - current_counter = 0 - old_counter = 0 - max_counter = len(atoms_dict[Id]) - 1 - current_hash_addition = first_ascii_character # The value to add onto the Id. Make sure this stays as 1 character and increment as ASCII - current_atom = atoms_dict[Id][0][2] - current_height = atoms_dict[Id][0][1] - # When separating the Ids, make sure to remove the original Id from the atoms_dict - internal_separated_Ids = False - - # There could be a single item at the end of a list like: - # [(1379, 5, 'front', 'T', 1070), (1379, 6, 'front', 'T', 1070), (1379, 9, 'front', 'T', 1070), (1379, 10, 'front', 'T', 1070), (1379, 35, 'front', 'T', 1070)] - # Where the final list adds things before, but not itself, so add everything after the last_added - - # Iterate over a copy of atoms_dict[Id] list of atoms so that the original lsit can have items removed to use 0 as starting place and not keep track of it - for atom in atoms_dict[Id].copy(): - - # Cases without overlap, where a split should be made - # [1] [1] [ ] - # [ ] [1] [1] # Disregard this case because it doesn't appear to come up and is too much to calculate horizantal or vertical - # [ ] [ ] [ ] - - # [1] [1] [ ] [1] [ ] [ ] - # [ ] [ ] [ ] or [ ] [ ] [1] # Check for separation of heights here - # [ ] [1] [1] [ ] [ ] [ ] - - if debug_splits: - print(atom[1], current_height) - - # Look for device on a height above the last device - # Once found a split based on the last - if atom[1] > current_height + 1 and current_counter > 0: # or (internal_separated_Ids == True and current_counter == max_counter): - # Create separate Id for all the atoms in this list before the current one - - if debug_splits: - print(atoms_dict[Id], current_counter, old_counter) - - # Resize the original atoms_dict to remove the first atoms - added_atom_objects[Id + current_hash_addition] = atoms_dict[Id][old_counter:current_counter] - - if debug_splits: - print("after", added_atom_objects[Id + current_hash_addition]) - print(current_counter == max_counter) - - - # Inc hash addition. NO CHECK FOR OVERFLOW, although 32 to 126 should be good for one rack of ids - current_hash_addition = str(chr(ord(current_hash_addition) + 1)) - - internal_separated_Ids = True - separated_Ids = True - old_counter = current_counter - - #Calculate the current position and determine if it touches the last position in the ordered list. 
- current_atom = atom[2] - current_height = atom[1] - current_counter += 1 - - # Add the last few items - if internal_separated_Ids == True: - added_atom_objects[Id + current_hash_addition] = atoms_dict[Id][old_counter:] - # print(added_atom_objects[Id + current_hash_addition]) - - # Add all the key,value pairs from added_atom_objects to the original atoms_dict and then remove the original Ids - if separated_Ids == True: - - # Add the new Ids atoms lists with the hash addition to the original atoms_dict - for Id_and_addition in added_atom_objects: - atoms_dict[Id_and_addition] = added_atom_objects[Id_and_addition] - - # Remove the original ids from atoms_dict since the value list should now be blank - for Id_and_addition in added_atom_objects: - - original_Id = Id_and_addition[:-1] - if original_Id in atoms_dict: - atoms_dict.pop(original_Id) - - if debug_splits: - print(added_atom_objects) - print("separated", atoms_dict) - - # Any other Ids that did not get an added character now get first_ascii_character added to them - remove_original_Ids = [] - add_new_Ids = {} - for Id in atoms_dict: - if Id not in added_atom_objects: - add_new_Ids[Id + first_ascii_character] = atoms_dict[Id] - remove_original_Ids.append(Id) - - for Id in add_new_Ids: - atoms_dict[Id] = add_new_Ids[Id] - - # Remove the original Ids without the hash addition to the atoms_dict - for Id in remove_original_Ids: - atoms_dict.pop(Id) - - # Start to calculate sizes and add devices - for Id in atoms_dict: - - # Cut off the extra character added to distinguish the same device in multiple locations in a rack - - start_height = min([atom[1] for atom in atoms_dict[Id]]) - height = max([atom[1] for atom in atoms_dict[Id]]) - start_height + 1 - - # Should this be == str or startswith if there are multiple reservation splits? 
- if Id == str(None) + first_ascii_character: - try: - units = list(range(start_height, start_height+height)) - - print("Reservation") - netbox.dcim.create_reservation(rack_num=rack_id,units=units,description=".",user='admin') - - except Exception as e: - print(str(e)) - - continue - - real_id = int(Id[:-1]) - - cursor.execute("SELECT id,name,label,objtype_id,has_problems,comment,asset_no FROM Object WHERE id={};".format(real_id)) - info = cursor.fetchall()[0] - objtype_id = info[3] - device_name = info[1] - asset_no = info[-1] - - device_tags = getTags(cursor, "object", real_id) - - # Whether front only, rear only, or both - if 'rear' not in [atom[2] for atom in atoms_dict[Id]]: - face = 'front' - is_full_depth = False - elif 'front' not in [atom[2] for atom in atoms_dict[Id]]: - face = 'rear' - is_full_depth = False - else: - # face = 'both' - # There is no 'both' in netbox, so use 'front' instead - face = 'front' - is_full_depth = True - - manufacturer, device_role, device_type_model = get_manufacturer_role_type(cursor, real_id, objtype_id, height, is_full_depth) - - if device_role not in global_device_roles: - netbox.dcim.create_device_role(device_role,"ffffff",slugify(device_role)) - global_device_roles.add(device_role) - - if manufacturer not in global_manufacturers: - netbox.dcim.create_manufacturer(manufacturer, slugify(manufacturer)) - global_manufacturers.add(manufacturer) - - # Create a device type that takes into account the height - # If the device is a "Server Chassis", objtype_id 1502, create it as a parent device to assign children to in device bays - if objtype_id in parent_objtype_ids: - device_type_model += "-parent" - - # Cannot easily check device_types, so must use a try: except: here - if device_type_model not in global_device_types: - netbox.dcim.create_device_type(model=device_type_model,manufacturer={"name":manufacturer},slug=slugify(device_type_model),u_height=height,is_full_depth=is_full_depth,tags=device_tags,subdevice_role="parent" if objtype_id in parent_objtype_ids else "") - global_device_types.add(device_type_model) - - # Naming check done first, then check for existance in specific slot since lots of a racks have many devices of the same name, which is not allowed in netbox, even accross racks, sites, etc - - # Try to create a device at specific location. 
- # Function looks for the location to be open, then tries different names since device names must be unique - device_name, device_id = createDeviceAtLocationInRack(device_name=device_name, face=face, start_height=start_height, device_role=device_role, manufacturer=manufacturer, device_type_model=device_type_model,site_name= site_name,rack_name=rack_name, asset_no=asset_no, racktables_device_id=real_id) - - # Store all the device object_ids and names in the rack to later create the interfaces and ports - global_physical_object_ids.add((device_name, info[0], device_id, objtype_id)) - -# Necessary to split get_interfaces() calls because the current 50,000 interfaces fails to ever return -def get_interfaces(): - - interfaces = [] - interfaces_file = "interfaces" - - limit = 500 - offset = 0 - - # Uncomment this if created interfaces successfully previously and have their data in the file - # or get_interfaces_custom was not added (likely) and you are only running the script once without error - return pickleLoad(interfaces_file, []) - - while True: - # In netbox-python dcim.py I defined this as: Some issue with setting limit and offset made it necessary - # def get_interfaces_custom(self, limit, offset, **kwargs): - # return self.netbox_con.get('/dcim/interfaces', limit=limit, offset=offset, **kwargs) - ret = netbox.dcim.get_interfaces_custom(limit=limit, offset=offset) - if ret: - interfaces.extend(ret) - offset += limit - print("Added {} interfaces, total {}".format(limit, len(interfaces))) - else: - pickleDump(interfaces_file, interfaces) - return interfaces - -def device_is_in_cluster(device_id): - cursor.execute("SELECT parent_entity_id FROM EntityLink WHERE parent_entity_type=\"object\" AND child_entity_id={};".format(device_id)) - parent_entity_ids = [parent_entity_id[0] for parent_entity_id in cursor.fetchall()] - - for parent_entity_id in parent_entity_ids: - cursor.execute("SELECT objtype_id,name FROM Object WHERE id={};".format(parent_entity_id)) - parent_objtype_id,parent_name = cursor.fetchall()[0] - - if parent_objtype_id == 1505: - return True, parent_name, parent_entity_ids - - return False, None, parent_entity_ids - -def get_custom_fields(cursor, racktables_object_id, initial_dict=None): - - global slugified_attributes - custom_fields = initial_dict if initial_dict else dict() - - cursor.execute("SELECT attr_id,string_value,uint_value FROM AttributeValue WHERE object_id={};".format(racktables_object_id)) - attributes = cursor.fetchall() - - for attr_id,string_value,uint_value in attributes: - - # Skip the HW Type because this is used for the type and height and "Serial Tag" - if attr_id == 2 or attr_id == 27 or attr_id == 10014: - continue - - custom_fields[slugified_attributes[attr_id]] = string_value if string_value else uint_value - - return custom_fields - -# Create the device in this list and return those that could not be created because the parent did not exist yet -def create_parent_child_devices(cursor, data, objtype_id): - - global global_non_physical_object_ids - - existing_site_names = set(site['name'] for site in netbox.dcim.get_sites()) - existing_device_roles = set(device_role['name'] for device_role in netbox.dcim.get_device_roles()) - existing_manufacturers = set(manufacturer['name'] for manufacturer in netbox.dcim.get_manufacturers()) - existing_device_types = set(device_type['model'] for device_type in netbox.dcim.get_device_types()) - existing_device_names = set(device['name'].strip() for device in netbox.dcim.get_devices() if device['name']) - - # Map 
netbox parent device name to the names of its device bays - existing_device_bays = dict() - - for device_bay in netbox.dcim.get_device_bays(): - parent_name = device_bay['device']['name'] - - if parent_name not in existing_device_bays: - existing_device_bays[parent_name] = set() - - existing_device_bays[parent_name].add(device_bay['name']) - - not_created_parents = [] - for racktables_device_id,object_name,label,asset_no,comment in data: - - # Used for a child device whose parent isn't yet created and needs to be skipped - not_created_parent = False - - # Some names in racktables have trailing or leading spaces - if not object_name: - print("No name for", racktables_device_id,object_name,label,asset_no,comment) - continue - - object_name = object_name.strip() - if object_name not in existing_device_names: - # Create a "None" site, device type, role, manufacturer and finally device for this loose object - site_name = "None" - - manufacturer, device_role, device_type_model = get_manufacturer_role_type(cursor, racktables_device_id, objtype_id, 0, False) - - # print("Starting {}".format(object_name)) - - if site_name not in existing_site_names: - netbox.dcim.create_site(site_name, slugify(site_name)) - existing_site_names.add(site_name) - print("Added non rack site", site_name) - - if device_role not in existing_device_roles: - netbox.dcim.create_device_role(device_role,"ffffff",slugify(device_role)) - existing_device_roles.add(device_role) - print("Added non rack device role", device_role) - - if manufacturer not in existing_manufacturers: - netbox.dcim.create_manufacturer(manufacturer, slugify(manufacturer)) - existing_manufacturers.add(manufacturer) - print("Added non rack manufacturer", manufacturer) - - is_child = False - is_child_parent_id = None - is_child_parent_name = None - - is_parent = False - - # Check if the device is in a VM cluster and if so add it to that when creating it in Netbox - device_in_vm_cluster, device_vm_cluster_name, parent_entity_ids = device_is_in_cluster(racktables_device_id) - - # Check if it is a child device that needs to be created with a child device type then created marked as mounted inside a parent device's device bay. 
- # The parent device might not exist yet, in which case it is skipped and retried after - for parent_from_pairs_objtype_id, child_from_pairs_objtype_id in parent_child_objtype_id_pairs: - - # Server that might reside in a server chassis - if objtype_id == child_from_pairs_objtype_id: - - # Got a parent id, so check that it is a Server Chassis and if so, create the device type with child and later add a device bay to that parent object with this newly created child Server object - for parent_entity_id in parent_entity_ids: - - cursor.execute("SELECT objtype_id,name FROM Object WHERE id={};".format(parent_entity_id)) - parent_objtype_id,parent_name = cursor.fetchall()[0] - - if parent_objtype_id == parent_from_pairs_objtype_id: - - parent_name = parent_name.strip() - is_child_parent_id = netbox.dcim.get_devices(name=parent_name) - - # The parent is not yet created, so break creating this device and come back later - if not is_child_parent_id: - not_created_parents.append((racktables_device_id,object_name,label,asset_no,comment)) - not_created_parent = True - break - else: - is_child_parent_id = is_child_parent_id[0]['id'] - - is_child_parent_name = parent_name - is_child = True - # print("{} child".format(object_name)) - break - - if is_child: - break - - # Could be a loose patch panel that has child devices - if objtype_id == parent_from_pairs_objtype_id and not not_created_parent: - cursor.execute("SELECT child_entity_id FROM EntityLink WHERE parent_entity_type=\"object\" AND parent_entity_id={};".format(racktables_device_id)) - child_entity_ids = cursor.fetchall() - - # print(child_entity_ids) - - for child_entity_id in [x[0] for x in child_entity_ids]: - cursor.execute("SELECT objtype_id,name FROM Object WHERE id={};".format(child_entity_id)) - child_objtype_id,child_name = cursor.fetchall()[0] - - # print(child_objtype_id, child_name) - - if child_objtype_id == child_from_pairs_objtype_id: - is_parent = True - # print("{} parent".format(object_name)) - break - - if is_parent: - break - - # Continue to next device, skipping the child with no parent yet - if not_created_parent: - continue - - subdevice_role = "" - - if is_child: - device_type_model += "-child" - subdevice_role = "child" - - if is_parent: - device_type_model += "-parent" - subdevice_role = "parent" - - if device_type_model not in existing_device_types: - - netbox.dcim.create_device_type(model=device_type_model,slug=slugify(device_type_model), manufacturer={"name":manufacturer},u_height=0,subdevice_role=subdevice_role) - existing_device_types.add(device_type_model) - - device_tags = getTags(cursor = cursor, entity_realm="object", entity_id = racktables_device_id) - custom_fields = get_custom_fields(cursor, racktables_device_id, {"Device_Label": label}) - serial = serials[racktables_device_id] if racktables_device_id in serials else "" - - asset_no = asset_no.strip() if asset_no else None - if asset_no and asset_no in asset_tags: - asset_no = asset_no+ "-1" - - # print("Creating device \"{}\"".format(object_name), device_type_model, device_role, manufacturer, site_name, asset_no) - added_device = netbox.dcim.create_device(name=object_name,cluster={"name": device_vm_cluster_name} if device_in_vm_cluster else None,asset_tag=asset_no, serial=serial,custom_fields=custom_fields, device_type=device_type_model, device_role=device_role, site_name=site_name,comment=comment[:200] if comment else "",tags=device_tags) - asset_tags.add(asset_no) - - # Later used for creating interfaces - global_non_physical_object_ids.add((object_name, 
racktables_device_id, added_device['id'], objtype_id)) - - # If device was a child device mounted inside a physically mounted parent device, then create a device bay relating to the parent device filled with the just created item - # Only one device can be assigned to each bay, so find the first open device bay name for the parent device, then use try and except to add the added device to it, although it should not fail since the child device was just created above - if is_child: - - # Check that this parent object currently has any device bays in it, - if is_child_parent_name in existing_device_bays: - new_bay_name = "bay-" + str(max([int(device_bay_name[len("bay-"):]) for device_bay_name in existing_device_bays[is_child_parent_name]]) + 1) - else: - existing_device_bays[is_child_parent_name] = set() - new_bay_name = "bay-1" - - # print(new_bay_name, is_child_parent_name) - existing_device_bays[is_child_parent_name].add(new_bay_name) - - netbox.dcim.create_device_bay(new_bay_name, device_id=is_child_parent_id, installed_device_id=added_device['id']) - - return not_created_parents - -def change_interface_name(interface_name, objtype_id): - interface_name = interface_name.strip() - - global interface_name_mappings - - if objtype_id in (7, 8): - for prefix in interface_name_mappings: - # Make sure the prefix is followed by a number so Etherent doesn't become Etherneternet - if interface_name.startswith(prefix) and len(interface_name) > len(prefix) and interface_name[len(prefix)] in "0123456789- ": - new_interface_name = interface_name.replace(prefix, interface_name_mappings[prefix], 1) - - # with open("prefixes", "a") as file: - # file.write("{} => {}\n".format(interface_name, new_interface_name)) - - interface_name = new_interface_name - - return interface_name - -with connection.cursor() as cursor: - - # For the HW Type field: use this as the base name for the device type - - cursor.execute("SELECT object_id,string_value FROM AttributeValue WHERE attr_id=10014") - for object_id,string_value in cursor.fetchall(): - serials[object_id] = string_value if string_value else "" - - # Turn the uint_value for attr_id 2 in table "AttributeValue" into a string from the table "Dictionary" - cursor.execute("SELECT dict_key,dict_value FROM Dictionary") - for dict_key,dict_value in cursor.fetchall(): - hw_types[dict_key] = dict_value.strip("[]").split("|")[0].strip().replace("%"," ") - - # Map the racktables id to the name to add to custom fields later - cursor.execute("SELECT id,type,name FROM Attribute") - yellow_attributes = cursor.fetchall() - for Id,Type,name in yellow_attributes: - slugified_attributes[Id] = name.replace(" ","_").replace("#","").replace(",","").replace("/","").replace(".","").strip("_") - -# print("""{}: -# type: {} -# required: false -# weight: 0 -# on_objects: -# - dcim.models.Device""".format(slugified_attributes[Id], {"string": "text", "uint":"integer", "date":"text", "float":"integer","dict":"text"}[Type])) - -# print("\n\nPaste that in the intializers/custom_fields.yml file for this program to work!") - print("Make sure to also set the page limit to 0 in the conf.env file") - - # Create all the tags - global_tags = set(tag['name'] for tag in netbox.extras.get_tags()) - - IPV4_TAG = "IPv4" - IPV6_TAG = "IPv6" - - create_global_tags((IPV6_TAG, IPV4_TAG)) - cursor.execute("SELECT tag FROM TagTree;") - create_global_tags(tag[0] for tag in cursor.fetchall()) - - print("Created tags") - - - # Map the vlan id domain to the name - vlan_domain_id_names = dict() - - 
existing_vlan_groups = set() - - for vlan_group in netbox.ipam.get_vlan_groups(): - existing_vlan_groups.add(vlan_group['name']) - - if CREATE_VLAN_GROUPS: - print("Creating VLAN Groups") - cursor.execute("SELECT id,description FROM VLANDomain") - vlans_domains = cursor.fetchall() - for Id, description in vlans_domains: - - vlan_domain_id_names[Id] = description - - if description not in existing_vlan_groups: - netbox.ipam.create_vlan_group(name=description, slug=slugify(description), custom_fields= {"VLAN_Domain_ID":Id}) - existing_vlan_groups.add(description) - - # Map the racktables network id to the vlan group and vlan names - network_id_group_name_id = pickleLoad('network_id_group_name_id', dict()) - - if CREATE_VLANS: - print("Creating VLANs") - - vlans_for_group = dict() - for IP in ("4", "6"): - cursor.execute("SELECT domain_id,vlan_id,ipv{}net_id FROM VLANIPv{}".format(IP, IP)) - vlans = cursor.fetchall() - for domain_id,vlan_id,net_id in vlans: - - cursor.execute("SELECT vlan_descr FROM VLANDescription WHERE domain_id={} AND vlan_id={}".format(domain_id,vlan_id)) - vlan_name = cursor.fetchall()[0][0] - - if not vlan_name: - continue - - vlan_group_name = vlan_domain_id_names[domain_id] - if vlan_group_name not in vlans_for_group: - vlans_for_group[vlan_group_name] = set() - - name = vlan_name - # Need to get a unique name for the vlan_name if it is already in this group - if name in vlans_for_group[vlan_group_name]: - counter = 1 - while True: - name = vlan_name+"-"+str(counter) - if name not in vlans_for_group[vlan_group_name]: - break - else: - counter += 1 - - - # print(vlan_group_name, vlan_id, name) - try: - created_vlan = netbox.ipam.create_vlan(group={"name":vlan_group_name},vid=vlan_id,vlan_name=name) - network_id_group_name_id[net_id] = (vlan_group_name, vlan_name, created_vlan['id']) - # print("created", vlan_group_name,vlan_id,name) - - except: - print(vlan_group_name,vlan_id,name) - print("Something went wrong here\n\n") - - vlans_for_group[vlan_group_name].add(name) - - pickleDump('network_id_group_name_id', network_id_group_name_id) - - - print("About to create Clusters and VMs") - if CREATE_MOUNTED_VMS: - # Create VM Clusters and the VMs that exist in them - existing_cluster_types = set(cluster_type['name'] for cluster_type in netbox.virtualization.get_cluster_types()) - existing_cluster_names = set(cluster['name'] for cluster in netbox.virtualization.get_clusters()) - existing_virtual_machines = set(virtual_machine['name'] for virtual_machine in netbox.virtualization.get_virtual_machines()) - - # print("Got {} existing virtual machines".format(len(existing_virtual_machines))) - - vm_counter = 0 - cursor.execute("SELECT id,name,asset_no,label FROM Object WHERE objtype_id=1505;") - clusters = cursor.fetchall() - for Id, cluster_name, asset_no,label in clusters: - - if cluster_name not in existing_cluster_types: - netbox.virtualization.create_cluster_type(cluster_name, slugify(cluster_name)) - existing_cluster_types.add(cluster_name) - - if cluster_name not in existing_cluster_names: - netbox.virtualization.create_cluster(cluster_name, cluster_name) - existing_cluster_names.add(cluster_name) - - # Create all the VMs that exist in this cluster and assign them to this cluster - cursor.execute("SELECT child_entity_type,child_entity_id FROM EntityLink WHERE parent_entity_id={};".format(Id)) - child_virtual_machines = cursor.fetchall() - for child_entity_type,child_entity_id in child_virtual_machines: - - cursor.execute("SELECT name,label,comment,objtype_id,asset_no FROM 
Object WHERE id={};".format(child_entity_id)) - virtual_machine_name, virtual_machine_label, virtual_machine_comment, virtual_machine_objtype_id,virtual_machine_asset_no = cursor.fetchall()[0] - - # Confirm that the child is VM and not a server or other to not create duplicates - if virtual_machine_objtype_id != 1504 or not virtual_machine_name: - continue - - virtual_machine_name = virtual_machine_name.strip() - - if virtual_machine_name not in existing_virtual_machines: - virtual_machine_tags = getTags(cursor, "object", child_entity_id) - - netbox.virtualization.create_virtual_machine(virtual_machine_name, cluster_name, tags=virtual_machine_tags, comments=virtual_machine_comment[:200] if virtual_machine_comment else "",custom_fields= {"VM_Label": virtual_machine_label[:200] if virtual_machine_label else "", "VM_Asset_No": virtual_machine_asset_no if virtual_machine_asset_no else ""}) - existing_virtual_machines.add(virtual_machine_name) - # print("Created", virtual_machine_name) - - else: - # print(virtual_machine_name, "exists") - pass - - vm_counter += 1 - # print(virtual_machine_name, vm_counter) - - if CREATE_UNMOUNTED_VMS: - print("Creating unmounted VMs") - - # Create the VMs that are not in clusters - unmounted_cluster_name = "Unmounted Cluster" - if unmounted_cluster_name not in existing_cluster_types: - netbox.virtualization.create_cluster_type(unmounted_cluster_name, slugify(unmounted_cluster_name)) - - if unmounted_cluster_name not in existing_cluster_names: - netbox.virtualization.create_cluster(unmounted_cluster_name, unmounted_cluster_name) - - cursor.execute("SELECT name,label,comment,objtype_id,asset_no FROM Object WHERE objtype_id=1504;") - vms = cursor.fetchall() - for virtual_machine_name, virtual_machine_label, virtual_machine_comment, virtual_machine_objtype_id, virtual_machine_asset_no in vms: - - if virtual_machine_objtype_id != 1504 or not virtual_machine_name: - continue - - virtual_machine_name = virtual_machine_name.strip() - - if virtual_machine_name not in existing_virtual_machines: - virtual_machine_tags = getTags(cursor, "object", child_entity_id) - - netbox.virtualization.create_virtual_machine(virtual_machine_name, unmounted_cluster_name, tags=virtual_machine_tags, comments=virtual_machine_comment[:200] if virtual_machine_comment else "", custom_fields={"VM_Label": virtual_machine_label[:200] if virtual_machine_label else "", "VM_Asset_No": virtual_machine_asset_no if virtual_machine_asset_no else ""}) - existing_virtual_machines.add(virtual_machine_name) - - else: - # print(virtual_machine_name, "exists") - pass - - vm_counter += 1 - - - # Map interface integer type to the string type - cursor.execute("SELECT id,oif_name FROM PortOuterInterface;") - PortOuterInterfaces = dict() - for k,v in cursor.fetchall(): - PortOuterInterfaces[k] = v - - # Fill racks with physical devices - if CREATE_RACKED_DEVICES: - - print("Creating sites, racks, and filling rack space") - - - global_devices = netbox.dcim.get_devices() - print("Got {} devices".format(len(global_devices))) - - global_names = set(device['name'] for device in global_devices) - print("Got {} names".format(len(global_names))) - - global_manufacturers = set(manufacturer['name'] for manufacturer in netbox.dcim.get_manufacturers()) - print("Got {} manufacturers".format(len(global_manufacturers))) - - global_device_roles = set(device_role['name'] for device_role in netbox.dcim.get_device_roles()) - print("Got {} device roles".format(len(global_device_roles))) - - global_device_types = 
set(device_type['model'] for device_type in netbox.dcim.get_device_types()) - print("Got {} device types".format(len(global_device_types))) - - cursor.execute("SELECT id,name,label,asset_no,comment FROM Object WHERE objtype_id=1562") - sites = cursor.fetchall() - for site_id, site_name, site_label, site_asset_no, site_comment in sites: - - if not netbox.dcim.get_sites(name=site_name) or True: - - if len(site_name) > SITE_NAME_LENGTH_THRESHOLD: - print("This is probably a location (address)", site_name) - try: - # Create location - pass - except: - # Location exists - pass - - continue - - print("Creating site (datacenter)", site_name,"\n") - - try: - netbox.dcim.create_site(site_name, slugify(site_name)) - except: - print("Failed to create site", site_name) - pass - - for row_id, row_name, row_label, row_asset_no, row_comment in getRowsAtSite(cursor, site_id): - for rack_id, rack_name, rack_label, rack_asset_no, rack_comment in getRacksAtRow(cursor,row_id): - # Get rack height from table AttributeValue where attr_id=27, object_id is rack, uint_value is the height - rack_tags = getTags(cursor, "rack", rack_id) - rack_height = getRackHeight(cursor, rack_id) - - atoms = getAtomsAtRack(cursor, rack_id) - - # Make sure rack name does not already contain row - if not rack_name.startswith(row_name.rstrip(".") + "."): - rack_name = site_name + "." + row_name + "." + rack_name - else: - rack_name = site_name + "." + rack_name - - # Racks do NOT require a unique name, but they are given one by this script. - # Otherwise get_racks() based on name only would be wrong to use - rack = netbox.dcim.create_rack(name=rack_name,comment=rack_comment[:200] if rack_comment else "",site_name=site_name,u_height=rack_height,tags=rack_tags) - - createObjectsInRackFromAtoms(cursor, atoms, rack_name, rack['id']) - - pickleDump("global_physical_object_ids", global_physical_object_ids) - - # Get all the object_id from table Rackspace for the object_id in the table Port for the name and id - # Use type as id to get oif_name in table PortOuterInterface - # Use id as porta or portb in Link table to get the parent/linked object - - else: - global_physical_object_ids = pickleLoad("global_physical_object_ids", set()) - - # Create non racked device, some of which required the physical devices above as parents - print("\n\nAbout to create non racked devices") - - # Load from file and later save new additions to file. 
This avoids querying netbox for a racktables device_id which it does not store - global_non_physical_object_ids = pickleLoad("global_non_physical_object_ids", set()) - - if CREATE_NON_RACKED_DEVICES: - - # Map netbox parent device name to the names of its device bays - existing_device_bays = dict() - - for device_bay in netbox.dcim.get_device_bays(): - parent_name = device_bay['device']['name'] - - if parent_name not in existing_device_bays: - existing_device_bays[parent_name] = set() - - existing_device_bays[parent_name].add(device_bay['name']) - - for objtype_id in objtype_id_names: - print("\n\nobjtype_id {} {}\n\n".format(objtype_id, objtype_id_names[objtype_id])) - - # Get all objects of that objtype_id and try to create them if they do not exist - cursor.execute("SELECT id,name,label,asset_no,comment FROM Object WHERE objtype_id={}".format(objtype_id)) - objs = cursor.fetchall() - children_without_parents = create_parent_child_devices(cursor, objs, objtype_id) - - # Try to recreate the children devices that didn't have parents before - if children_without_parents: - create_parent_child_devices(cursor, children_without_parents, objtype_id) - - pickleDump("global_non_physical_object_ids", global_non_physical_object_ids) - - # Names for device to see if that interface already exists for the device with fast lookup - interface_local_names_for_device = dict() - - # netbox Id of the interface mapped to - interface_netbox_ids_for_device = dict() - - # This will probably take a while for about 50,000 physical interfaces - if CREATE_INTERFACES: - print("Getting interfaces") - start_time = time.time() - - for value in get_interfaces(): - racktables_device_id = value['device']['id'] - - if racktables_device_id not in interface_local_names_for_device: - interface_local_names_for_device[racktables_device_id] = set() - - interface_local_names_for_device[racktables_device_id].add(value['name']) - - if racktables_device_id not in interface_netbox_ids_for_device: - interface_netbox_ids_for_device[racktables_device_id] = dict() - - interface_netbox_ids_for_device[racktables_device_id][value['name']] = value['id'] - - print("Got {} interfaces in {} seconds".format(sum(len(interface_local_names_for_device[device_id]) for device_id in interface_local_names_for_device), time.time() - start_time)) - - # Store the SQL id and the netbox interface id to later create the connection between the two from the Link table - connection_ids = dict() - - interface_counter = 0 - print("Creating interfaces for devices") - for device_list in (global_physical_object_ids, global_non_physical_object_ids): - for device_name, racktables_object_id, netbox_id, objtype_id in device_list: - - # print(device_name, racktables_object_id, netbox_id) - - cursor.execute("SELECT id,name,iif_id,type,label FROM Port WHERE object_id={}".format(racktables_object_id)) - ports = cursor.fetchall() - - if netbox_id not in interface_local_names_for_device: - interface_local_names_for_device[netbox_id] = set() - - if netbox_id not in interface_netbox_ids_for_device: - interface_netbox_ids_for_device[netbox_id] = dict() - - for Id, interface_name, iif_if, Type, label in ports: - - PortOuterInterface = PortOuterInterfaces[Type] - - if interface_name: - interface_name = change_interface_name(interface_name, objtype_id) - else: - continue - - # Create regular interface, which all things need to be to create connections accross devices - if interface_name not in interface_local_names_for_device[netbox_id]: - - if not interface_name: - print("No 
interface_name", Id,"\n\n\n") - continue - if not netbox_id: - print("No netbox_id", netbox_id,"\n\n\n") - continue - if not PortOuterInterface: - print("No PortOuterInterface", PortOuterInterface,"\n\n\n") - continue - - added_interface = netbox.dcim.create_interface(name=interface_name, interface_type="other", device_id=netbox_id, custom_fields= {"Device_Interface_Type": PortOuterInterface}, label=label[:200] if label else "") - - interface_local_names_for_device[netbox_id].add(interface_name) - interface_netbox_ids_for_device[netbox_id][interface_name] = added_interface['id'] - - # Link racktables interface id to netbox interface id - connection_ids[Id] = added_interface['id'] - - else: - print(Id, interface_name, "exists") - - # Link racktables interface id to netbox interface id based on the local name. - connection_ids[Id] = interface_netbox_ids_for_device[netbox_id][interface_name] - - - interface_counter += 1 - if interface_counter % 500 == 0: - print("Created {} interfaces".format(interface_counter)) - - pickleDump('connection_ids', connection_ids) - - - # The "interfaces" created from the IP addresses below don't need connections made because they are "IP Addresses" in RT, whereas connections are made for "ports and links" which was done before - - # Create interface connections - if CREATE_INTERFACE_CONNECTIONS: - print("Creating interface connections") - connection_ids = pickleLoad('connection_ids', dict()) - - # Create the interface connections based on racktable's Link table's storage of - cursor.execute("SELECT porta,portb,cable FROM Link") - connections = cursor.fetchall() - - for interface_a, interface_b, cable in connections: - # These error are fixed by including more objtype_ids in the global list for non racked devices - if interface_a not in connection_ids: - print("ERROR", interface_a, "a not in") - continue - - if interface_b not in connection_ids: - print("ERROR", interface_b, "b not in") - continue - - netbox_id_a = connection_ids[interface_a] - netbox_id_b = connection_ids[interface_b] - - try: - netbox.dcim.create_interface_connection(netbox_id_a, netbox_id_b, 'dcim.interface', 'dcim.interface') - except: - error_log("Interface connection error {} {}".format(netbox_id_a, netbox_id_b)) - - - device_names = dict() - cursor.execute("SELECT id,name FROM Object") - for Id,device_name in cursor.fetchall(): - if not device_name: - continue - device_names[Id] = device_name.strip() - - existing_prefixes = set(prefix['prefix'] for prefix in netbox.ipam.get_ip_prefixes()) - existing_ips = set(prefix['address'] for prefix in netbox.ipam.get_ip_addresses()) - - - versions = [] - if CREATE_IPV4: - versions.append("4") - if CREATE_IPV6: - versions.append("6") - - for IP in versions: - print("\n\nCreating IPv{}s Networks\n\n".format(IP)) - cursor.execute("SELECT id,ip,mask,name,comment FROM IPv{}Network".format(IP)) - ipv46Networks = cursor.fetchall() - - for Id,ip,mask,prefix_name,comment in ipv46Networks if CREATE_IP_NETWORKS else []: - - # Skip the single IP addresses - if (IP == "4" and mask == 32) or (IP == "6" and mask == 128): - continue - - prefix = str(ipaddress.ip_address(ip)) + "/" + str(mask) - - if prefix in existing_prefixes: - continue - - if Id in network_id_group_name_id: - vlan_name = network_id_group_name_id[Id][1] - vlan_id = network_id_group_name_id[Id][2] - else: - vlan_name = None - - tags = getTags(cursor, "ipv{}net".format(IP), Id) - - # print("Creaing {} {} in vlan {}".format(prefix, prefix_name, vlan_name)) - - # Description takes at most 200 
characters - netbox.ipam.create_ip_prefix(vlan={"id":vlan_id} if vlan_name else None,prefix=prefix,description=comment[:200] if comment else "",custom_fields={'Prefix_Name': prefix_name},tags = [{'name': IPV4_TAG if IP == "4" else IPV6_TAG}] + tags) - - - print("Creating IPv{} Addresses".format(IP)) - cursor.execute("SELECT ip,name,comment FROM IPv{}Address".format(IP)) - ip_addresses = cursor.fetchall() - ip_names_comments = dict([(ip, (name, comment)) for ip,name,comment in ip_addresses]) - # print(ip_names_comments) - - # These IPs are the ones allocated to devices, not ones that are only reserved - cursor.execute("SELECT ALO.object_id,ALO.ip,ALO.name,ALO.type,OBJ.objtype_id,OBJ.name FROM IPv{}Allocation ALO, Object OBJ WHERE OBJ.id=ALO.object_id".format(IP)) - ip_allocations = cursor.fetchall() - - for object_id,ip,interface_name,ip_type,objtype_id,device_name in ip_allocations if CREATE_IP_ALLOCATED else []: - - if ip in ip_names_comments: - ip_name, comment = ip_names_comments[ip] - else: - ip_name, comment = "", "" - - if device_name: - device_name = device_name.strip() - else: - continue - - - string_ip = str(ipaddress.ip_address(ip)) + "{}".format("/32" if IP == "4" else "") - if string_ip in existing_ips and ip_type != "shared": - continue - else: - existing_ips.add(string_ip) - - use_vrrp_role = "vrrp" if ip_type == "shared" else None - - if interface_name: - interface_name = change_interface_name(interface_name.strip(), objtype_id) - else: - interface_name = "no_RT_name"+str(random.randint(0,99999)) - - - # Check through the interfaces that exist for this device in netbox, created previously - # If one exists with the same name as the IP has in racktables, add the ip to that - # Else create a dummy virtual interface with the new name and add the ip to that interface - # because Racktables allows you to give IP interfaces any name not necessarily one of the existing interfaces explicitly set as interfaces - if objtype_id == 1504: - device_or_vm = "vm" - interface_list = netbox.virtualization.get_interfaces(virtual_machine=device_name) - else: - device_or_vm = "device" - interface_list = netbox.dcim.get_interfaces(device=device_name) - - # print(device_name) - - device_contained_same_interface = False - for name,interface_id in [(interface['name'], interface['id']) for interface in interface_list]: - - if interface_name == name: - - netbox.ipam.create_ip_address(address=string_ip,role=use_vrrp_role,assigned_object={'device'if device_or_vm == "device" else "virtual_machine":device_name},interface_type="virtual",assigned_object_type="dcim.interface" if device_or_vm == "device" else "virtualization.vminterface",assigned_object_id=interface_id,description=comment[:200] if comment else "",custom_fields={'IP_Name': ip_name,'Interface_Name':interface_name,'IP_Type':ip_type},tags=[{'name': IPV4_TAG if IP == "4" else IPV6_TAG}]) - - device_contained_same_interface = True - break - - if not device_contained_same_interface: - - if device_or_vm == "device": - device_id = netbox.dcim.get_devices(name=device_name)[0]['id'] - else: - device_id = netbox.virtualization.get_virtual_machines(name=device_name)[0]['id'] - - # print("Creating dummy {} virtual interface {} for {} and {}".format(device_or_vm, interface_name, device_name, string_ip)) - # Because there is no way to access interfaces per device without querying the whole list of interfaces, do a try and except for iterating over the name - # An error would occur when there are duplicate IP interface names in RT - try: - if device_or_vm == 
"device": - added_interface = netbox.dcim.create_interface(name=interface_name,interface_type="virtual",device_id=device_id, custom_fields={"Device_Interface_Type": "Virtual"}) - else: - added_interface = netbox.virtualization.create_interface(name=interface_name,interface_type="virtual",virtual_machine=device_name,custom_fields={"VM_Interface_Type": "Virtual"}) - except: - # Probably had a name colision with interface_name - print("ERROR \n\n") - pass - - else: - # Make sure ip is not already on this interface? - netbox.ipam.create_ip_address(address=string_ip,role=use_vrrp_role,assigned_object_id=added_interface['id'],assigned_object={"device" if device_or_vm == "device" else "virtual_machine" :{'id': device_id}},interface_type="virtual",assigned_object_type="dcim.interface" if device_or_vm == "device" else "virtualization.vminterface",description=comment[:200] if comment else "",custom_fields={'IP_Name': ip_name, 'Interface_Name': interface_name, 'IP_Type': ip_type},tags = [{'name': IPV4_TAG if IP == "4" else IPV6_TAG}]) - - # Add ip without any associated device - for ip in ip_names_comments if CREATE_IP_NOT_ALLOCATED else []: - string_ip = str(ipaddress.ip_address(ip)) + "{}".format("/32" if IP == "4" else "") - if string_ip in existing_ips: - continue - ip_name, comment = ip_names_comments[ip] - netbox.ipam.create_ip_address(address=string_ip,description=comment[:200] if comment else "",custom_fields={'IP_Name': ip_name},tags=[{'name': IPV4_TAG if IP == "4" else IPV6_TAG}]) - - - - - - diff --git a/migration/__init__.py b/migration/__init__.py new file mode 100644 index 0000000..aac447f --- /dev/null +++ b/migration/__init__.py @@ -0,0 +1,7 @@ +""" +Racktables to NetBox Migration Tool + +A modular Python package for migrating data from Racktables to NetBox. 
+""" + +__version__ = '1.0.0' diff --git a/migration/config.py b/migration/config.py new file mode 100644 index 0000000..53753a3 --- /dev/null +++ b/migration/config.py @@ -0,0 +1,158 @@ +""" +Global configuration settings for the Racktables to NetBox migration tool +""" +from pymysql.cursors import DictCursor +import os +import ipaddress + +# Migration flags - control which components are processed +CREATE_VLAN_GROUPS = True +CREATE_VLANS = True +CREATE_MOUNTED_VMS = True +CREATE_UNMOUNTED_VMS = True +CREATE_RACKED_DEVICES = True +CREATE_NON_RACKED_DEVICES = True +CREATE_INTERFACES = True +CREATE_INTERFACE_CONNECTIONS = True +CREATE_IPV4 = True +CREATE_IPV6 = True +CREATE_IP_NETWORKS = True +CREATE_IP_ALLOCATED = True +CREATE_IP_NOT_ALLOCATED = True + +# Extended migration flags +CREATE_PATCH_CABLES = True +CREATE_FILES = True +CREATE_VIRTUAL_SERVICES = True +CREATE_NAT_MAPPINGS = True +CREATE_LOAD_BALANCING = True +CREATE_MONITORING_DATA = True +CREATE_AVAILABLE_SUBNETS = False +CREATE_IP_RANGES = False + +# Site filtering - set to None to process all sites, or specify a site name to restrict migration +TARGET_SITE = None # This can be set via command line args +TARGET_SITE_ID = None # Store the numeric ID of the target site + +# Tenant filtering - set to None to process all tenants, or specify a tenant name to restrict migration +TARGET_TENANT = None # This can be set via command line args +TARGET_TENANT_ID = None # Store the UUID of the target tenant + +# Whether to store cached data with pickle +STORE_DATA = False + +# The length to exceed for a site to be considered a location (like an address) not a site +SITE_NAME_LENGTH_THRESHOLD = 10 + +# First character for separating identical devices in different spots in same rack +FIRST_ASCII_CHARACTER = " " + +# Common tags +IPV4_TAG = "IPv4" +IPV6_TAG = "IPv6" + +# NetBox API connection settings - can be overridden with environment variables +NB_HOST = os.environ.get('NETBOX_HOST', 'localhost') +NB_PORT = int(os.environ.get('NETBOX_PORT', '8000')) +NB_TOKEN = os.environ.get('NETBOX_TOKEN', '0123456789abcdef0123456789abcdef01234567') +NB_USE_SSL = os.environ.get('NETBOX_USE_SSL', 'False').lower() in ('true', '1', 'yes') + +# Database connection parameters - can be overridden with environment variables +DB_CONFIG = { + 'host': os.environ.get('RACKTABLES_DB_HOST', '10.248.48.4'), + 'port': int(os.environ.get('RACKTABLES_DB_PORT', '3306')), + 'user': os.environ.get('RACKTABLES_DB_USER', 'root'), + 'password': os.environ.get('RACKTABLES_DB_PASSWORD', 'secure-password'), + 'db': os.environ.get('RACKTABLES_DB_NAME', 'test1'), + 'charset': 'utf8mb4', + 'cursorclass': DictCursor +} + +# Maps racktables object type IDs to names +OBJTYPE_ID_NAMES = { + 1: "BlackBox", + 2: "PDU", + 3: "Shelf", + 4: "Server", + 5: "DiskArray", + 7: "Router", + 8: "Network Switch", + 9: "Patch Panel", + 10: "CableOrganizer", + 11: "spacer", + 12: "UPS", + 13: "Modem", + 15: "console", + 447: "multiplexer", + 798: "Network Security", + 1502: "Server Chassis", + 1398: "Power supply", + 1503: "Network chassis", + 1644: "serial console server", + 1787: "Management interface", + 50003: "Circuit", + 50013: "SAN", + 50044: "SBC", + 50064: "GSX", + 50065: "EMS", + 50066: "PSX", + 50067: "SGX", + 50083: "SBC SWE", + # Don't create these with the unracked devices + # 1504: "VM", + # 1505: "VM Cluster", + # 1560: "Rack", + # 1561: "Row", + # 1562: "Location", +} + +# Manufacturer strings from Racktables +RACKTABLES_MANUFACTURERS = { + 'Generic', 'Dell', 'MicroSoft', 'F5', 
'ExtremeXOS', 'Netapp', 'Open Solaris', 'EMC', + 'SlackWare', 'RH', 'FreeBSD', 'Edge-Core', 'SMC', 'Force10', 'Cyclades', 'IBM', + 'Linksys', 'IronWare', 'Red', 'Promise', 'Extreme', 'QLogic', 'Marvell', 'SonicWall', + 'Foundry', 'Juniper', 'APC', 'Raritan', 'Xen', 'NEC', 'Palo', 'OpenSUSE', 'Sun', + 'noname/unknown', 'NetApp', 'VMware', 'Moxa', 'Tainet', 'SGI', 'Mellanox', 'Vyatta', + 'Raisecom', 'Gentoo', 'Brocade', 'Enterasys', 'Dell/EMC', 'VMWare', 'Infortrend', + 'OpenGear', 'Arista', 'Lantronix', 'Huawei', 'Avocent', 'SUSE', 'ALT_Linux', 'OpenBSD', + 'Nortel', 'Univention', 'JunOS', 'MikroTik', 'NetBSD', 'Cronyx', 'Aten', 'Intel', + 'PROXMOX', 'Ubuntu', 'Motorola', 'SciLin', 'Fujitsu', 'Fiberstore', '3Com', 'D-Link', + 'Allied', 'Fortigate', 'Debian', 'HP', 'NETGEAR', 'Pica8', 'TPLink', 'Fortinet', 'RAD', + 'NS-OS', 'Cisco', 'Alcatel-Lucent', 'CentOS', 'Hitachi' +} + +# Pairs of parent objtype_id, then child objtype_id +PARENT_CHILD_OBJTYPE_ID_PAIRS = ( + (1502, 4), # Server inside a Server Chassis + (9, 9), # Patch Panel inside a Patch Panel +) + +# Interface name mappings for cleanup +INTERFACE_NAME_MAPPINGS = { + "Eth": "Ethernet", + "eth": "Ethernet", + "ethernet": "Ethernet", + "Po": "Port-Channel", + "Port-channel": "Port-Channel", + "BE": "Bundle-Ether", + "Lo": "Loopback", + "Loop": "Loopback", + "Vl": "VLAN", + "Vlan": "VLAN", + "Mg": "MgmtEth", + "Se": "Serial", + "Gi": "GigabitEthernet", + "Te": "TenGigE", + "Tw": "TwentyFiveGigE", + "Fo": "FortyGigE", + "Hu": "HundredGigE", +} + +# Global data collections +PARENT_OBJTYPE_IDS = [pair[0] for pair in PARENT_CHILD_OBJTYPE_ID_PAIRS] + +# Load optional local config if exists +local_config = os.path.join(os.path.dirname(__file__), 'local_config.py') +if os.path.exists(local_config): + with open(local_config) as f: + exec(f.read()) diff --git a/migration/custom_netbox.py b/migration/custom_netbox.py new file mode 100644 index 0000000..67525db --- /dev/null +++ b/migration/custom_netbox.py @@ -0,0 +1,442 @@ +""" +This module extends the pynetbox library with custom methods needed for the migration. +It wraps the pynetbox client to provide a compatible interface with the original code. +""" + +import pynetbox + + +class NetBoxWrapper: + """ + Wrapper class that provides compatibility with the original python-netbox library + by adapting the pynetbox interface to match the expected methods and structure. 
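+
+    Example (illustrative sketch; the host and token below are placeholders):
+
+        nb = NetBoxWrapper('netbox.example.com', port=8000, use_ssl=False,
+                           auth_token='0123456789abcdef0123456789abcdef01234567')
+        racks = nb.dcim.get_racks(site='my-site')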
+ """ + + def __init__(self, host, port=None, use_ssl=True, auth_token=None): + """Initialize the NetBox API client with the given parameters""" + url = f"{'https' if use_ssl else 'http'}://{host}" + if port: + url = f"{url}:{port}" + + self.nb = pynetbox.api(url, token=auth_token) + + # Create API endpoints that match the original library structure + self.dcim = DcimWrapper(self.nb) + self.ipam = IpamWrapper(self.nb) + self.virtualization = VirtualizationWrapper(self.nb) + self.extras = ExtrasWrapper(self.nb) + self.tenancy = TenancyWrapper(self.nb) # Add tenancy wrapper + + +class DcimWrapper: + """Wrapper for DCIM endpoints""" + + def __init__(self, nb): + self.nb = nb + # Add this line to create a .racks attribute + self.racks = self.nb.dcim.racks + + def get_racks(self, **kwargs): + """Get racks with optional filters""" + return self.nb.dcim.racks.filter(**kwargs) + + def get_sites(self, **kwargs): + """Get sites with optional filters""" + return self.nb.dcim.sites.filter(**kwargs) + + def create_site(self, name, slug, **kwargs): + """Create a new site""" + return self.nb.dcim.sites.create(name=name, slug=slug, **kwargs) + + def get_devices(self, **kwargs): + """Get devices with optional filters""" + return self.nb.dcim.devices.filter(**kwargs) + + def create_device(self, name, device_type, device_role, site_name, **kwargs): + """Create a new device""" + # Handle nested attributes + if 'manufacturer' in kwargs and isinstance(kwargs['manufacturer'], dict): + kwargs['manufacturer'] = kwargs['manufacturer']['name'] + if 'rack' in kwargs and isinstance(kwargs['rack'], dict): + kwargs['rack'] = kwargs['rack']['name'] + if 'cluster' in kwargs and isinstance(kwargs['cluster'], dict): + kwargs['cluster'] = kwargs['cluster']['name'] + # Get site ID from name if needed + site = self.nb.dcim.sites.get(name=site_name) + # Get device role and type if they're strings + if isinstance(device_role, str): + device_role = self.nb.dcim.device_roles.get(name=device_role) + if isinstance(device_type, str): + device_type = self.nb.dcim.device_types.get(model=device_type) + + # Set up parameters for the call + params = { + 'name': name, + 'device_type': device_type.id if hasattr(device_type, 'id') else device_type, + 'role': device_role.id if hasattr(device_role, 'id') else device_role, + 'site': site.id if site else site_name + } + + # Add any additional keyword arguments + params.update(kwargs) + + return self.nb.dcim.devices.create(**params) + + def create_device_role(self, name, color, slug, **kwargs): + """Create a new device role""" + return self.nb.dcim.device_roles.create(name=name, color=color, slug=slug, **kwargs) + + def get_device_roles(self, **kwargs): + """Get device roles with optional filters""" + return self.nb.dcim.device_roles.filter(**kwargs) + + def create_manufacturer(self, name, slug, **kwargs): + """Create a new manufacturer""" + return self.nb.dcim.manufacturers.create(name=name, slug=slug, **kwargs) + + def get_manufacturers(self, **kwargs): + """Get manufacturers with optional filters""" + return self.nb.dcim.manufacturers.filter(**kwargs) + + def create_device_type(self, model, manufacturer, slug, **kwargs): + """Create a new device type""" + # Handle manufacturer if it's a dict + if isinstance(manufacturer, dict): + manufacturer = self.nb.dcim.manufacturers.get(name=manufacturer['name']) + + return self.nb.dcim.device_types.create( + model=model, + manufacturer=manufacturer.id if hasattr(manufacturer, 'id') else manufacturer, + slug=slug, + **kwargs + ) + + def 
get_device_types(self, **kwargs): + """Get device types with optional filters""" + return self.nb.dcim.device_types.filter(**kwargs) + + def create_interface(self, name, device_id, interface_type, **kwargs): + """Create a new interface""" + return self.nb.dcim.interfaces.create( + name=name, + device=device_id, + type=interface_type, + **kwargs + ) + + def get_interfaces(self, **kwargs): + """Get interfaces with optional filters""" + return self.nb.dcim.interfaces.filter(**kwargs) + + def get_interfaces_custom(self, limit, offset, **kwargs): + """Get interfaces with pagination""" + return self.nb.dcim.interfaces.filter(limit=limit, offset=offset, **kwargs) + + def create_interface_connection(self, termination_a_id, termination_b_id, termination_a_type, termination_b_type, **kwargs): + """Create a new cable connection between interfaces""" + data = { + "termination_a_type": termination_a_type, + "termination_a_id": termination_a_id, + "termination_b_type": termination_b_type, + "termination_b_id": termination_b_id + } + return self.nb.dcim.cables.create(**data, **kwargs) + + def create_device_bay(self, name, device_id, installed_device_id=None, **kwargs): + """Create a new device bay""" + data = { + "name": name, + "device": device_id + } + if installed_device_id: + data["installed_device"] = installed_device_id + + return self.nb.dcim.device_bays.create(**data, **kwargs) + + def get_device_bays(self, **kwargs): + """Get device bays with optional filters""" + return self.nb.dcim.device_bays.filter(**kwargs) + + def create_rack(self, name, site_name, **kwargs): + """Create a new rack""" + # Get site ID from name + site = self.nb.dcim.sites.get(name=site_name) + + return self.nb.dcim.racks.create( + name=name, + site=site.id if site else site_name, + **kwargs + ) + + def create_reservation(self, rack_num, units, description, user, **kwargs): + """Create a rack reservation""" + return self.nb.dcim.rack_reservations.create( + rack=rack_num, + units=units, + description=description, + user=user, + **kwargs + ) + + def create_cable(self, termination_a_id, termination_b_id, termination_a_type, termination_b_type, **kwargs): + """Create a new cable""" + data = { + "termination_a_type": termination_a_type, + "termination_a_id": termination_a_id, + "termination_b_type": termination_b_type, + "termination_b_id": termination_b_id + } + return self.nb.dcim.cables.create(**data, **kwargs) + + def get_cables(self, **kwargs): + """Get cables with optional filters""" + return self.nb.dcim.cables.filter(**kwargs) + + +class IpamWrapper: + """Wrapper for IPAM endpoints""" + + def __init__(self, nb): + self.nb = nb + + def create_vlan_group(self, name, slug, **kwargs): + """Create a new VLAN group""" + return self.nb.ipam.vlan_groups.create(name=name, slug=slug, **kwargs) + + def get_vlan_groups(self, **kwargs): + """Get VLAN groups with optional filters""" + return self.nb.ipam.vlan_groups.filter(**kwargs) + + def create_ip_range(self, start_address, end_address, **kwargs): + """Create a new IP range""" + return self.nb.ipam.ip_ranges.create( + start_address=start_address, + end_address=end_address, + **kwargs + ) + + def get_ip_ranges(self, **kwargs): + """Get IP ranges with optional filters""" + return self.nb.ipam.ip_ranges.filter(**kwargs) + + def create_vlan(self, vid, vlan_name, **kwargs): + """Create a new VLAN""" + # Handle group if it's a dict + if 'group' in kwargs and isinstance(kwargs['group'], dict): + group = self.nb.ipam.vlan_groups.get(name=kwargs['group']['name']) + kwargs['group'] = group.id 
if group else None + + return self.nb.ipam.vlans.create(vid=vid, name=vlan_name, **kwargs) + + def create_ip_prefix(self, prefix, **kwargs): + """Create a new IP prefix""" + # Handle VLAN if it's a dict + if 'vlan' in kwargs and isinstance(kwargs['vlan'], dict) and kwargs['vlan'] is not None: + vlan = self.nb.ipam.vlans.get(id=kwargs['vlan']['id']) + kwargs['vlan'] = vlan.id if vlan else None + + return self.nb.ipam.prefixes.create(prefix=prefix, **kwargs) + + def get_ip_prefixes(self, **kwargs): + """Get IP prefixes with optional filters""" + if 'tag' in kwargs: + return self.nb.ipam.prefixes.filter(tag=kwargs['tag']) + return self.nb.ipam.prefixes.filter(**kwargs) + + def create_ip_address(self, address, **kwargs): + """Create a new IP address""" + # Handle assigned object + if 'assigned_object_id' in kwargs and 'assigned_object_type' in kwargs: + kwargs['assigned_object'] = { + 'id': kwargs.pop('assigned_object_id'), + 'object_type': kwargs.pop('assigned_object_type') + } + + # Handle device or VM in assigned object + if 'assigned_object' in kwargs and isinstance(kwargs['assigned_object'], dict): + if 'device' in kwargs['assigned_object'] and isinstance(kwargs['assigned_object']['device'], dict): + device_name = kwargs['assigned_object']['device'].get('name', kwargs['assigned_object']['device'].get('id')) + if device_name: + device = self.nb.dcim.devices.get(name=device_name) + if device: + kwargs['assigned_object']['device'] = device.id + + if 'virtual_machine' in kwargs['assigned_object'] and isinstance(kwargs['assigned_object']['virtual_machine'], dict): + vm_name = kwargs['assigned_object']['virtual_machine'].get('name', kwargs['assigned_object']['virtual_machine'].get('id')) + if vm_name: + vm = self.nb.virtualization.virtual_machines.get(name=vm_name) + if vm: + kwargs['assigned_object']['virtual_machine'] = vm.id + + return self.nb.ipam.ip_addresses.create(address=address, **kwargs) + + def get_ip_addresses(self, **kwargs): + """Get IP addresses with optional filters""" + if 'tag' in kwargs: + return self.nb.ipam.ip_addresses.filter(tag=kwargs['tag']) + return self.nb.ipam.ip_addresses.filter(**kwargs) + + def create_service(self, device, name, ports, protocol, **kwargs): + """Create a new service""" + # Handle device if it's a string + if isinstance(device, str): + device = self.nb.dcim.devices.get(name=device) + + return self.nb.ipam.services.create( + device=device.id if hasattr(device, 'id') else device, + name=name, + ports=ports, + protocol=protocol, + **kwargs + ) + + def get_services(self, **kwargs): + """Get services with optional filters""" + return self.nb.ipam.services.filter(**kwargs) + + +class VirtualizationWrapper: + """Wrapper for Virtualization endpoints""" + + def __init__(self, nb): + self.nb = nb + + def create_cluster_type(self, name, slug, **kwargs): + """Create a new cluster type""" + return self.nb.virtualization.cluster_types.create(name=name, slug=slug, **kwargs) + + def get_cluster_types(self, **kwargs): + """Get cluster types with optional filters""" + return self.nb.virtualization.cluster_types.filter(**kwargs) + + def create_cluster(self, name, cluster_type, **kwargs): + """Create a new cluster""" + # Get cluster type if it's a string + if isinstance(cluster_type, str): + cluster_type = self.nb.virtualization.cluster_types.get(name=cluster_type) + + return self.nb.virtualization.clusters.create( + name=name, + type=cluster_type.id if hasattr(cluster_type, 'id') else cluster_type, + **kwargs + ) + + def get_clusters(self, **kwargs): + """Get clusters 
with optional filters""" + return self.nb.virtualization.clusters.filter(**kwargs) + + def create_virtual_machine(self, name, cluster_name, **kwargs): + """Create a new virtual machine""" + # Get cluster if it's a string + cluster = self.nb.virtualization.clusters.get(name=cluster_name) + + return self.nb.virtualization.virtual_machines.create( + name=name, + cluster=cluster.id if cluster else cluster_name, + **kwargs + ) + + def get_virtual_machines(self, **kwargs): + """Get virtual machines with optional filters""" + return self.nb.virtualization.virtual_machines.filter(**kwargs) + + def create_interface(self, name, virtual_machine, interface_type, **kwargs): + """Create a new VM interface""" + # Get VM if it's a string + if isinstance(virtual_machine, str): + virtual_machine = self.nb.virtualization.virtual_machines.get(name=virtual_machine) + + return self.nb.virtualization.interfaces.create( + name=name, + virtual_machine=virtual_machine.id if hasattr(virtual_machine, 'id') else virtual_machine, + type=interface_type, + **kwargs + ) + + def get_interfaces(self, **kwargs): + """Get VM interfaces with optional filters""" + return self.nb.virtualization.interfaces.filter(**kwargs) + + def create_service(self, virtual_machine, name, ports, protocol, **kwargs): + """Create a new service for a VM""" + # Handle VM if it's a string + if isinstance(virtual_machine, str): + virtual_machine = self.nb.virtualization.virtual_machines.get(name=virtual_machine) + + return self.nb.ipam.services.create( + virtual_machine=virtual_machine.id if hasattr(virtual_machine, 'id') else virtual_machine, + name=name, + ports=ports, + protocol=protocol, + **kwargs + ) + + +class TenancyWrapper: + """Wrapper for Tenancy endpoints""" + + def __init__(self, nb): + self.nb = nb + + def get_tenants(self, **kwargs): + """Get tenants with optional filters""" + return self.nb.tenancy.tenants.filter(**kwargs) + + def create_tenant(self, name, slug, **kwargs): + """Create a new tenant""" + return self.nb.tenancy.tenants.create(name=name, slug=slug, **kwargs) + + def get_tenant_groups(self, **kwargs): + """Get tenant groups with optional filters""" + return self.nb.tenancy.tenant_groups.filter(**kwargs) + + def create_tenant_group(self, name, slug, **kwargs): + """Create a new tenant group""" + return self.nb.tenancy.tenant_groups.create(name=name, slug=slug, **kwargs) + + +class ExtrasWrapper: + """Wrapper for Extras endpoints""" + + def __init__(self, nb): + self.nb = nb + + def create_tag(self, name, slug, **kwargs): + """Create a new tag""" + return self.nb.extras.tags.create(name=name, slug=slug, **kwargs) + + def get_tags(self, **kwargs): + """Get tags with optional filters""" + return self.nb.extras.tags.filter(**kwargs) + + def create_custom_field(self, name, type, **kwargs): + """Create a new custom field""" + return self.nb.extras.custom_fields.create(name=name, type=type, **kwargs) + + def get_custom_fields(self, **kwargs): + """Get custom fields with optional filters""" + return self.nb.extras.custom_fields.filter(**kwargs) + + def create_export_template(self, name, content_type, template_code, **kwargs): + """Create a new export template""" + return self.nb.extras.export_templates.create( + name=name, + content_type=content_type, + template_code=template_code, + **kwargs + ) + + def create_object_change(self, changed_object_type, changed_object_id, action, **kwargs): + """Create a record of an object change""" + return self.nb.extras.object_changes.create( + changed_object_type=changed_object_type, + 
changed_object_id=changed_object_id, + action=action, + **kwargs + ) + + +# Create a replacement for the original NetBox class +ExtendedNetBox = NetBoxWrapper +# Make it available as NetBox for existing code +NetBox = ExtendedNetBox diff --git a/migration/db.py b/migration/db.py new file mode 100644 index 0000000..c3beb66 --- /dev/null +++ b/migration/db.py @@ -0,0 +1,220 @@ +""" +Database helper functions for accessing Racktables data +""" +from migration.utils import get_db_connection, get_cursor +from migration.config import INTERFACE_NAME_MAPPINGS + +def getRackHeight(rackId): + """ + Get the height of a rack from Racktables + + Args: + rackId: ID of the rack + + Returns: + int: Height of the rack in units, or 0 if not found + """ + with get_db_connection() as connection: + with get_cursor(connection) as cursor: + cursor.execute("SELECT uint_value FROM AttributeValue WHERE object_id=%s AND attr_id=27", (rackId,)) + result = cursor.fetchone() + return result["uint_value"] if result else 0 + +def get_hw_type(racktables_object_id, hw_types): + """ + Get the hardware type for a given Racktables object + + Args: + racktables_object_id: ID of the object in Racktables + hw_types: Dictionary mapping hw type IDs to names + + Returns: + str: Hardware type name, or None if not found + """ + with get_db_connection() as connection: + with get_cursor(connection) as cursor: + cursor.execute("SELECT uint_value FROM AttributeValue WHERE object_id=%s AND attr_id=2", (racktables_object_id,)) + uint = cursor.fetchone() + + # If uint_value is not in hw_types, return a default or the uint_value as a string + if uint: + hw_type = hw_types.get(uint["uint_value"], f"Unknown Type ({uint['uint_value']})") + return hw_type + + return None + +def getRowsAtSite(siteId): + """ + Get all rows at a given site + + Args: + siteId: ID of the site + + Returns: + list: List of row dictionaries + """ + rows = [] + with get_db_connection() as connection: + with get_cursor(connection) as cursor: + cursor.execute("SELECT child_entity_id FROM EntityLink WHERE parent_entity_type='location' AND parent_entity_id=%s AND child_entity_type='row'", (siteId,)) + rowIds = cursor.fetchall() + for rowId in rowIds: + cursor.execute("SELECT id,name,label,asset_no,comment FROM Object WHERE id=%s", (rowId["child_entity_id"],)) + rows += cursor.fetchall() + return rows + +def getRacksAtRow(rowId): + """ + Get all racks in a given row + + Args: + rowId: ID of the row + + Returns: + list: List of rack dictionaries + """ + racks = [] + with get_db_connection() as connection: + with get_cursor(connection) as cursor: + cursor.execute("SELECT child_entity_id FROM EntityLink WHERE parent_entity_type='row' AND parent_entity_id=%s AND child_entity_type='rack'", (rowId,)) + rackIds = cursor.fetchall() + for rackId in rackIds: + cursor.execute("SELECT id,name,label,asset_no,comment FROM Object WHERE id=%s", (rackId["child_entity_id"],)) + racks += cursor.fetchall() + return racks + +def getAtomsAtRack(rackId): + """ + Get all atoms (placement units) in a rack + + Args: + rackId: ID of the rack + + Returns: + list: List of atom dictionaries + """ + with get_db_connection() as connection: + with get_cursor(connection) as cursor: + cursor.execute("SELECT rack_id,unit_no,atom,state,object_id FROM RackSpace WHERE rack_id=%s", (rackId,)) + return cursor.fetchall() + +def getTags(entity_realm, entity_id): + """ + Get all tags for a given entity + + Args: + entity_realm: Type of entity (e.g., 'object', 'rack') + entity_id: ID of the entity + + Returns: + list: 
List of tag dictionaries + """ + tags = [] + with get_db_connection() as connection: + with get_cursor(connection) as cursor: + cursor.execute("SELECT tag_id FROM TagStorage WHERE entity_id=%s AND entity_realm=%s", (entity_id, entity_realm)) + tag_ids = [x["tag_id"] for x in cursor.fetchall()] + for tag_id in tag_ids: + cursor.execute("SELECT tag FROM TagTree WHERE id=%s", (tag_id,)) + tags += cursor.fetchall() + return [{'name': tag["tag"]} for tag in tags] + +def getDeviceType(objtype_id): + """ + Get the device type name for a given object type ID + + Args: + objtype_id: Object type ID + + Returns: + str: Device type name, or None if not found + """ + with get_db_connection() as connection: + with get_cursor(connection) as cursor: + cursor.execute("SELECT dict_key,dict_value FROM Dictionary WHERE dict_key=%s", (objtype_id,)) + result = cursor.fetchone() + return result["dict_value"] if result else None + +def get_custom_fields(racktables_object_id, slugified_attributes=None, initial_dict=None): + """ + Get all custom field values for a given object + + Args: + racktables_object_id: ID of the object in Racktables + slugified_attributes: Dictionary mapping attribute IDs to slugified names + initial_dict: Initial dictionary to populate (optional) + + Returns: + dict: Dictionary of custom field values + """ + custom_fields = initial_dict if initial_dict else dict() + + if slugified_attributes is None: + slugified_attributes = {} + + with get_db_connection() as connection: + with get_cursor(connection) as cursor: + cursor.execute("SELECT attr_id,string_value,uint_value FROM AttributeValue WHERE object_id=%s", (racktables_object_id,)) + attributes = cursor.fetchall() + + for attr in attributes: + attr_id = attr["attr_id"] + string_value = attr["string_value"] + uint_value = attr["uint_value"] + + # Skip specific known attributes or add more as needed + if attr_id in (2, 27, 10014): + continue + + # Only process if the attribute ID is in the slugified_attributes + if attr_id in slugified_attributes: + custom_fields[slugified_attributes[attr_id]] = string_value if string_value else uint_value + + return custom_fields + +def device_is_in_cluster(device_id): + """ + Check if a device is in a VM cluster + + Args: + device_id: ID of the device + + Returns: + tuple: (is_in_cluster, cluster_name, parent_entity_ids) + """ + with get_db_connection() as connection: + with get_cursor(connection) as cursor: + cursor.execute("SELECT parent_entity_id FROM EntityLink WHERE parent_entity_type=\"object\" AND child_entity_id=%s", (device_id,)) + parent_entity_ids = [parent_entity_id["parent_entity_id"] for parent_entity_id in cursor.fetchall()] + + for parent_entity_id in parent_entity_ids: + cursor.execute("SELECT objtype_id,name FROM Object WHERE id=%s", (parent_entity_id,)) + result = cursor.fetchone() + if result: + parent_objtype_id, parent_name = result["objtype_id"], result["name"] + + if parent_objtype_id == 1505: # VM Cluster + return True, parent_name, parent_entity_ids + + return False, None, parent_entity_ids + +def change_interface_name(interface_name, objtype_id): + """ + Clean up interface names based on device type and standardization rules + + Args: + interface_name: Original interface name + objtype_id: Object type ID of the device + + Returns: + str: Standardized interface name + """ + interface_name = interface_name.strip() + + if objtype_id in (7, 8): # Router or Network Switch + for prefix in INTERFACE_NAME_MAPPINGS: + # Make sure the prefix is followed by a number + if 
interface_name.startswith(prefix) and len(interface_name) > len(prefix) and interface_name[len(prefix)] in "0123456789- ":
+                interface_name = interface_name.replace(prefix, INTERFACE_NAME_MAPPINGS[prefix], 1)
+
+    return interface_name
diff --git a/migration/devices.py b/migration/devices.py
new file mode 100644
index 0000000..3c36d5d
--- /dev/null
+++ b/migration/devices.py
@@ -0,0 +1,544 @@
+"""
+Device creation and management functions
+"""
+from slugify import slugify
+
+from migration.utils import (
+    get_db_connection, get_cursor, pickleLoad, pickleDump, error_log
+)
+from migration.db import (
+    getAtomsAtRack, getTags, get_hw_type, getDeviceType, get_custom_fields, device_is_in_cluster
+)
+from migration.config import (
+    PARENT_OBJTYPE_IDS, OBJTYPE_ID_NAMES, RACKTABLES_MANUFACTURERS,
+    PARENT_CHILD_OBJTYPE_ID_PAIRS, FIRST_ASCII_CHARACTER,
+    TARGET_TENANT, TARGET_TENANT_ID, TARGET_SITE
+)
+
+# Global tracking of created objects
+global_names = set()
+global_devices = []
+global_device_roles = set()
+global_manufacturers = set()
+global_device_types = set()
+global_physical_object_ids = set()
+global_non_physical_object_ids = set()
+asset_tags = set()
+serials = dict()
+# Maps Racktables Dictionary keys (attr_id 2 values) to hardware model names;
+# populate from the Dictionary table before creating devices
+hw_types = dict()
+
+def get_manufacturer_role_type(racktables_object_id, objtype_id, height, is_full_depth):
+    """
+    Determine manufacturer, role, and type for a device
+
+    Args:
+        racktables_object_id: Object ID in Racktables
+        objtype_id: Object type ID
+        height: Device height in U
+        is_full_depth: Whether device is full depth
+
+    Returns:
+        tuple: (manufacturer, device_role, device_type_model)
+    """
+    original_device_type = getDeviceType(objtype_id)
+    manufacturer = original_device_type
+
+    # Add the height to the type model, as well as whether the device is full depth
+    hw_type = get_hw_type(racktables_object_id, hw_types)
+    if hw_type:
+        device_type = hw_type
+
+        for racktables_manufacturer in RACKTABLES_MANUFACTURERS:
+            if device_type.startswith(racktables_manufacturer) or device_type.startswith(racktables_manufacturer+" "):
+                device_type = device_type.replace(racktables_manufacturer, " ", 1).lstrip(" ")
+                manufacturer = racktables_manufacturer
+    else:
+        device_type = original_device_type
+
+    device_type_model = "{}-{}U{}".format(device_type, height, "-full" if is_full_depth else "")
+
+    return manufacturer, original_device_type, device_type_model
+
+def create_device_at_location(netbox, device_name, face, start_height, device_role, manufacturer,
+                              device_type_model, site_name, rack_name, asset_no, racktables_device_id):
+    """
+    Create a device at a specific location in a rack
+
+    Args:
+        netbox: NetBox client instance
+        device_name: Name of the device
+        face: Rack face ('front', 'rear')
+        start_height: Starting rack unit
+        device_role: Device role name
+        manufacturer: Manufacturer name
+        device_type_model: Device type model
+        site_name: Site name
+        rack_name: Rack name
+        asset_no: Asset number
+        racktables_device_id: Original ID in Racktables
+
+    Returns:
+        tuple: (device_name, device_id)
+    """
+    global global_devices, global_names, global_device_roles, global_manufacturers, global_device_types, asset_tags
+
+    # Check if device already exists at this location
+    name_at_location = None
+    id_at_location = None
+
+    for device in global_devices:
+        if (face == device['face']['value'] and start_height == device['position'] and
+            device_role == device['device_role']['name'] and
+            manufacturer == device['device_type']['manufacturer']['name'] and
+            device_type_model == device['device_type']['model'] and
+            site_name == device['site']['name'] and rack_name == device['rack']['name']):
+            name_at_location = device['name']
+            id_at_location = device['id']
+            break
+
+    if name_at_location is None:
+        # Use original name if unique, otherwise append counter
+        name_at_location = device_name
+
+        if device_name in global_names:
+            name_counter = 1
+            while True:
+                counter_name = device_name + ".{}".format(name_counter)
+                if counter_name not in global_names:
+                    name_at_location = counter_name
+                    break
+                else:
+                    name_counter += 1
+
+        # Check if device is in a VM cluster
+        device_in_vm_cluster, device_vm_cluster_name, parent_entity_ids = device_is_in_cluster(racktables_device_id)
+
+        # Get custom fields for this device
+        custom_fields = get_custom_fields(racktables_device_id)
+
+        # Get serial number if available
+        serial = serials[racktables_device_id] if racktables_device_id in serials else ""
+
+        # Handle asset tag duplicates
+        asset_no = asset_no.strip() if asset_no else None
+        if asset_no and asset_no in asset_tags:
+            asset_no = asset_no + "-1"
+
+        # Add tenant parameter if TARGET_TENANT_ID is specified
+        tenant_param = {}
+        if TARGET_TENANT_ID:
+            tenant_param = {"tenant": TARGET_TENANT_ID}
+
+        # Create the device
+        try:
+            device = netbox.dcim.create_device(
+                custom_fields=custom_fields,
+                face=face,
+                cluster={"name": device_vm_cluster_name} if device_in_vm_cluster else None,
+                asset_tag=asset_no,
+                serial=serial,
+                position=start_height,
+                name=name_at_location,
+                device_role=device_role,
+                manufacturer={"name": manufacturer},
+                device_type=device_type_model,
+                site_name=site_name,
+                rack={"name": rack_name},
+                **tenant_param  # Add tenant parameter
+            )
+
+            if asset_no:
+                asset_tags.add(asset_no)
+
+            id_at_location = device['id']
+            global_names.add(name_at_location)
+            global_devices.append(device)
+
+            print(f"Created device {name_at_location} at {rack_name} U{start_height} {face}")
+        except Exception as e:
+            error_log(f"Error creating device {name_at_location}: {str(e)}")
+            return None, None
+    else:
+        print(f"Device {name_at_location} already exists at location")
+
+    return name_at_location, id_at_location
+
+def create_racked_devices(netbox):
+    """
+    Create devices in racks based on Racktables data
+
+    Args:
+        netbox: NetBox client instance
+    """
+    # global_devices and global_names are reassigned below and read by
+    # create_device_at_location, so they must be declared global here too
+    global global_devices, global_names, global_physical_object_ids, global_device_roles, global_manufacturers, global_device_types
+
+    print("Creating racked devices")
+
+    # Load existing devices, names, roles, manufacturers, and types
+    # If tenant filtering is enabled, filter devices by tenant
+    device_filters = {}
+    if TARGET_TENANT_ID:
+        device_filters["tenant_id"] = TARGET_TENANT_ID
+
+    global_devices = netbox.dcim.get_devices(**device_filters)
+    print(f"Got {len(global_devices)} existing devices")
+
+    global_names = set(device['name'] for device in global_devices)
+    global_device_roles = set(role['name'] for role in netbox.dcim.get_device_roles())
+    global_manufacturers = set(manufacturer['name'] for manufacturer in netbox.dcim.get_manufacturers())
+    global_device_types = set(device_type['model'] for device_type in netbox.dcim.get_device_types())
+
+    # Load serial numbers for devices
+    with get_db_connection() as connection:
+        with get_cursor(connection) as cursor:
+            cursor.execute("SELECT object_id, string_value FROM AttributeValue WHERE attr_id=10014")
+            for row in cursor.fetchall():
+                serials[row["object_id"]] = row["string_value"] if row["string_value"] else ""
+
+    # Get 
racks from NetBox + rack_filters = {} + if TARGET_SITE: + rack_filters["site"] = TARGET_SITE + + racks = netbox.dcim.racks.filter(**rack_filters) + + # Process each rack and create devices + for rack in racks: + rack_name = rack['name'] + site_name = rack['site']['name'] + + # Skip if site filtering is enabled and this is not the target site + if TARGET_SITE and site_name != TARGET_SITE: + continue + + # Extract Racktables rack ID from name (temporary solution) + # In a production environment, you would store this mapping + rack_id = rack['id'] + + # Get atoms (device placements) for this rack from Racktables + atoms = getAtomsAtRack(rack_id) + + if atoms: + # Create devices based on atoms + create_devices_in_rack(netbox, atoms, rack_name, site_name, rack['id']) + + # Save tracking of physical devices for interface creation + pickleDump("global_physical_object_ids", global_physical_object_ids) + +def create_devices_in_rack(netbox, atoms, rack_name, site_name, rack_id): + """ + Create devices in a rack based on atoms data + + Args: + netbox: NetBox client instance + atoms: List of atom dictionaries + rack_name: Rack name + site_name: Site name + rack_id: Rack ID in NetBox + """ + # Put positions into dict based on Id + atoms_dict = {} + for atom in atoms: + key = str(atom["object_id"]) + if key not in atoms_dict: + atoms_dict[key] = [atom] + else: + atoms_dict[key].append(atom) + + # Find devices that may need to be split due to non-contiguous placement + added_atom_objects = {} + separated_Ids = False + + # Process devices in the rack + for Id in atoms_dict: + # Skip null ID (reservations) + if Id == "None": + continue + + real_id = int(Id) + + # Get device info from Racktables + with get_db_connection() as connection: + with get_cursor(connection) as cursor: + cursor.execute("SELECT id,name,label,objtype_id,has_problems,comment,asset_no FROM Object WHERE id=%s", (real_id,)) + info = cursor.fetchone() + + if not info: + continue + + objtype_id = info["objtype_id"] + device_name = info["name"] + asset_no = info["asset_no"] + + # Get device tags + device_tags = getTags("object", real_id) + + # Determine face and depth + if 'rear' not in [atom["atom"] for atom in atoms_dict[Id]]: + face = 'front' + is_full_depth = False + elif 'front' not in [atom["atom"] for atom in atoms_dict[Id]]: + face = 'rear' + is_full_depth = False + else: + face = 'front' # NetBox doesn't have 'both' + is_full_depth = True + + # Calculate height + start_height = min([atom["unit_no"] for atom in atoms_dict[Id]]) + height = max([atom["unit_no"] for atom in atoms_dict[Id]]) - start_height + 1 + + # Get device details + manufacturer, device_role, device_type_model = get_manufacturer_role_type( + real_id, objtype_id, height, is_full_depth + ) + + # Create device role if needed + if device_role not in global_device_roles: + netbox.dcim.create_device_role(device_role, "ffffff", slugify(device_role)) + global_device_roles.add(device_role) + + # Create manufacturer if needed + if manufacturer not in global_manufacturers: + netbox.dcim.create_manufacturer(manufacturer, slugify(manufacturer)) + global_manufacturers.add(manufacturer) + + # Adjust device type for parent devices + if objtype_id in PARENT_OBJTYPE_IDS: + device_type_model += "-parent" + + # Create device type if needed + if device_type_model not in global_device_types: + netbox.dcim.create_device_type( + model=device_type_model, + manufacturer={"name": manufacturer}, + slug=slugify(device_type_model), + u_height=height, + is_full_depth=is_full_depth, + 
tags=device_tags, + subdevice_role="parent" if objtype_id in PARENT_OBJTYPE_IDS else "" + ) + global_device_types.add(device_type_model) + + # Create the device + device_name, device_id = create_device_at_location( + netbox, device_name, face, start_height, device_role, manufacturer, + device_type_model, site_name, rack_name, asset_no, real_id + ) + + if device_name and device_id: + # Store device information for interface creation + global_physical_object_ids.add((device_name, info["id"], device_id, objtype_id)) + +def create_non_racked_devices(netbox): + """ + Create non-racked devices from Racktables in NetBox + + Args: + netbox: NetBox client instance + """ + global global_non_physical_object_ids + + print("Creating non-racked devices") + + # Load existing tracking of non-physical devices + global_non_physical_object_ids = pickleLoad("global_non_physical_object_ids", set()) + + # Process each object type + for objtype_id in OBJTYPE_ID_NAMES: + print(f"Processing {OBJTYPE_ID_NAMES[objtype_id]} devices") + + # Get all objects of this type from Racktables + with get_db_connection() as connection: + with get_cursor(connection) as cursor: + cursor.execute("SELECT id,name,label,asset_no,comment FROM Object WHERE objtype_id=%s", (objtype_id,)) + objs = cursor.fetchall() + + # Convert to the format expected by create_parent_child_devices + objs_list = [(obj["id"], obj["name"], obj["label"], obj["asset_no"], obj["comment"]) for obj in objs] + + # Create devices + children_without_parents = create_parent_child_devices(netbox, objs_list, objtype_id) + + # Try again for children whose parents weren't created yet + if children_without_parents: + create_parent_child_devices(netbox, children_without_parents, objtype_id) + + # Save tracking of non-physical devices for interface creation + pickleDump("global_non_physical_object_ids", global_non_physical_object_ids) + +def create_parent_child_devices(netbox, data, objtype_id): + """ + Create devices and establish parent-child relationships + + Args: + netbox: NetBox client instance + data: List of device data tuples + objtype_id: Object type ID + + Returns: + list: Devices that couldn't be created due to missing parents + """ + global global_non_physical_object_ids, asset_tags + + # Track devices that couldn't be created due to missing parents + not_created_parents = [] + + # Get existing data from NetBox + # If tenant filtering is enabled, filter devices by tenant + device_filters = {} + if TARGET_TENANT_ID: + device_filters["tenant_id"] = TARGET_TENANT_ID + + existing_device_names = set(device['name'].strip() for device in netbox.dcim.get_devices(**device_filters) if device['name']) + + # Map device bay names by parent device + existing_device_bays = {} + for device_bay in netbox.dcim.get_device_bays(): + parent_name = device_bay['device']['name'] + if parent_name not in existing_device_bays: + existing_device_bays[parent_name] = set() + existing_device_bays[parent_name].add(device_bay['name']) + + # Process each device + for racktables_device_id, object_name, label, asset_no, comment in data: + # Skip if no name + if not object_name: + continue + + object_name = object_name.strip() + + # Skip if already exists + if object_name in existing_device_names: + continue + + # Create device in the "None" site + site_name = "None" + + # Get device details + manufacturer, device_role, device_type_model = get_manufacturer_role_type( + racktables_device_id, objtype_id, 0, False + ) + + # Check if device is in a VM cluster + device_in_vm_cluster, 
device_vm_cluster_name, parent_entity_ids = device_is_in_cluster(racktables_device_id) + + # Determine if device is a child or parent + is_child = False + is_parent = False + subdevice_role = "" + is_child_parent_name = None + + # Check for parent-child relationships + for parent_from_pairs_objtype_id, child_from_pairs_objtype_id in PARENT_CHILD_OBJTYPE_ID_PAIRS: + if objtype_id == child_from_pairs_objtype_id: + # Check for parent + for parent_entity_id in parent_entity_ids: + with get_db_connection() as connection: + with get_cursor(connection) as cursor: + cursor.execute("SELECT objtype_id,name FROM Object WHERE id=%s", (parent_entity_id,)) + result = cursor.fetchone() + if result and result["objtype_id"] == parent_from_pairs_objtype_id: + is_child = True + is_child_parent_name = result["name"].strip() + break + + if is_child: + device_type_model += "-child" + subdevice_role = "child" + break + + elif objtype_id == parent_from_pairs_objtype_id: + is_parent = True + device_type_model += "-parent" + subdevice_role = "parent" + break + + # Create device type if needed + if device_type_model not in global_device_types: + try: + netbox.dcim.create_device_type( + model=device_type_model, + slug=slugify(device_type_model), + manufacturer={"name": manufacturer}, + u_height=0, + subdevice_role=subdevice_role + ) + global_device_types.add(device_type_model) + except Exception as e: + error_log(f"Error creating device type {device_type_model}: {str(e)}") + + # Get device tags and custom fields + device_tags = getTags("object", racktables_device_id) + custom_fields = get_custom_fields(racktables_device_id, {"Device_Label": label}) + serial = serials.get(racktables_device_id, "") + + # Handle asset tag duplicates + asset_no = asset_no.strip() if asset_no else None + if asset_no and asset_no in asset_tags: + asset_no = f"{asset_no}-1" + + # Add tenant parameter if TARGET_TENANT_ID is specified + tenant_param = {} + if TARGET_TENANT_ID: + tenant_param = {"tenant": TARGET_TENANT_ID} + + # Create the device + try: + device = netbox.dcim.create_device( + name=object_name, + cluster={"name": device_vm_cluster_name} if device_in_vm_cluster else None, + asset_tag=asset_no, + serial=serial, + custom_fields=custom_fields, + device_type=device_type_model, + device_role=device_role, + site_name=site_name, + comment=comment[:200] if comment else "", + tags=device_tags, + **tenant_param # Add tenant parameter + ) + + if asset_no: + asset_tags.add(asset_no) + + # Track created device + global_non_physical_object_ids.add((object_name, racktables_device_id, device['id'], objtype_id)) + print(f"Created non-racked device: {object_name}") + + # Handle child device in parent's device bay + if is_child and is_child_parent_name: + # Find the parent device + parent_devices = netbox.dcim.get_devices(name=is_child_parent_name) + if parent_devices: + parent_device = parent_devices[0] + + # Determine new bay name + if is_child_parent_name in existing_device_bays: + try: + new_bay_number = max(int(bay.split('-')[1]) for bay in existing_device_bays[is_child_parent_name]) + 1 + except ValueError: + new_bay_number = 1 + else: + new_bay_number = 1 + + new_bay_name = f"bay-{new_bay_number}" + + # Create device bay + try: + bay = netbox.dcim.create_device_bay( + name=new_bay_name, + device_id=parent_device['id'], + installed_device_id=device['id'] + ) + existing_device_bays.setdefault(is_child_parent_name, set()).add(new_bay_name) + print(f"Added {object_name} to {is_child_parent_name} in bay {new_bay_name}") + except Exception as e: 
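+                            # Bay creation is best-effort: failures (e.g. a
+                            # duplicate bay name or a parent type without a
+                            # "parent" subdevice role) are logged without
+                            # rolling back the child device created above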
+ error_log(f"Error creating device bay for {object_name}: {str(e)}") + + except Exception as e: + error_log(f"Error creating device {object_name}: {str(e)}") + not_created_parents.append((racktables_device_id, object_name, label, asset_no, comment)) + + return not_created_parents diff --git a/migration/extended/__init__.py b/migration/extended/__init__.py new file mode 100644 index 0000000..2371537 --- /dev/null +++ b/migration/extended/__init__.py @@ -0,0 +1,3 @@ +""" +Extended migration components for additional Racktables data +""" diff --git a/migration/extended/available_subnets.py b/migration/extended/available_subnets.py new file mode 100644 index 0000000..dfed1d1 --- /dev/null +++ b/migration/extended/available_subnets.py @@ -0,0 +1,391 @@ +""" +Functions for creating available subnet prefixes with improved detection +""" +import ipaddress +import requests +from migration.utils import error_log, ensure_tag_exists +from migration.config import NB_HOST, NB_PORT, NB_TOKEN, NB_USE_SSL, TARGET_TENANT_ID + +def create_available_prefixes(netbox): + """ + Create available subnet prefixes using NetBox API + + Args: + netbox: NetBox client instance + """ + print("\nCreating available subnet prefixes using NetBox API...") + + # Import helpers + from migration.netbox_status import get_valid_status_choices, determine_prefix_status + from migration.site_tenant import get_site_tenant_params + + # Get valid status choices + valid_statuses = get_valid_status_choices(netbox, 'prefix') + print(f"Valid prefix statuses in your NetBox: {', '.join(valid_statuses)}") + + # Create the Available tag if it doesn't exist + tag_exists = ensure_tag_exists(netbox, "Available") + + # Get site and tenant parameters + association_params = get_site_tenant_params() + + # Configure API access + protocol = "https" if NB_USE_SSL else "http" + api_url = f"{protocol}://{NB_HOST}:{NB_PORT}/api" + headers = { + "Authorization": f"Token {NB_TOKEN}", + "Content-Type": "application/json", + "Accept": "application/json" + } + + # Get all prefixes that could contain available prefixes + existing_prefixes = list(netbox.ipam.get_ip_prefixes()) + + # Try to analyze a sample prefix to understand structure + if existing_prefixes and len(existing_prefixes) > 0: + sample = existing_prefixes[0] + print(f"DEBUG: Sample prefix type: {type(sample)}") + if hasattr(sample, '__dict__'): + print(f"DEBUG: Sample prefix attrs: {dir(sample)[:5]}...") + + parent_prefixes = [] + + # Get all possible parent prefixes - use less strict filtering + for p in existing_prefixes: + try: + # Extract prefix string regardless of response format + prefix_str = None + if hasattr(p, 'prefix'): + prefix_str = p.prefix + elif isinstance(p, dict) and 'prefix' in p: + prefix_str = p['prefix'] + else: + # Try accessing as dictionary even if it's an object + try: + prefix_str = p['prefix'] + except: + # Last resort - try string conversion + prefix_str = str(p) + if '/' not in prefix_str: + continue + + if not prefix_str: + continue + + # Don't filter as strictly - include all potential parents + parent_prefixes.append(p) + + except Exception as e: + error_log(f"Error processing potential parent prefix: {str(e)}") + + print(f"Found {len(parent_prefixes)} potential parent prefixes") + available_count = 0 + + # Process each parent to find available subnets + for parent in parent_prefixes: + # Extract parent ID and prefix string + parent_id = None + if hasattr(parent, 'id'): + parent_id = parent.id + elif isinstance(parent, dict) and 'id' in parent: + parent_id = 
parent['id'] + + if not parent_id: + continue + + # Extract prefix for logging + parent_prefix = None + if hasattr(parent, 'prefix'): + parent_prefix = parent.prefix + elif isinstance(parent, dict) and 'prefix' in parent: + parent_prefix = parent['prefix'] + + # Get available prefixes directly from API + available_url = f"{api_url}/ipam/prefixes/{parent_id}/available-prefixes/" + + try: + response = requests.get( + available_url, + headers=headers, + verify=NB_USE_SSL + ) + + if response.status_code != 200: + error_log(f"Error getting available prefixes for {parent_prefix}: {response.text}") + continue + + available_prefixes = response.json() + if not available_prefixes: + continue + + print(f"Found {len(available_prefixes)} available prefixes in {parent_prefix}") + + # Process found available prefixes - minimal filtering + for available in available_prefixes: + prefix_str = available['prefix'] + + # Use the improved status determination for available prefixes + status = determine_prefix_status("", "Available prefix", valid_statuses) + + # Create the available prefix - don't filter by prefix length + try: + # Only add tags if the tag exists + tags_param = [{'name': 'Available'}] if tag_exists else [] + + # Prepare params + params = { + 'prefix': prefix_str, + 'status': status, + 'description': "Available prefix", + 'tags': tags_param + } + + # Add site and tenant parameters + params.update(association_params) + + # Create the prefix with all parameters + netbox.ipam.create_ip_prefix(**params) + available_count += 1 + print(f"Created available prefix: {prefix_str} with status '{status}'") + except Exception as e: + error_log(f"Error creating available prefix {prefix_str}: {str(e)}") + print(f"DEBUG ERROR: {str(e)}") + + except Exception as e: + error_log(f"Error processing parent prefix {parent_prefix}: {str(e)}") + print(f"DEBUG ERROR: {str(e)}") + + print(f"Created {available_count} available subnet prefixes using API") + +def create_available_subnets(netbox): + """ + Identify and create available subnets in gaps between allocated prefixes + + Args: + netbox: NetBox client instance + """ + print("\nAnalyzing IP space for available subnets...") + + # Import helpers + from migration.netbox_status import get_valid_status_choices, determine_prefix_status + from migration.site_tenant import get_site_tenant_params + + # Get valid status choices + valid_statuses = get_valid_status_choices(netbox, 'prefix') + + # Create the Available tag if it doesn't exist + tag_exists = ensure_tag_exists(netbox, "Available") + + # Get site and tenant parameters + association_params = get_site_tenant_params() + + # Get all existing prefixes + existing_prefixes = list(netbox.ipam.get_ip_prefixes()) + + # Group prefixes by parent networks + network_groups = {} + for prefix in existing_prefixes: + try: + # Extract prefix string + prefix_str = None + if hasattr(prefix, 'prefix'): + prefix_str = prefix.prefix + elif isinstance(prefix, dict) and 'prefix' in prefix: + prefix_str = prefix['prefix'] + else: + continue + + network = ipaddress.ip_network(prefix_str) + + # Less strict filtering + if network.prefixlen >= 31 and isinstance(network, ipaddress.IPv4Network): + continue + if network.prefixlen >= 127 and isinstance(network, ipaddress.IPv6Network): + continue + + # Find the smallest containing prefix + parent_prefix = None + for potential_parent in existing_prefixes: + # Extract parent prefix string + parent_str = None + if hasattr(potential_parent, 'prefix'): + parent_str = potential_parent.prefix + elif 
isinstance(potential_parent, dict) and 'prefix' in potential_parent: + parent_str = potential_parent['prefix'] + else: + continue + + if prefix_str == parent_str: + continue + + try: + parent_network = ipaddress.ip_network(parent_str) + + # Skip if potential parent has same/smaller mask + if parent_network.prefixlen >= network.prefixlen: + continue + + if network.subnet_of(parent_network): + if not parent_prefix or ipaddress.ip_network(parent_prefix).prefixlen > parent_network.prefixlen: + parent_prefix = parent_str + except Exception: + continue + + # Group by parent prefix + if parent_prefix: + if parent_prefix not in network_groups: + network_groups[parent_prefix] = [] + network_groups[parent_prefix].append(prefix) + except Exception as e: + continue + + # Track created available subnets + available_count = 0 + status_counts = {status: 0 for status in valid_statuses} + + # Process each network group to find gaps + for parent_prefix, child_prefixes in network_groups.items(): + try: + parent = ipaddress.ip_network(parent_prefix) + + # Sort child prefixes by network address + def get_network_addr(p): + p_str = None + if hasattr(p, 'prefix'): + p_str = p.prefix + elif isinstance(p, dict) and 'prefix' in p: + p_str = p['prefix'] + else: + return 0 + try: + return int(ipaddress.ip_network(p_str).network_address) + except: + return 0 + + child_prefixes.sort(key=get_network_addr) + + # Track previous network end + prev_end = int(parent.network_address) + + # Find gaps between consecutive prefixes + for child in child_prefixes: + # Extract child prefix + child_str = None + if hasattr(child, 'prefix'): + child_str = child.prefix + elif isinstance(child, dict) and 'prefix' in child: + child_str = child['prefix'] + else: + continue + + child_net = ipaddress.ip_network(child_str) + start = int(child_net.network_address) + + # If there's a gap between previous end and current start + if start > prev_end: + # Create available subnets in the gap - less filtering + try: + gap_network = ipaddress.ip_network((prev_end, parent.prefixlen)) + + # Determine suitable prefix sizes based on network type + prefix_sizes = [24, 25, 26, 27, 28, 29] if isinstance(parent, ipaddress.IPv4Network) else [64, 80, 96, 112] + + for new_prefix_len in prefix_sizes: + if new_prefix_len > parent.prefixlen: + try: + subnets = list(gap_network.subnets(new_prefix=new_prefix_len)) + + # Create first 2 available subnets of each size + for subnet in subnets[:2]: + if int(subnet.network_address) < start and int(subnet.broadcast_address) < start: + try: + # Only add tags if the tag exists + tags_param = [{'name': 'Available'}] if tag_exists else [] + + # Use the improved status determination + status = determine_prefix_status("", "Available subnet", valid_statuses) + status_counts[status] += 1 + + # Prepare params + params = { + 'prefix': str(subnet), + 'status': status, + 'description': "Available subnet", + 'tags': tags_param + } + + # Add site and tenant parameters + params.update(association_params) + + # Create the prefix with all parameters + netbox.ipam.create_ip_prefix(**params) + available_count += 1 + print(f"Created available subnet: {subnet} with status '{status}'") + except Exception as e: + error_log(f"Error creating available subnet {subnet}: {str(e)}") + print(f"DEBUG ERROR: {str(e)}") + except Exception: + continue + except Exception as e: + error_log(f"Error processing subnets for gap: {str(e)}") + print(f"DEBUG ERROR: {str(e)}") + + # Update previous end for next iteration + prev_end = int(child_net.broadcast_address) 
+ 1 + + # Check for gap between last child and end of parent + if prev_end < int(parent.broadcast_address): + try: + gap_network = ipaddress.ip_network((prev_end, parent.prefixlen)) + + # Determine suitable prefix sizes based on network type + prefix_sizes = [24, 25, 26, 27, 28, 29] if isinstance(parent, ipaddress.IPv4Network) else [64, 80, 96, 112] + + for new_prefix_len in prefix_sizes: + if new_prefix_len > parent.prefixlen: + try: + subnets = list(gap_network.subnets(new_prefix=new_prefix_len)) + + # Create first 2 available subnets of each size + for subnet in subnets[:2]: + try: + # Only add tags if the tag exists + tags_param = [{'name': 'Available'}] if tag_exists else [] + + # Use the improved status determination + status = determine_prefix_status("", "Available end gap subnet", valid_statuses) + status_counts[status] += 1 + + # Prepare params + params = { + 'prefix': str(subnet), + 'status': status, + 'description': "Available end gap subnet", + 'tags': tags_param + } + + # Add site and tenant parameters + params.update(association_params) + + # Create the prefix with all parameters + netbox.ipam.create_ip_prefix(**params) + available_count += 1 + print(f"Created end gap subnet: {subnet} with status '{status}'") + except Exception as e: + error_log(f"Error creating end gap subnet {subnet}: {str(e)}") + print(f"DEBUG ERROR: {str(e)}") + except Exception: + continue + except Exception as e: + error_log(f"Error creating end gap network: {str(e)}") + print(f"DEBUG ERROR: {str(e)}") + + except Exception as e: + error_log(f"Error processing parent network {parent_prefix}: {str(e)}") + print(f"DEBUG ERROR: {str(e)}") + + print(f"Created {available_count} available subnet prefixes") + print("Status assignments:") + for status, count in status_counts.items(): + if count > 0: + print(f" - {status}: {count}") diff --git a/migration/extended/files.py b/migration/extended/files.py new file mode 100644 index 0000000..d5e4c5d --- /dev/null +++ b/migration/extended/files.py @@ -0,0 +1,148 @@ +""" +File attachment migration functions +""" +import os +import requests + +from racktables_netbox_migration.utils import error_log +from racktables_netbox_migration.config import NB_HOST, NB_PORT, NB_TOKEN, TARGET_SITE + +def migrate_files(cursor, netbox): + """ + Migrate file attachments from Racktables to NetBox + + Args: + cursor: Database cursor for Racktables + netbox: NetBox client instance + """ + print("\nMigrating file attachments...") + + # Get device IDs in target site if site filtering is enabled + site_device_names = set() + if TARGET_SITE: + print(f"Filtering file attachments for site: {TARGET_SITE}") + site_devices = netbox.dcim.get_devices(site=TARGET_SITE) + site_device_names = set(device['name'] for device in site_devices) + + # Also include VMs in clusters at the target site + site_clusters = netbox.virtualization.get_clusters(site=TARGET_SITE) + for cluster in site_clusters: + cluster_vms = netbox.virtualization.get_virtual_machines(cluster_id=cluster['id']) + site_device_names.update(vm['name'] for vm in cluster_vms) + + # Get files from Racktables + cursor.execute("SELECT id, name, type, size, contents FROM File") + file_data = cursor.fetchall() + + # Track migrated files for reference + migrated_files = {} + + # Set up directory for file storage + file_dir = "racktables_files" + os.makedirs(file_dir, exist_ok=True) + + for file_id, file_name, file_type, file_size, file_contents in file_data: + # Save file locally + file_path = os.path.join(file_dir, f"{file_id}_{file_name}") + with 
open(file_path, "wb") as f: + f.write(file_contents) + + migrated_files[file_id] = { + "name": file_name, + "path": file_path, + "type": file_type, + "size": file_size + } + + print(f"Saved file: {file_name} (ID: {file_id})") + + # Now get file links to associate files with objects + cursor.execute(""" + SELECT FL.file_id, FL.entity_type, FL.entity_id, F.name + FROM FileLink FL + JOIN File F ON FL.file_id = F.id + """) + + for file_id, entity_type, entity_id, file_name in cursor.fetchall(): + if entity_type == 'object': + # Get the object name + cursor.execute("SELECT name, objtype_id FROM Object WHERE id = %s", (entity_id,)) + obj_data = cursor.fetchone() + + if not obj_data: + continue + + obj_name, objtype_id = obj_data + + # Skip if the name is empty + if not obj_name: + continue + + obj_name = obj_name.strip() + + # Skip if site filtering is enabled and this device is not in the target site + if TARGET_SITE and obj_name not in site_device_names: + continue + + # Determine if this is a device or VM + is_vm = (objtype_id == 1504) # VM objtype_id + + # Find the object in NetBox + if is_vm: + obj = netbox.virtualization.get_virtual_machines(name=obj_name) + else: + obj = netbox.dcim.get_devices(name=obj_name) + + if not obj: + error_log(f"Could not find object {obj_name} to attach file {file_name}") + continue + + obj = obj[0] + + # Update the object with file reference in custom fields + if is_vm: + url = f"http://{NB_HOST}:{NB_PORT}/api/virtualization/virtual-machines/{obj['id']}/" + else: + url = f"http://{NB_HOST}:{NB_PORT}/api/dcim/devices/{obj['id']}/" + + headers = { + "Authorization": f"Token {NB_TOKEN}", + "Content-Type": "application/json" + } + + # Get current value if it exists + response = requests.get(url, headers=headers) + current_data = response.json() + + file_refs = current_data.get('custom_fields', {}).get('File_References', "") + if file_refs: + file_refs += f", {file_name} (from Racktables)" + else: + file_refs = f"{file_name} (from Racktables)" + + data = { + "custom_fields": { + "File_References": file_refs + } + } + + response = requests.patch(url, headers=headers, json=data) + if response.status_code in (200, 201): + print(f"Updated file reference for {obj_name}: {file_name}") + else: + error_log(f"Error updating file reference: {response.text}") + + # Create a summary document about migrated files + with open(os.path.join(file_dir, "migrated_files.txt"), "w") as f: + f.write("# Migrated Files from Racktables\n\n") + f.write("This document lists files migrated from Racktables to local storage.\n") + f.write("File references have been added to device custom fields.\n\n") + + f.write("## File List\n\n") + for file_id, file_info in migrated_files.items(): + f.write(f"- {file_info['name']} (ID: {file_id})\n") + f.write(f" Type: {file_info['type']}, Size: {file_info['size']} bytes\n") + f.write(f" Saved to: {file_info['path']}\n\n") + + print(f"File migration completed. 
Files saved to {file_dir} directory.") + print(f"See {os.path.join(file_dir, 'migrated_files.txt')} for a summary.") diff --git a/migration/extended/ip_ranges.py b/migration/extended/ip_ranges.py new file mode 100644 index 0000000..d275f88 --- /dev/null +++ b/migration/extended/ip_ranges.py @@ -0,0 +1,493 @@ +""" +IP range generation module to identify and create available IP ranges +""" +import ipaddress +from migration.utils import error_log, ensure_tag_exists +from migration.config import TARGET_SITE, IPV4_TAG, IPV6_TAG + +def create_ip_ranges_from_available_prefixes(netbox): + """ + Create IP ranges from available prefixes + + Args: + netbox: NetBox client instance + """ + print("\nCreating IP ranges from available prefixes...") + + # Create the Available tag if it doesn't exist + tag_exists = ensure_tag_exists(netbox, "Available") + + # Get all prefixes with Available tag + all_prefixes = list(netbox.ipam.get_ip_prefixes()) + available_prefixes = [] + + # Find prefixes with Available tag + for prefix in all_prefixes: + # Check for Available tag in different formats + has_available_tag = False + + # Method 1: Check for tags attribute + if hasattr(prefix, 'tags'): + tags = prefix.tags + for tag in tags: + if hasattr(tag, 'name') and tag.name == 'Available': + has_available_tag = True + break + elif isinstance(tag, dict) and tag.get('name') == 'Available': + has_available_tag = True + break + + # Method 2: Check for tags as dict key + elif isinstance(prefix, dict) and 'tags' in prefix: + tags = prefix['tags'] + for tag in tags: + if isinstance(tag, dict) and tag.get('name') == 'Available': + has_available_tag = True + break + + # Method 3: Check directly for tag as a property + elif hasattr(prefix, 'tag') and prefix.tag == 'Available': + has_available_tag = True + + # Method 4: Direct string search in serialized representation + elif 'Available' in str(prefix): + has_available_tag = True + + if has_available_tag: + available_prefixes.append(prefix) + + print(f"Found {len(available_prefixes)} available prefixes") + + # Debug prefix format if available + if available_prefixes: + sample = available_prefixes[0] + print(f"DEBUG: Available prefix sample type: {type(sample)}") + if hasattr(sample, '__dict__'): + print(f"DEBUG: Sample attributes: {dir(sample)[:5]}...") + + # Get existing IP ranges to avoid duplicates + existing_ranges = list(netbox.ipam.get_ip_ranges()) + existing_range_cidrs = set() + + for ip_range in existing_ranges: + # Extract addresses with multiple methods + start_ip = None + end_ip = None + + # Method 1: Direct attribute access + if hasattr(ip_range, 'start_address'): + start_ip = getattr(ip_range, 'start_address', '').split('/')[0] + if hasattr(ip_range, 'end_address'): + end_ip = getattr(ip_range, 'end_address', '').split('/')[0] + + # Method 2: Dictionary access + if start_ip is None and isinstance(ip_range, dict) and 'start_address' in ip_range: + start_ip = ip_range['start_address'].split('/')[0] + if end_ip is None and isinstance(ip_range, dict) and 'end_address' in ip_range: + end_ip = ip_range['end_address'].split('/')[0] + + # Only add if we have both addresses + if start_ip and end_ip: + existing_range_cidrs.add(f"{start_ip}-{end_ip}") + + ranges_created = 0 + + for prefix in available_prefixes: + # Try multiple methods to extract prefix string + prefix_str = None + + # Method 1: Direct attribute access + if hasattr(prefix, 'prefix'): + prefix_str = prefix.prefix + + # Method 2: Dictionary access + elif isinstance(prefix, dict) and 'prefix' in prefix: + 
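+            # Dict-style rows (e.g. raw REST responses) expose the value
+            # under the 'prefix' key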
prefix_str = prefix['prefix'] + + # Method 3: Direct string conversion + else: + try: + prefix_str = str(prefix) + if '/' not in prefix_str: + # Not a valid prefix + continue + except: + continue + + try: + prefix_obj = ipaddress.ip_network(prefix_str) + + # Skip very small prefixes - use less strict filtering + if prefix_obj.prefixlen >= 31 and isinstance(prefix_obj, ipaddress.IPv4Network): + continue + + # Create an IP range for the whole prefix + start_ip = prefix_obj.network_address + end_ip = prefix_obj.broadcast_address + range_cidr = f"{start_ip}-{end_ip}" + + if range_cidr not in existing_range_cidrs: + try: + # Only add tags if the tag exists + tags_param = [{"name": "Available"}] if tag_exists else [] + + ip_range = netbox.ipam.create_ip_range( + start_address=str(start_ip), + end_address=str(end_ip), + description="Available IP range", + tags=tags_param + ) + existing_range_cidrs.add(range_cidr) + ranges_created += 1 + print(f"Created IP range for available prefix: {start_ip} - {end_ip}") + except Exception as e: + error_log(f"Error creating IP range {start_ip} - {end_ip}: {str(e)}") + print(f"DEBUG ERROR: {str(e)}") + except Exception as e: + error_log(f"Error processing available prefix: {str(e)}") + print(f"DEBUG ERROR: {str(e)}") + + print(f"Created {ranges_created} IP ranges from available prefixes") + +def create_ip_ranges(netbox): + """ + Create IP ranges from IP prefixes and addresses + + Args: + netbox: NetBox client instance + """ + print("\nGenerating IP ranges...") + + # Create the Available tag if it doesn't exist + tag_exists = ensure_tag_exists(netbox, "Available") + + # Get all prefixes + prefixes = list(netbox.ipam.get_ip_prefixes()) + print(f"Found {len(prefixes)} IP prefixes") + + # Get all IP addresses + ip_addresses = list(netbox.ipam.get_ip_addresses()) + print(f"Found {len(ip_addresses)} IP addresses") + + # Get existing IP ranges to avoid duplicates + existing_ranges = list(netbox.ipam.get_ip_ranges()) + existing_range_cidrs = set() + + for ip_range in existing_ranges: + # Extract addresses with multiple methods + start_ip = None + end_ip = None + + # Method 1: Direct attribute access + if hasattr(ip_range, 'start_address'): + start_ip = getattr(ip_range, 'start_address', '').split('/')[0] + if hasattr(ip_range, 'end_address'): + end_ip = getattr(ip_range, 'end_address', '').split('/')[0] + + # Method 2: Dictionary access + if start_ip is None and isinstance(ip_range, dict) and 'start_address' in ip_range: + start_ip = ip_range['start_address'].split('/')[0] + if end_ip is None and isinstance(ip_range, dict) and 'end_address' in ip_range: + end_ip = ip_range['end_address'].split('/')[0] + + # Only add if we have both addresses + if start_ip and end_ip: + existing_range_cidrs.add(f"{start_ip}-{end_ip}") + + print(f"Found {len(existing_ranges)} existing IP ranges") + + # Group prefixes by larger containing prefixes + network_groups = {} + standalone_prefixes = [] + + for prefix in prefixes: + try: + # Extract prefix string + prefix_str = None + if hasattr(prefix, 'prefix'): + prefix_str = prefix.prefix + elif isinstance(prefix, dict) and 'prefix' in prefix: + prefix_str = prefix['prefix'] + else: + continue + + prefix_net = ipaddress.ip_network(prefix_str) + parent_found = False + + # Skip very small prefixes for analysis - less strict filtering + if prefix_net.prefixlen >= 31 and isinstance(prefix_net, ipaddress.IPv4Network): + continue + if prefix_net.prefixlen >= 127 and isinstance(prefix_net, ipaddress.IPv6Network): + continue + + # Find parent prefix + 
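+            # The first containing prefix with a shorter mask is accepted as
+            # the parent; prefixes that match nothing fall through to the
+            # standalone list handled further below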
for potential_parent in prefixes: + # Extract parent prefix string + parent_str = None + if hasattr(potential_parent, 'prefix'): + parent_str = potential_parent.prefix + elif isinstance(potential_parent, dict) and 'prefix' in potential_parent: + parent_str = potential_parent['prefix'] + else: + continue + + if prefix_str == parent_str: + continue + + try: + parent_net = ipaddress.ip_network(parent_str) + + # Skip if potential parent has same or higher prefix length + if parent_net.prefixlen >= prefix_net.prefixlen: + continue + + if prefix_net.subnet_of(parent_net): + if parent_str not in network_groups: + network_groups[parent_str] = [] + network_groups[parent_str].append(prefix) + parent_found = True + break + except: + continue + + if not parent_found: + standalone_prefixes.append(prefix) + except Exception as e: + continue + + # Process each network group to find gaps + ranges_created = 0 + + # Helper function to extract prefix string + def get_prefix_str(p): + if hasattr(p, 'prefix'): + return p.prefix + elif isinstance(p, dict) and 'prefix' in p: + return p['prefix'] + return None + + # Helper function to extract network address + def get_network_addr(p): + p_str = get_prefix_str(p) + if not p_str: + return 0 + try: + return int(ipaddress.ip_network(p_str).network_address) + except: + return 0 + + for parent_prefix, child_prefixes in network_groups.items(): + try: + parent = ipaddress.ip_network(parent_prefix) + + # Sort child prefixes by network address + child_prefixes.sort(key=get_network_addr) + + # Process gaps between child prefixes + prev_end = None + + for child in child_prefixes: + child_str = get_prefix_str(child) + if not child_str: + continue + + try: + current = ipaddress.ip_network(child_str) + current_start = int(current.network_address) + + # If this is not the first prefix and there's a gap + if prev_end is not None and current_start > prev_end + 1: + # We found a gap between prev_end and current_start + start_ip = ipaddress.ip_address(prev_end + 1) + end_ip = ipaddress.ip_address(current_start - 1) + + # Create an IP range for this gap + range_cidr = f"{start_ip}-{end_ip}" + if range_cidr not in existing_range_cidrs: + try: + # Only add tags if the tag exists + tags_param = [{"name": "Available"}] if tag_exists else [] + + ip_range = netbox.ipam.create_ip_range( + start_address=str(start_ip), + end_address=str(end_ip), + description="Gap IP range", + tags=tags_param + ) + existing_range_cidrs.add(range_cidr) + ranges_created += 1 + print(f"Created IP range: {start_ip} - {end_ip}") + except Exception as e: + error_log(f"Error creating IP range {start_ip} - {end_ip}: {str(e)}") + print(f"DEBUG ERROR: {str(e)}") + + # Update prev_end for next iteration + prev_end = int(current.broadcast_address) + except Exception: + continue + + # Check for gap after the last child prefix + if prev_end is not None and prev_end < int(parent.broadcast_address): + # Gap between last child and end of parent + start_ip = ipaddress.ip_address(prev_end + 1) + end_ip = ipaddress.ip_address(int(parent.broadcast_address)) + + # Create IP range for this gap + range_cidr = f"{start_ip}-{end_ip}" + if range_cidr not in existing_range_cidrs: + try: + # Only add tags if the tag exists + tags_param = [{"name": "Available"}] if tag_exists else [] + + ip_range = netbox.ipam.create_ip_range( + start_address=str(start_ip), + end_address=str(end_ip), + description="End gap IP range", + tags=tags_param + ) + existing_range_cidrs.add(range_cidr) + ranges_created += 1 + print(f"Created IP range: {start_ip} - 
{end_ip}") + except Exception as e: + error_log(f"Error creating IP range {start_ip} - {end_ip}: {str(e)}") + print(f"DEBUG ERROR: {str(e)}") + + except Exception as e: + error_log(f"Error processing parent network {parent_prefix}: {str(e)}") + print(f"DEBUG ERROR: {str(e)}") + + # Process standalone prefixes + for prefix in standalone_prefixes: + try: + prefix_str = get_prefix_str(prefix) + if not prefix_str: + continue + + network = ipaddress.ip_network(prefix_str) + + # Check for addresses within this prefix + contained_addresses = [] + for ip in ip_addresses: + try: + # Extract IP address string + ip_addr_str = None + if hasattr(ip, 'address'): + ip_addr_str = ip.address + elif isinstance(ip, dict) and 'address' in ip: + ip_addr_str = ip['address'] + else: + continue + + addr = ipaddress.ip_address(ip_addr_str.split('/')[0]) + if addr in network: + contained_addresses.append(addr) + except: + continue + + if not contained_addresses: + # No addresses in this prefix, create range for whole prefix + start_ip = network.network_address + end_ip = network.broadcast_address + range_cidr = f"{start_ip}-{end_ip}" + + if range_cidr not in existing_range_cidrs: + try: + # Only add tags if the tag exists + tags_param = [{"name": "Available"}] if tag_exists else [] + + ip_range = netbox.ipam.create_ip_range( + start_address=str(start_ip), + end_address=str(end_ip), + description="Empty prefix IP range", + tags=tags_param + ) + existing_range_cidrs.add(range_cidr) + ranges_created += 1 + print(f"Created IP range for empty prefix: {start_ip} - {end_ip}") + except Exception as e: + error_log(f"Error creating IP range {start_ip} - {end_ip}: {str(e)}") + print(f"DEBUG ERROR: {str(e)}") + else: + # Has addresses, find gaps + contained_addresses.sort() + + # Check for gap at the beginning + if int(contained_addresses[0]) > int(network.network_address): + start_ip = network.network_address + end_ip = ipaddress.ip_address(int(contained_addresses[0]) - 1) + range_cidr = f"{start_ip}-{end_ip}" + + if range_cidr not in existing_range_cidrs: + try: + # Only add tags if the tag exists + tags_param = [{"name": "Available"}] if tag_exists else [] + + ip_range = netbox.ipam.create_ip_range( + start_address=str(start_ip), + end_address=str(end_ip), + description="Beginning gap IP range", + tags=tags_param + ) + existing_range_cidrs.add(range_cidr) + ranges_created += 1 + print(f"Created IP range: {start_ip} - {end_ip}") + except Exception as e: + error_log(f"Error creating IP range {start_ip} - {end_ip}: {str(e)}") + print(f"DEBUG ERROR: {str(e)}") + + # Check for gaps between addresses + for i in range(len(contained_addresses) - 1): + curr_addr = int(contained_addresses[i]) + next_addr = int(contained_addresses[i + 1]) + + if next_addr > curr_addr + 1: + start_ip = ipaddress.ip_address(curr_addr + 1) + end_ip = ipaddress.ip_address(next_addr - 1) + range_cidr = f"{start_ip}-{end_ip}" + + if range_cidr not in existing_range_cidrs: + try: + # Only add tags if the tag exists + tags_param = [{"name": "Available"}] if tag_exists else [] + + ip_range = netbox.ipam.create_ip_range( + start_address=str(start_ip), + end_address=str(end_ip), + description="Middle gap IP range", + tags=tags_param + ) + existing_range_cidrs.add(range_cidr) + ranges_created += 1 + print(f"Created IP range: {start_ip} - {end_ip}") + except Exception as e: + error_log(f"Error creating IP range {start_ip} - {end_ip}: {str(e)}") + print(f"DEBUG ERROR: {str(e)}") + + # Check for gap at the end + if int(contained_addresses[-1]) < 
int(network.broadcast_address): + start_ip = ipaddress.ip_address(int(contained_addresses[-1]) + 1) + end_ip = network.broadcast_address + range_cidr = f"{start_ip}-{end_ip}" + + if range_cidr not in existing_range_cidrs: + try: + # Only add tags if the tag exists + tags_param = [{"name": "Available"}] if tag_exists else [] + + ip_range = netbox.ipam.create_ip_range( + start_address=str(start_ip), + end_address=str(end_ip), + description="End gap IP range", + tags=tags_param + ) + existing_range_cidrs.add(range_cidr) + ranges_created += 1 + print(f"Created IP range: {start_ip} - {end_ip}") + except Exception as e: + error_log(f"Error creating IP range {start_ip} - {end_ip}: {str(e)}") + print(f"DEBUG ERROR: {str(e)}") + + except Exception as e: + continue + + print(f"IP range generation completed. Created {ranges_created} IP ranges.") diff --git a/migration/extended/load_balancer.py b/migration/extended/load_balancer.py new file mode 100644 index 0000000..016e773 --- /dev/null +++ b/migration/extended/load_balancer.py @@ -0,0 +1,275 @@ +""" +Load balancing data migration functions +""" +import ipaddress +import requests +from slugify import slugify + +from racktables_netbox_migration.utils import error_log +from racktables_netbox_migration.config import NB_HOST, NB_PORT, NB_TOKEN, TARGET_SITE + +def migrate_load_balancing(cursor, netbox): + """ + Migrate load balancing data from Racktables to NetBox + + Args: + cursor: Database cursor for Racktables + netbox: NetBox client instance + """ + print("\nMigrating load balancing data...") + + # Get existing IP addresses from NetBox + existing_ips = {} + for ip in netbox.ipam.get_ip_addresses(): + existing_ips[ip['address']] = ip['id'] + + # Check if IPv4LB table exists + try: + cursor.execute("SHOW TABLES LIKE 'IPv4LB'") + if not cursor.fetchone(): + print("IPv4LB table not found in database. 
Skipping load balancer migration.") + return + + # Check table schema to determine available columns + cursor.execute("SHOW COLUMNS FROM IPv4LB") + lb_columns = {col['Field']: True for col in cursor.fetchall()} + print(f"Found IPv4LB table with columns: {', '.join(lb_columns.keys())}") + + # Build query dynamically based on available columns + query_fields = ["prio", "vsconfig", "rsconfig"] + + # Add rspool if it exists + if 'rspool' in lb_columns: + query_fields.append("rspool") + else: + print("Column 'rspool' not found in IPv4LB table, will use NULL values") + + # Add comment if it exists + if 'comment' in lb_columns: + query_fields.append("comment") + else: + print("Column 'comment' not found in IPv4LB table, will use empty values") + + # Construct the query + query = f"SELECT {', '.join(query_fields)} FROM IPv4LB" + cursor.execute(query) + + lb_entries = cursor.fetchall() + lb_count = 0 + + for entry in lb_entries: + # Extract values, handling possible absent columns + prio = entry['prio'] + vsconfig = entry['vsconfig'] + rsconfig = entry['rsconfig'] + rspool = entry['rspool'] if 'rspool' in lb_columns else None + comment = entry['comment'] if 'comment' in lb_columns else None + + # Parse the configs - these typically contain IP addresses and parameters + vs_parts = vsconfig.split(':') if vsconfig else [] + rs_parts = rsconfig.split(':') if rsconfig else [] + + # Extract VIP (Virtual IP) if available + vip = None + if len(vs_parts) > 0: + try: + vip = vs_parts[0] + # Validate this is an IP + ipaddress.ip_address(int(vip)) + except (ValueError, IndexError): + vip = None + + # Extract Real Server IP if available + rs_ip = None + if len(rs_parts) > 0: + try: + rs_ip = rs_parts[0] + # Validate this is an IP + ipaddress.ip_address(int(rs_ip)) + except (ValueError, IndexError): + rs_ip = None + + # If site filtering is enabled, check if these IPs are associated with devices in the target site + if TARGET_SITE: + # Skip implementation for brevity as it would require complex device association lookups + pass + + # If we have both IPs, create or update the LB relationship + if vip and rs_ip: + vip_cidr = f"{str(ipaddress.ip_address(int(vip)))}/32" + rs_ip_cidr = f"{str(ipaddress.ip_address(int(rs_ip)))}/32" + + # Update VIP with load balancer info + if vip_cidr in existing_ips: + url = f"http://{NB_HOST}:{NB_PORT}/api/ipam/ip-addresses/{existing_ips[vip_cidr]}/" + headers = { + "Authorization": f"Token {NB_TOKEN}", + "Content-Type": "application/json" + } + + # Get current data + response = requests.get(url, headers=headers) + if response.status_code != 200: + error_log(f"Error getting IP {vip_cidr}: {response.text}") + continue + + current_data = response.json() + + # Update description and custom fields + description_text = current_data.get('description', '') + if description_text: + description_text += f"\nLB: {comment}" if comment else "\nLoad balancer VIP" + else: + description_text = f"LB: {comment}" if comment else "Load balancer VIP" + + # Format the full LB config for the custom field + lb_config = f"VS: {vsconfig}, RS: {rsconfig}, Priority: {prio}" + + data = { + "description": description_text[:200], + "custom_fields": { + "LB_Config": lb_config, + "RS_Pool": rspool if rspool else "" + }, + "role": "vip" # Set role to VIP + } + + # Update the custom fields of existing data + if 'custom_fields' in current_data and current_data['custom_fields']: + for key, value in current_data['custom_fields'].items(): + if key not in data['custom_fields'] and value: + data['custom_fields'][key] = 
value + + response = requests.patch(url, headers=headers, json=data) + if response.status_code in (200, 201): + lb_count += 1 + print(f"Updated load balancer information for VIP {vip_cidr}") + else: + error_log(f"Error updating load balancer for VIP {vip_cidr}: {response.text}") + + # Update Real Server IP with load balancer info + if rs_ip_cidr in existing_ips: + url = f"http://{NB_HOST}:{NB_PORT}/api/ipam/ip-addresses/{existing_ips[rs_ip_cidr]}/" + headers = { + "Authorization": f"Token {NB_TOKEN}", + "Content-Type": "application/json" + } + + # Get current data + response = requests.get(url, headers=headers) + if response.status_code != 200: + error_log(f"Error getting IP {rs_ip_cidr}: {response.text}") + continue + + current_data = response.json() + + # Update description and custom fields + description_text = current_data.get('description', '') + if description_text: + description_text += f"\nLB: {comment}" if comment else "\nLoad balancer real server" + else: + description_text = f"LB: {comment}" if comment else "Load balancer real server" + + data = { + "description": description_text[:200], + "custom_fields": { + "LB_Pool": rspool if rspool else "", + "LB_Config": f"Part of pool {rspool if rspool else 'unknown'} for VIP {vip_cidr}" + } + } + + # Update the custom fields of existing data + if 'custom_fields' in current_data and current_data['custom_fields']: + for key, value in current_data['custom_fields'].items(): + if key not in data['custom_fields'] and value: + data['custom_fields'][key] = value + + response = requests.patch(url, headers=headers, json=data) + if response.status_code in (200, 201): + lb_count += 1 + print(f"Updated load balancer information for real server {rs_ip_cidr}") + else: + error_log(f"Error updating load balancer for real server {rs_ip_cidr}: {response.text}") + + except Exception as e: + error_log(f"Database error in load balancer migration: {str(e)}") + print(f"Database connection error: {str(e)}") + print("Skipping load balancer migration") + return + + # Check for RS Pool table + try: + cursor.execute("SHOW TABLES LIKE 'IPv4RSPool'") + if cursor.fetchone(): + # Check IPv4RSPool schema + cursor.execute("SHOW COLUMNS FROM IPv4RSPool") + rspool_columns = {col['Field']: True for col in cursor.fetchall()} + + # Build query dynamically + query_fields = [] + + if 'pool_name' in rspool_columns: + query_fields.append('pool_name') + else: + query_fields.append("'unknown' as pool_name") + + if 'vs_id' in rspool_columns: + query_fields.append('vs_id') + else: + query_fields.append("0 as vs_id") + + if 'rspool_id' in rspool_columns: + query_fields.append('rspool_id') + else: + query_fields.append("0 as rspool_id") + + query = f"SELECT {', '.join(query_fields)} FROM IPv4RSPool" + cursor.execute(query) + + tag_count = 0 + + for row in cursor.fetchall(): + pool_name = row['pool_name'] + vs_id = row['vs_id'] + rspool_id = row['rspool_id'] + + # Get the VS info if VS table exists + vs_name = f"VS-{vs_id}" + try: + cursor.execute("SHOW TABLES LIKE 'VS'") + if cursor.fetchone(): + cursor.execute("SHOW COLUMNS FROM VS") + vs_columns = {col['Field']: True for col in cursor.fetchall()} + + if 'id' in vs_columns and 'name' in vs_columns: + cursor.execute(f"SELECT name FROM VS WHERE id = {vs_id}") + vs_result = cursor.fetchone() + if vs_result: + vs_name = vs_result['name'] + except Exception as e: + error_log(f"Error getting VS info: {str(e)}") + + # Create a tag for this pool + tag_name = f"LB-Pool-{pool_name}-{rspool_id}" + tag_slug = slugify(tag_name) + + try: + 
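+                    # NetBox core has no load-balancer pool object, so each RS
+                    # pool is recorded as a tag whose description captures the
+                    # pool and virtual service names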
netbox.extras.create_tag( + name=tag_name, + slug=tag_slug, + color="9c27b0", + description=f"Load balancer pool: {pool_name}, VS: {vs_name}" + ) + tag_count += 1 + print(f"Created tag for load balancer pool {pool_name}") + except Exception as e: + error_log(f"Error creating tag for load balancer pool {pool_name}: {str(e)}") + + print(f"Created {tag_count} pool tags") + else: + print("IPv4RSPool table not found in database") + except Exception as e: + error_log(f"Error processing RS pools: {str(e)}") + print(f"Error processing RS pools: {str(e)}") + + print(f"Load balancing data migration completed. Updated {lb_count} IP addresses.") diff --git a/migration/extended/monitoring.py b/migration/extended/monitoring.py new file mode 100644 index 0000000..4bc9d0a --- /dev/null +++ b/migration/extended/monitoring.py @@ -0,0 +1,121 @@ +""" +Monitoring data migration functions +""" +import requests + +from racktables_netbox_migration.utils import error_log +from racktables_netbox_migration.config import NB_HOST, NB_PORT, NB_TOKEN, TARGET_SITE + +def migrate_monitoring(cursor, netbox): + """ + Migrate monitoring data from Racktables to NetBox + + Args: + cursor: Database cursor for Racktables + netbox: NetBox client instance + """ + print("\nMigrating monitoring data...") + + # Get device names in target site if site filtering is enabled + site_device_names = set() + if TARGET_SITE: + print(f"Filtering monitoring data for site: {TARGET_SITE}") + site_devices = netbox.dcim.get_devices(site=TARGET_SITE) + site_device_names = set(device['name'] for device in site_devices) + + # Also include VMs in clusters at the target site + site_clusters = netbox.virtualization.get_clusters(site=TARGET_SITE) + for cluster in site_clusters: + cluster_vms = netbox.virtualization.get_virtual_machines(cluster_id=cluster['id']) + site_device_names.update(vm['name'] for vm in cluster_vms) + + # Get Cacti servers + cursor.execute("SELECT id, base_url FROM CactiServer") + cacti_servers = {} + + for server_id, base_url in cursor.fetchall(): + cacti_servers[server_id] = base_url + + # Get Cacti graphs associated with objects + cursor.execute(""" + SELECT CG.object_id, CG.server_id, CG.graph_id, CG.caption, OBJ.name, OBJ.objtype_id + FROM CactiGraph CG + JOIN Object OBJ ON CG.object_id = OBJ.id + """) + + monitor_count = 0 + for object_id, server_id, graph_id, caption, obj_name, objtype_id in cursor.fetchall(): + if not obj_name: + continue + + obj_name = obj_name.strip() + + # Skip if site filtering is enabled and device is not in target site + if TARGET_SITE and obj_name not in site_device_names: + continue + + # Determine if this is a VM or a device + is_vm = (objtype_id == 1504) # VM objtype_id + + # Find the object in NetBox + if is_vm: + objects = netbox.virtualization.get_virtual_machines(name=obj_name) + else: + objects = netbox.dcim.get_devices(name=obj_name) + + if not objects: + error_log(f"Could not find object {obj_name} to update monitoring data") + continue + + obj = objects[0] + + # Get the Cacti server base URL + base_url = cacti_servers.get(server_id, "") + + # Construct the monitoring URL if we have the base URL + monitoring_url = "" + if base_url and graph_id: + monitoring_url = f"{base_url.rstrip('/')}/graph_view.php?action=tree&select_first=true&graph_id={graph_id}" + + # Update the object with monitoring information + if is_vm: + url = f"http://{NB_HOST}:{NB_PORT}/api/virtualization/virtual-machines/{obj['id']}/" + else: + url = f"http://{NB_HOST}:{NB_PORT}/api/dcim/devices/{obj['id']}/" + + headers = { + 
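+            # Token-based auth against the NetBox REST API; the same headers
+            # are reused for the PATCH below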
"Authorization": f"Token {NB_TOKEN}", + "Content-Type": "application/json" + } + + # Get current data + response = requests.get(url, headers=headers) + if response.status_code != 200: + error_log(f"Error getting object {obj_name}: {response.text}") + continue + + current_data = response.json() + + # Prepare data for update + data = { + "custom_fields": { + "Cacti_Server": base_url, + "Cacti_Graph_ID": str(graph_id), + "Monitoring_URL": monitoring_url + } + } + + # Update the custom fields of existing data + if 'custom_fields' in current_data and current_data['custom_fields']: + for key, value in current_data['custom_fields'].items(): + if key not in data['custom_fields'] and value: + data['custom_fields'][key] = value + + response = requests.patch(url, headers=headers, json=data) + if response.status_code in (200, 201): + monitor_count += 1 + print(f"Updated monitoring information for {obj_name}") + else: + error_log(f"Error updating monitoring for {obj_name}: {response.text}") + + print(f"Monitoring data migration completed. Updated {monitor_count} devices/VMs.") diff --git a/migration/extended/nat.py b/migration/extended/nat.py new file mode 100644 index 0000000..beed01b --- /dev/null +++ b/migration/extended/nat.py @@ -0,0 +1,136 @@ +""" +NAT mapping migration functions +""" +import ipaddress +import requests + +from racktables_netbox_migration.utils import error_log +from racktables_netbox_migration.config import NB_HOST, NB_PORT, NB_TOKEN, TARGET_SITE, IPV4_TAG + +def migrate_nat_mappings(cursor, netbox): + """ + Migrate NAT mapping data from Racktables to NetBox + + Args: + cursor: Database cursor for Racktables + netbox: NetBox client instance + """ + print("\nMigrating NAT mappings...") + + # Get existing IP addresses from NetBox + existing_ips = {} + for ip in netbox.ipam.get_ip_addresses(): + existing_ips[ip['address']] = ip['id'] + + # Get NAT data from Racktables + cursor.execute(""" + SELECT proto, localip, localport, remoteip, remoteport, description + FROM IPv4NAT + """) + + nat_entries = cursor.fetchall() + nat_count = 0 + + for proto, localip, localport, remoteip, remoteport, description in nat_entries: + # Format IPs with CIDR notation + local_ip_cidr = f"{str(ipaddress.ip_address(localip))}/32" + remote_ip_cidr = f"{str(ipaddress.ip_address(remoteip))}/32" + + # If site filtering is enabled, check if these IPs are associated with devices in the target site + if TARGET_SITE: + # This would require additional lookup to check device associations + # Skip implementation for brevity as it would require complex queries + pass + + # Check if IPs exist in NetBox + if local_ip_cidr in existing_ips and remote_ip_cidr in existing_ips: + local_ip_id = existing_ips[local_ip_cidr] + remote_ip_id = existing_ips[remote_ip_cidr] + + # Update each IP with info about its NAT relationship + for ip_id, ip_cidr, nat_type, match_ip in [ + (local_ip_id, local_ip_cidr, "Source NAT" if localport else "Static NAT", remote_ip_cidr), + (remote_ip_id, remote_ip_cidr, "Destination NAT" if remoteport else "Static NAT", local_ip_cidr) + ]: + # Update IP with custom fields + url = f"http://{NB_HOST}:{NB_PORT}/api/ipam/ip-addresses/{ip_id}/" + headers = { + "Authorization": f"Token {NB_TOKEN}", + "Content-Type": "application/json" + } + + # Get current data + response = requests.get(url, headers=headers) + if response.status_code != 200: + error_log(f"Error getting IP {ip_cidr}: {response.text}") + continue + + current_data = response.json() + + # Prepare port info if present + port_info = "" + if 
localport or remoteport: + port_info = f" (Port mapping: {localport or '*'} → {remoteport or '*'})" + + # Update description to include NAT info + description_text = current_data.get('description', '') + if description_text: + description_text += f"\nNAT: {description}" + else: + description_text = f"NAT: {description}" if description else "NAT mapping" + + data = { + "description": description_text[:200], + "custom_fields": { + "NAT_Type": nat_type, + "NAT_Match_IP": match_ip + port_info + } + } + + # Update the custom fields of existing data + if 'custom_fields' in current_data and current_data['custom_fields']: + for key, value in current_data['custom_fields'].items(): + if key not in data['custom_fields']: + data['custom_fields'][key] = value + + response = requests.patch(url, headers=headers, json=data) + if response.status_code in (200, 201): + nat_count += 1 + print(f"Updated NAT information for IP {ip_cidr}") + else: + error_log(f"Error updating NAT for IP {ip_cidr}: {response.text}") + else: + # Create IPs if they don't exist + for ip_int, ip_cidr, nat_type, match_ip_int, match_ip_cidr in [ + (localip, local_ip_cidr, "Source NAT" if localport else "Static NAT", remoteip, remote_ip_cidr), + (remoteip, remote_ip_cidr, "Destination NAT" if remoteport else "Static NAT", localip, local_ip_cidr) + ]: + if ip_cidr not in existing_ips: + # Check if IP exists in Racktables + cursor.execute("SELECT name FROM IPv4Address WHERE ip = %s", (ip_int,)) + ip_name = cursor.fetchone() + + port_info = "" + if localport or remoteport: + port_info = f" (Port mapping: {localport or '*'} → {remoteport or '*'})" + + # Create the IP address in NetBox + try: + new_ip = netbox.ipam.create_ip_address( + address=ip_cidr, + description=f"NAT: {description}" if description else "NAT mapping", + custom_fields={ + "IP_Name": ip_name[0] if ip_name else "", + "NAT_Type": nat_type, + "NAT_Match_IP": match_ip_cidr + port_info + }, + tags=[{'name': IPV4_TAG}] + ) + + existing_ips[ip_cidr] = new_ip['id'] + nat_count += 1 + print(f"Created IP {ip_cidr} with NAT information") + except Exception as e: + error_log(f"Error creating IP {ip_cidr}: {str(e)}") + + print(f"NAT mappings migration completed. 
Updated {nat_count} IP addresses.") diff --git a/migration/extended/patch_cables.py b/migration/extended/patch_cables.py new file mode 100644 index 0000000..dea8dca --- /dev/null +++ b/migration/extended/patch_cables.py @@ -0,0 +1,262 @@ +""" +Patch cable migration functions with comprehensive database and duplicate handling +""" +import requests +from slugify import slugify + +from migration.utils import pickleLoad, error_log +from migration.config import NB_HOST, NB_PORT, NB_TOKEN, TARGET_SITE + +def migrate_patch_cables(cursor, netbox): + """ + Migrate patch cable data from Racktables to NetBox with robust handling + + Args: + cursor: Database cursor for Racktables + netbox: NetBox client instance + """ + print("\nMigrating patch cable data...") + + # First check if required tables exist + required_tables = ["PatchCableConnector", "PatchCableType", "Link", "PatchCableHeap"] + missing_tables = [] + + for table in required_tables: + try: + cursor.execute(f"SHOW TABLES LIKE '{table}'") + if not cursor.fetchone(): + missing_tables.append(table) + except Exception as e: + print(f"Error checking table {table}: {e}") + missing_tables.append(table) + + if missing_tables: + print(f"The following required tables are missing: {', '.join(missing_tables)}") + print("Cannot proceed with patch cable migration") + return + + # Flexible column detection function with additional logging + def get_column_name(table, preferred_columns): + try: + cursor.execute(f"SHOW COLUMNS FROM {table}") + columns = [column['Field'] for column in cursor.fetchall()] + + print(f"Available columns in {table}: {', '.join(columns)}") + + for pref_col in preferred_columns: + if pref_col in columns: + print(f"Selected column '{pref_col}' for {table}") + return pref_col + + # Use first column that has 'name' in it + for col in columns: + if 'name' in col.lower(): + print(f"Selected column '{col}' (contains 'name') for {table}") + return col + + # Fall back to first column + if columns: + print(f"Falling back to first column '{columns[0]}' for {table}") + return columns[0] + + print(f"No suitable column found for {table}") + return None + except Exception as e: + print(f"Error getting columns for {table}: {e}") + return None + + # Detect column names + connector_name_col = get_column_name('PatchCableConnector', + ['connector_name', 'name', 'type', 'label']) + type_name_col = get_column_name('PatchCableType', + ['pctype_name', 'name', 'type', 'label']) + + if not (connector_name_col and type_name_col): + print("Cannot proceed with patch cable migration due to schema issues") + return + + # Dictionary to map patch cable connector types and types + connector_types = {} + cable_types = {} + + # Load connector types with error handling + try: + cursor.execute(f"SELECT id, {connector_name_col} FROM PatchCableConnector") + for row in cursor.fetchall(): + connector_types[row['id']] = row[connector_name_col] + print(f"Loaded {len(connector_types)} connector types") + except Exception as e: + error_log(f"Error loading PatchCableConnector: {str(e)}") + print(f"Error loading connector types: {e}") + print("Continuing with empty connector types dictionary") + + # Load cable types with error handling + try: + cursor.execute(f"SELECT id, {type_name_col} FROM PatchCableType") + for row in cursor.fetchall(): + cable_types[row['id']] = row[type_name_col] + print(f"Loaded {len(cable_types)} cable types") + except Exception as e: + error_log(f"Error loading PatchCableType: {str(e)}") + print(f"Error loading cable types: {e}") + print("Continuing 
with empty cable types dictionary") + + # Site filtering + site_device_ids = [] + if TARGET_SITE: + site_devices = netbox.dcim.get_devices(site=TARGET_SITE) + site_device_ids = [device['id'] for device in site_devices] + + if not site_device_ids: + print("No devices found in the specified site, skipping patch cable migration") + return + + # Get existing cables to prevent duplicates + existing_cables = set() + for cable in netbox.dcim.get_cables(): + if cable['termination_a_type'] == 'dcim.interface' and cable['termination_b_type'] == 'dcim.interface': + # Create a unique identifier for the cable + cable_key = ( + min(cable['termination_a_id'], cable['termination_b_id']), + max(cable['termination_a_id'], cable['termination_b_id']) + ) + existing_cables.add(cable_key) + + # Check PatchCableHeap schema to determine field names + try: + cursor.execute("SHOW COLUMNS FROM PatchCableHeap") + pch_columns = {column['Field'].lower(): column['Field'] for column in cursor.fetchall()} + print(f"PatchCableHeap columns: {', '.join(pch_columns.keys())}") + except Exception as e: + error_log(f"Error getting PatchCableHeap schema: {str(e)}") + print(f"Error getting PatchCableHeap schema: {e}") + pch_columns = {} + + # Determine the correct field names + pctype_id_field = pch_columns.get('pctype_id', 'pctype_id') + end1_conn_id_field = pch_columns.get('end1_conn_id', 'end1_conn_id') + end2_conn_id_field = pch_columns.get('end2_conn_id', 'end2_conn_id') + length_field = pch_columns.get('length', 'length') + color_field = pch_columns.get('color', 'color') if 'color' in pch_columns else None + description_field = pch_columns.get('description', 'description') + + # Get connections from the Link table + try: + # Build query based on available columns + query = f""" + SELECT L.porta, L.portb, L.cable, C.{pctype_id_field}, + C.{end1_conn_id_field}, C.{end2_conn_id_field}, + C.{length_field}""" + + # Add color if it exists + if color_field: + query += f", C.{color_field}" + + # Add description if it exists + query += f", C.{description_field} FROM Link L JOIN PatchCableHeap C ON L.cable = C.id WHERE L.cable IS NOT NULL" + + cursor.execute(query) + link_connections = cursor.fetchall() + print(f"Found {len(link_connections)} cable connections") + except Exception as e: + error_log(f"Error querying Link table: {str(e)}") + print(f"Error querying Link table: {e}") + link_connections = [] + + connection_ids = pickleLoad('connection_ids', dict()) + cable_count = 0 + + for connection in link_connections: + try: + porta_id, portb_id, cable_id = connection['porta'], connection['portb'], connection['cable'] + + # Skip if interface IDs are not mapped + if porta_id not in connection_ids or portb_id not in connection_ids: + continue + + netbox_id_a = connection_ids[porta_id] + netbox_id_b = connection_ids[portb_id] + + # Site filtering check + if TARGET_SITE and (netbox_id_a not in site_device_ids and netbox_id_b not in site_device_ids): + continue + + # Create unique cable key + cable_key = (min(netbox_id_a, netbox_id_b), max(netbox_id_a, netbox_id_b)) + + # Skip if cable already exists + if cable_key in existing_cables: + continue + + # Extract cable details - handle schema differences + try: + pctype_id = connection[pctype_id_field] + end1_conn_id = connection[end1_conn_id_field] + end2_conn_id = connection[end2_conn_id_field] + length = connection[length_field] + color = connection[color_field] if color_field and color_field in connection else None + description = connection[description_field] + except (KeyError, 
IndexError): + # Fallback to numerical indices if column names don't match + pctype_id = connection.get(3, None) + end1_conn_id = connection.get(4, None) + end2_conn_id = connection.get(5, None) + length = connection.get(6, None) + color = connection.get(7, None) if color_field else None + description = connection.get(8 if color_field else 7, None) + + # Get cable type and connector details + cable_type = cable_types.get(pctype_id, "Unknown") if pctype_id else "Unknown" + connector_a = connector_types.get(end1_conn_id, "Unknown") if end1_conn_id else "Unknown" + connector_b = connector_types.get(end2_conn_id, "Unknown") if end2_conn_id else "Unknown" + + try: + # Create cable connection + cable = netbox.dcim.create_interface_connection( + netbox_id_a, + netbox_id_b, + 'dcim.interface', + 'dcim.interface', + label=f"{cable_type}-{color}" if color else cable_type, + color=color if color else None, + length=length if length else None, + length_unit="m", + description=description if description else None + ) + + # Update cable with custom fields + url = f"http://{NB_HOST}:{NB_PORT}/api/dcim/cables/{cable['id']}/" + headers = { + "Authorization": f"Token {NB_TOKEN}", + "Content-Type": "application/json" + } + + data = { + "custom_fields": { + "Patch_Cable_Type": cable_type, + "Patch_Cable_Connector_A": connector_a, + "Patch_Cable_Connector_B": connector_b, + "Cable_Color": color if color else "", + "Cable_Length": str(length) if length else "" + } + } + + response = requests.patch(url, headers=headers, json=data) + + if response.status_code in (200, 201): + cable_count += 1 + print(f"Created cable between interfaces {netbox_id_a} and {netbox_id_b}") + + # Mark as processed + existing_cables.add(cable_key) + else: + error_log(f"Error updating cable: {response.text}") + + except Exception as e: + error_log(f"Error creating cable connection: {str(e)}") + + except Exception as e: + error_log(f"Error processing connection: {str(e)}") + continue + + print(f"Completed patch cable migration. Created {cable_count} cables.") diff --git a/migration/extended/services.py b/migration/extended/services.py new file mode 100644 index 0000000..7d05f51 --- /dev/null +++ b/migration/extended/services.py @@ -0,0 +1,362 @@ +""" +Virtual services migration functions +""" +from migration.utils import error_log +from migration.config import TARGET_SITE + +def migrate_virtual_services(cursor, netbox): + """ + Migrate virtual services data from Racktables to NetBox + + Args: + cursor: Database cursor for Racktables + netbox: NetBox client instance + """ + print("\nMigrating virtual services...") + + # Check if VS table exists + try: + cursor.execute("SHOW TABLES LIKE 'VS'") + vs_exists = cursor.fetchone() is not None + + if not vs_exists: + print("VS table not found in database. Skipping virtual services migration.") + return + + # Get columns for VS table + cursor.execute("SHOW COLUMNS FROM VS") + vs_columns = [col['Field'] for col in cursor.fetchall()] + print(f"VS table columns: {', '.join(vs_columns)}") + + # Check for required columns + if 'vs_id' not in vs_columns: + print("VS table doesn't have 'vs_id' column. 
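Duplicate cables are filtered with an order-independent key over the two interface IDs, so a link recorded as (A, B) in Racktables is not re-created as (B, A). A minimal sketch of the idea (the IDs are illustrative):

```python
def cable_key(termination_a_id, termination_b_id):
    """Normalize so (A, B) and (B, A) identify the same cable."""
    return (min(termination_a_id, termination_b_id),
            max(termination_a_id, termination_b_id))

existing_cables = {cable_key(42, 7)}
assert cable_key(7, 42) in existing_cables  # direction does not matter
```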
Looking for alternative primary key.")
+            # Look for potential primary key columns
+            primary_key = 'id' if 'id' in vs_columns else vs_columns[0]
+            print(f"Using {primary_key} as primary key for VS table")
+        else:
+            primary_key = 'vs_id'
+
+        # Check if name column exists
+        if 'name' in vs_columns:
+            name_col = 'name'
+        else:
+            # Try to find a name-like column
+            name_cols = [col for col in vs_columns if 'name' in col.lower()]
+            if name_cols:
+                name_col = name_cols[0]
+            else:
+                name_col = vs_columns[1] if len(vs_columns) > 1 else None
+
+        if not name_col:
+            print("No suitable name column found in VS table. Skipping virtual services migration.")
+            return
+        print(f"Using {name_col} as name column for VS table")
+
+        # Check if description column exists
+        description_col = None
+        for col in vs_columns:
+            if 'description' in col.lower() or 'comment' in col.lower() or 'desc' in col.lower():
+                description_col = col
+                break
+
+        if description_col:
+            print(f"Using {description_col} as description column for VS table")
+        else:
+            print("No description column found in VS table. Using empty descriptions.")
+
+    except Exception as e:
+        error_log(f"Database error checking VS table: {str(e)}")
+        print(f"Database error: {e}")
+        print("Skipping virtual services migration.")
+        return
+
+    # Get device names in target site if site filtering is enabled
+    site_device_names = set()
+    if TARGET_SITE:
+        print(f"Filtering services for site: {TARGET_SITE}")
+        site_devices = netbox.dcim.get_devices(site=TARGET_SITE)
+        site_device_names = set(device['name'] for device in site_devices)
+
+        # Also include VMs in clusters at the target site
+        site_clusters = netbox.virtualization.get_clusters(site=TARGET_SITE)
+        for cluster in site_clusters:
+            cluster_vms = netbox.virtualization.get_virtual_machines(cluster_id=cluster['id'])
+            site_device_names.update(vm['name'] for vm in cluster_vms)
+
+    # Get existing services to avoid duplicates
+    existing_services = {}
+    for service in netbox.ipam.get_services():
+        device_id = service.get('device_id') or service.get('virtual_machine_id')
+        if device_id:
+            key = f"{device_id}-{service['name']}-{','.join(map(str, service['ports']))}"
+            existing_services[key] = service['id']
+
+    # Get VS data from Racktables with dynamic column names
+    try:
+        query = f"SELECT {primary_key}, {name_col}"
+        if description_col:
+            query += f", {description_col}"
+        query += " FROM VS"
+
+        cursor.execute(query)
+        vs_data = cursor.fetchall()
+        print(f"Found {len(vs_data)} virtual services")
+    except Exception as e:
+        error_log(f"Error querying VS table: {str(e)}")
+        print(f"Error querying VS table: {e}")
+        return
+
+    # Check for VSEnabledIPs table or alternatives
+    vsenabled_exists = False
+    vsenabled_table = None
+    vs_id_col = None
+    ip_id_col = None
+
+    try:
+        cursor.execute("SHOW TABLES LIKE 'VSEnabledIPs'")
+        if cursor.fetchone():
+            vsenabled_exists = True
+            vsenabled_table = "VSEnabledIPs"
+            vs_id_col = "vs_id"
+            ip_id_col = "ip_id"
+            print("Found VSEnabledIPs table")
+        else:
+            # Look for alternative tables; DictCursor returns SHOW TABLES rows
+            # as dicts keyed by a generated header, so read the value via .values()
+            cursor.execute("SHOW TABLES LIKE '%VS%IP%'")
+            alt_tables = [list(row.values())[0] for row in cursor.fetchall()]
+
+            if alt_tables:
+                print(f"Found alternative IP tables: {', '.join(alt_tables)}")
+                vsenabled_exists = True
+                vsenabled_table = alt_tables[0]
+
+                # Get columns for this table
+                cursor.execute(f"SHOW COLUMNS FROM {vsenabled_table}")
+                vsenabled_columns = [col['Field'] for col in cursor.fetchall()]
+                print(f"{vsenabled_table} columns: {', '.join(vsenabled_columns)}")
+
+                # Find vs_id-like column
+                vs_cols = [col for col in vsenabled_columns if 'vs' in col.lower() and ('id' in col.lower() or 'key' in col.lower())]
+                if vs_cols:
+                    vs_id_col = vs_cols[0]
+                else:
+                    vs_id_col = vsenabled_columns[0]
+                print(f"Using {vs_id_col} as VS ID column")
+
+                # Find ip_id-like column
+                ip_cols = [col for col in vsenabled_columns if 'ip' in col.lower() and ('id' in col.lower() or 'key' in col.lower())]
+                if ip_cols:
+                    ip_id_col = ip_cols[0]
+                else:
+                    ip_id_col = vsenabled_columns[1] if len(vsenabled_columns) > 1 else None
+
+                if not ip_id_col:
+                    vsenabled_exists = False
+                    print(f"Couldn't identify IP ID column in {vsenabled_table}. Skipping IP lookup.")
+                else:
+                    print(f"Using {ip_id_col} as IP ID column")
+            else:
+                print("No suitable VS IP association tables found. Skipping IP lookup.")
+    except Exception as e:
+        error_log(f"Error checking VSEnabledIPs table: {str(e)}")
+        print(f"Error checking VSEnabledIPs table: {e}")
+        vsenabled_exists = False
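One subtlety in the table probing above: with a PyMySQL `DictCursor`, `SHOW TABLES LIKE ...` rows come back as dictionaries keyed by a server-generated header (something like `Tables_in_test1 (%VS%IP%)`), so positional access such as `row[0]` raises `KeyError`. Reading the single value through `.values()` sidesteps the unpredictable key; a hedged sketch of the form used here:

```python
def show_tables_like(cursor, pattern):
    """Return table names matching `pattern`, regardless of the header name."""
    cursor.execute("SHOW TABLES LIKE %s", (pattern,))
    return [list(row.values())[0] for row in cursor.fetchall()]
```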
+    # Check for VSPorts table or alternatives
+    vsports_exists = False
+    vsports_table = None
+    vs_id_col_ports = None
+    port_name_col = None
+
+    try:
+        cursor.execute("SHOW TABLES LIKE 'VSPorts'")
+        if cursor.fetchone():
+            vsports_exists = True
+            vsports_table = "VSPorts"
+            vs_id_col_ports = "vs_id"
+            port_name_col = "port_name"
+            print("Found VSPorts table")
+        else:
+            # Look for alternative tables (read rows via .values(), as above)
+            cursor.execute("SHOW TABLES LIKE '%VS%Port%'")
+            alt_tables = [list(row.values())[0] for row in cursor.fetchall()]
+
+            if alt_tables:
+                print(f"Found alternative port tables: {', '.join(alt_tables)}")
+                vsports_exists = True
+                vsports_table = alt_tables[0]
+
+                # Get columns for this table
+                cursor.execute(f"SHOW COLUMNS FROM {vsports_table}")
+                vsports_columns = [col['Field'] for col in cursor.fetchall()]
+                print(f"{vsports_table} columns: {', '.join(vsports_columns)}")
+
+                # Find vs_id-like column
+                vs_cols = [col for col in vsports_columns if 'vs' in col.lower() and ('id' in col.lower() or 'key' in col.lower())]
+                if vs_cols:
+                    vs_id_col_ports = vs_cols[0]
+                else:
+                    vs_id_col_ports = vsports_columns[0]
+                print(f"Using {vs_id_col_ports} as VS ID column for ports")
+
+                # Find port_name-like column
+                port_cols = [col for col in vsports_columns if 'port' in col.lower() and 'name' in col.lower()]
+                if port_cols:
+                    port_name_col = port_cols[0]
+                else:
+                    port_name_col = vsports_columns[1] if len(vsports_columns) > 1 else None
+
+                if not port_name_col:
+                    vsports_exists = False
+                    print(f"Couldn't identify port name column in {vsports_table}. Will use default port (80).")
+                else:
+                    print(f"Using {port_name_col} as port name column")
+            else:
+                print("No suitable VS port tables found. Will use default port (80).")
+    except Exception as e:
+        error_log(f"Error checking VSPorts table: {str(e)}")
+        print(f"Error checking VSPorts table: {e}")
+        vsports_exists = False
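The loop below coerces free-form Racktables port labels into integers, falling back to the first digit run in the string and finally to port 80. The same parsing isolated as a helper (the labels are illustrative):

```python
import re

def parse_port_number(port_name, default=None):
    """Coerce a port label like '443' or 'tcp/8080' to an int, else default."""
    try:
        return int(port_name)
    except (ValueError, TypeError):
        pass
    if isinstance(port_name, str):
        match = re.search(r'\d+', port_name)
        if match:
            return int(match.group())
    return default

assert parse_port_number("443") == 443
assert parse_port_number("tcp/8080-frontend") == 8080
assert parse_port_number(None, default=80) == 80
```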
+    service_count = 0
+
+    for vs_row in vs_data:
+        vs_id = vs_row[primary_key]
+        vs_name = vs_row[name_col] or f"Service-{vs_id}"
+        vs_description = vs_row[description_col] if description_col and description_col in vs_row else ""
+
+        # Get the enabled IPs for this VS if available
+        vs_ips = []
+        if vsenabled_exists:
+            try:
+                ip_query = f"""
+                SELECT IP.ip, IP.name, OBJ.name, OBJ.objtype_id
+                FROM {vsenabled_table} VS
+                JOIN IPv4Address IP ON VS.{ip_id_col} = IP.id
+                LEFT JOIN IPv4Allocation ALLOC ON IP.ip = ALLOC.ip
+                LEFT JOIN Object OBJ ON ALLOC.object_id = OBJ.id
+                WHERE VS.{vs_id_col} = %s
+                """
+                cursor.execute(ip_query, (vs_id,))
+                vs_ips = cursor.fetchall()
+                print(f"Found {len(vs_ips)} IP associations for VS {vs_id} ({vs_name})")
+            except Exception as e:
+                error_log(f"Error getting IPs for VS {vs_id}: {str(e)}")
+                print(f"Error getting IPs for VS {vs_id}: {e}")
+
+        # Get the enabled ports for this VS if available
+        port_numbers = []
+        if vsports_exists:
+            try:
+                port_query = f"""
+                SELECT {port_name_col}
+                FROM {vsports_table}
+                WHERE {vs_id_col_ports} = %s
+                """
+                cursor.execute(port_query, (vs_id,))
+                for port_row in cursor.fetchall():
+                    # DictCursor rows are keyed by column name, not position
+                    port_name = port_row[port_name_col]
+                    try:
+                        port_number = int(port_name)
+                        port_numbers.append(port_number)
+                    except (ValueError, TypeError):
+                        # Fall back to the first digit run in the string
+                        if isinstance(port_name, str):
+                            import re
+                            matches = re.findall(r'\d+', port_name)
+                            if matches:
+                                port_numbers.append(int(matches[0]))
+            except Exception as e:
+                error_log(f"Error getting ports for VS {vs_id}: {str(e)}")
+                print(f"Error getting ports for VS {vs_id}: {e}")
+
+        if not port_numbers:
+            # Default port if none specified
+            port_numbers = [80]
+
+        # Default protocol to TCP if we don't have specific info
+        protocol = "tcp"
+
+        # Create a service for each associated device or VM
+        if vs_ips:
+            for ip_row in vs_ips:
+                # PyMySQL's DictCursor disambiguates the duplicate 'name'
+                # columns by prefixing the second with its table alias
+                ip = ip_row['ip']
+                ip_name = ip_row['name']
+                obj_name = ip_row['OBJ.name']
+                objtype_id = ip_row['objtype_id']
+
+                if not obj_name:
+                    continue
+
+                obj_name = obj_name.strip()
+
+                # Skip if site filtering is enabled and device is not in target site
+                if TARGET_SITE and obj_name not in site_device_names:
+                    continue
+
+                # Determine if this is a VM or a device
+                is_vm = (objtype_id == 1504)  # VM objtype_id
+
+                # Create a unique service name including IP info
+                service_name = f"{vs_name}-{ip_name}" if ip_name else vs_name
+
+                # Skip if service already exists
+                service_key = ""
+                if is_vm:
+                    vm = netbox.virtualization.get_virtual_machines(name=obj_name)
+                    if vm:
+                        service_key = f"{vm[0]['id']}-{service_name}-{','.join(map(str, port_numbers))}"
+                        if service_key in existing_services:
+                            continue
+                else:
+                    device = netbox.dcim.get_devices(name=obj_name)
+                    if device:
+                        service_key = f"{device[0]['id']}-{service_name}-{','.join(map(str, port_numbers))}"
+                        if service_key in existing_services:
+                            continue
+
+                try:
+                    # Create the service
+                    if is_vm:
+                        vm = netbox.virtualization.get_virtual_machines(name=obj_name)
+                        if vm:
+                            service = netbox.virtualization.create_service(
+                                virtual_machine=obj_name,
+                                name=service_name,
+                                ports=port_numbers,
+                                protocol=protocol,
+                                description=vs_description[:200] if vs_description else "",
+                                custom_fields={
+                                    "VS_Enabled": True,
+                                    "VS_Type": "Virtual Service",
+                                    "VS_Protocol": protocol
+                                }
+                            )
+                            service_count += 1
+                            print(f"Created service {service_name} for VM {obj_name}")
+                    else:
+                        device = netbox.dcim.get_devices(name=obj_name)
+                        if device:
+                            service = netbox.ipam.create_service(
+                                device=obj_name,
+                                name=service_name,
+                                ports=port_numbers,
+                                protocol=protocol,
+                                description=vs_description[:200] if vs_description else "",
+                                custom_fields={
+                                    "VS_Enabled": True,
+                                    "VS_Type": "Virtual Service",
+                                    "VS_Protocol": protocol
+                                }
+                            )
+                            service_count += 1
+                            print(f"Created service {service_name} for device {obj_name}")
+                except Exception as e:
+                    error_log(f"Error creating service {service_name}: {str(e)}")
+        else:
+            # Without IP associations there is nothing to attach the service to
+            print(f"No IP associations found for VS {vs_id} ({vs_name}). Skipping service creation.")
+
+    print(f"Virtual services migration completed. Created {service_count} services.")
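Services are deduplicated with a composite key of parent object ID, service name, and port list, mirroring the `existing_services` index built near the top of this function. A minimal sketch (the values are illustrative):

```python
def service_key(parent_id, name, ports):
    return f"{parent_id}-{name}-{','.join(map(str, ports))}"

existing_services = {service_key(17, "web-vip", [80, 443]): 101}
assert service_key(17, "web-vip", [80, 443]) in existing_services
```

Note that the ports must arrive in a consistent order for the key to match; the code uses the list exactly as returned by the port lookup.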
for VM {obj_name}") + else: + device = netbox.dcim.get_devices(name=obj_name) + if device: + service = netbox.ipam.create_service( + device=obj_name, + name=service_name, + ports=port_numbers, + protocol=protocol, + description=vs_description[:200] if vs_description else "", + custom_fields={ + "VS_Enabled": True, + "VS_Type": "Virtual Service", + "VS_Protocol": protocol + } + ) + service_count += 1 + print(f"Created service {service_name} for device {obj_name}") + except Exception as e: + error_log(f"Error creating service {service_name}: {str(e)}") + else: + # If no IPs found, create a service with the VS name only + print(f"No IP associations found for VS {vs_id} ({vs_name}). Skipping service creation.") + + print(f"Virtual services migration completed. Created {service_count} services.") diff --git a/migration/interfaces.py b/migration/interfaces.py new file mode 100644 index 0000000..12702e7 --- /dev/null +++ b/migration/interfaces.py @@ -0,0 +1,239 @@ +""" +Interface creation and management functions +""" +import time + +from racktables_netbox_migration.utils import ( + get_db_connection, get_cursor, pickleLoad, pickleDump, error_log +) +from racktables_netbox_migration.db import change_interface_name +from racktables_netbox_migration.config import TARGET_SITE + +def get_interfaces(netbox): + """ + Retrieve all interfaces from NetBox with pagination + + This function retrieves interfaces from NetBox using pagination to handle + large numbers of interfaces. It caches the results to avoid repeated queries. + + Args: + netbox: NetBox client instance + + Returns: + list: A list of interface objects + """ + interfaces = [] + interfaces_file = "interfaces" + + # First try to load cached interfaces + cached_interfaces = pickleLoad(interfaces_file, []) + if cached_interfaces: + print(f"Loaded {len(cached_interfaces)} interfaces from cache") + return cached_interfaces + + print("Fetching interfaces from NetBox...") + limit = 500 + offset = 0 + + try: + while True: + ret = netbox.dcim.get_interfaces_custom(limit=limit, offset=offset) + if not ret: + # No more interfaces to fetch + break + + interfaces.extend(ret) + offset += limit + print(f"Added {len(ret)} interfaces, total {len(interfaces)}") + except Exception as e: + error_log(f"Error retrieving interfaces: {str(e)}") + print(f"Error retrieving interfaces: {str(e)}") + + print(f"Total interfaces fetched: {len(interfaces)}") + + # Cache the result for later use + pickleDump(interfaces_file, interfaces) + return interfaces + +def create_interfaces(netbox): + """ + Create interfaces for devices in NetBox + + Args: + netbox: NetBox client instance + """ + print("Creating interfaces for devices") + + # Load device data + global_physical_object_ids = pickleLoad("global_physical_object_ids", set()) + global_non_physical_object_ids = pickleLoad("global_non_physical_object_ids", set()) + + # Filter devices by site if site filtering is enabled + if TARGET_SITE: + site_devices = set(device['id'] for device in netbox.dcim.get_devices(site=TARGET_SITE)) + filtered_physical = [] + for device_name, racktables_id, netbox_id, objtype_id in global_physical_object_ids: + if netbox_id in site_devices: + filtered_physical.append((device_name, racktables_id, netbox_id, objtype_id)) + global_physical_object_ids = filtered_physical + + # Get existing interfaces to avoid duplicates + print("Getting existing interfaces") + start_time = time.time() + + interface_local_names_for_device = {} + interface_netbox_ids_for_device = {} + + for value in 
get_interfaces(netbox): + device_id = value['device']['id'] + + if device_id not in interface_local_names_for_device: + interface_local_names_for_device[device_id] = set() + + interface_local_names_for_device[device_id].add(value['name']) + + if device_id not in interface_netbox_ids_for_device: + interface_netbox_ids_for_device[device_id] = {} + + interface_netbox_ids_for_device[device_id][value['name']] = value['id'] + + print(f"Got existing interfaces in {time.time() - start_time:.2f} seconds") + + # Get port types from Racktables + with get_db_connection() as connection: + with get_cursor(connection) as cursor: + cursor.execute("SELECT id,oif_name FROM PortOuterInterface") + port_outer_interfaces = {row["id"]: row["oif_name"] for row in cursor.fetchall()} + + # Store the SQL id and the netbox interface id for later connections + connection_ids = {} + + # Create interfaces for physical and non-physical devices + interface_counter = 0 + for device_list in (global_physical_object_ids, global_non_physical_object_ids): + for device_name, racktables_object_id, netbox_id, objtype_id in device_list: + # Get ports from Racktables + with get_db_connection() as connection: + with get_cursor(connection) as cursor: + cursor.execute( + "SELECT id,name,iif_id,type,label FROM Port WHERE object_id=%s", + (racktables_object_id,) + ) + ports = cursor.fetchall() + + # Initialize tracking for this device + if netbox_id not in interface_local_names_for_device: + interface_local_names_for_device[netbox_id] = set() + + if netbox_id not in interface_netbox_ids_for_device: + interface_netbox_ids_for_device[netbox_id] = {} + + # Process each port + for port in ports: + Id, interface_name, iif_id, Type, label = port["id"], port["name"], port["iif_id"], port["type"], port["label"] + + # Skip if no interface name + if not interface_name: + continue + + # Get port type + port_outer_interface = port_outer_interfaces.get(Type, "Other") + + # Standardize interface name + interface_name = change_interface_name(interface_name, objtype_id) + + # Skip if interface already exists + if interface_name in interface_local_names_for_device[netbox_id]: + print(f"Interface {interface_name} already exists on {device_name}") + + # Link racktables interface id to netbox interface id + connection_ids[Id] = interface_netbox_ids_for_device[netbox_id][interface_name] + continue + + # Create the interface + try: + added_interface = netbox.dcim.create_interface( + name=interface_name, + interface_type="other", + device_id=netbox_id, + custom_fields={"Device_Interface_Type": port_outer_interface}, + label=label[:200] if label else "" + ) + + # Track created interface + interface_local_names_for_device[netbox_id].add(interface_name) + interface_netbox_ids_for_device[netbox_id][interface_name] = added_interface['id'] + + # Link racktables interface id to netbox interface id + connection_ids[Id] = added_interface['id'] + + interface_counter += 1 + if interface_counter % 500 == 0: + print(f"Created {interface_counter} interfaces") + + except Exception as e: + error_log(f"Error creating interface {interface_name} on {device_name}: {str(e)}") + + # Save connection IDs for creating connections + pickleDump('connection_ids', connection_ids) + print(f"Created {interface_counter} interfaces") + +def create_interface_connections(netbox): + """ + Create connections between interfaces in NetBox + + Args: + netbox: NetBox client instance + """ + print("Creating interface connections") + + # Load connection IDs mapping + connection_ids = 
pickleLoad('connection_ids', dict()) + + # Get connections from Racktables + with get_db_connection() as connection: + with get_cursor(connection) as cursor: + cursor.execute("SELECT porta,portb,cable FROM Link") + connections = cursor.fetchall() + + # Track completed connections + connection_count = 0 + + # Create the connections + for connection in connections: + interface_a, interface_b, cable = connection["porta"], connection["portb"], connection["cable"] + + # Skip if either interface is missing + if interface_a not in connection_ids: + print(f"Interface A (ID: {interface_a}) not found in connection mapping") + continue + + if interface_b not in connection_ids: + print(f"Interface B (ID: {interface_b}) not found in connection mapping") + continue + + # Get NetBox interface IDs + netbox_id_a = connection_ids[interface_a] + netbox_id_b = connection_ids[interface_b] + + # Skip if site filtering is enabled and interfaces are not in target site + if TARGET_SITE: + # This would require additional checks to get the devices for these interfaces + # Implement if needed + pass + + # Create the connection + try: + netbox.dcim.create_interface_connection( + netbox_id_a, + netbox_id_b, + 'dcim.interface', + 'dcim.interface' + ) + connection_count += 1 + if connection_count % 100 == 0: + print(f"Created {connection_count} connections") + except Exception as e: + error_log(f"Error creating connection between {netbox_id_a} and {netbox_id_b}: {str(e)}") + + print(f"Created {connection_count} interface connections") diff --git a/migration/ips.py b/migration/ips.py new file mode 100644 index 0000000..f4ca62d --- /dev/null +++ b/migration/ips.py @@ -0,0 +1,441 @@ +""" +IP-related migration functions +""" +import ipaddress +import random + +from migration.utils import ( + get_db_connection, get_cursor, pickleLoad, pickleDump, + format_prefix_description +) +from migration.db import getTags, change_interface_name +from migration.config import IPV4_TAG, IPV6_TAG, TARGET_TENANT_ID, TARGET_SITE, TARGET_SITE_ID + +def create_ip_networks(netbox, IP, target_site=None): + """ + Create IP networks (prefixes) from Racktables in NetBox + + Args: + netbox: NetBox client instance + IP: "4" for IPv4 or "6" for IPv6 + target_site: Optional site name for filtering + """ + print(f"\nCreating IPv{IP} Networks") + + # Import the status and association helpers + from migration.netbox_status import get_valid_status_choices, determine_prefix_status + from migration.site_tenant import get_site_tenant_params + + # Get valid status choices for prefixes in this NetBox instance + valid_statuses = get_valid_status_choices(netbox, 'prefix') + print(f"Valid prefix statuses in your NetBox: {', '.join(valid_statuses)}") + + # Get site and tenant parameters + association_params = get_site_tenant_params() + + # Load mapping of network IDs to VLAN info + network_id_group_name_id = pickleLoad('network_id_group_name_id', dict()) + + # Get existing prefixes to avoid duplicates + existing_prefixes = set(prefix['prefix'] for prefix in netbox.ipam.get_ip_prefixes()) + + # Retrieve networks from Racktables + with get_db_connection() as connection: + with get_cursor(connection) as cursor: + cursor.execute(f"SELECT id,ip,mask,name,comment FROM IPv{IP}Network") + ipv46Networks = cursor.fetchall() + + # Track created prefixes for debug information + created_count = 0 + skipped_count = 0 + status_counts = {status: 0 for status in valid_statuses} + + for network in ipv46Networks: + Id, ip, mask, prefix_name, comment = network["id"], network["ip"], 
network["mask"], network["name"], network["comment"] + + # Skip the single IP addresses + if (IP == "4" and mask == 32) or (IP == "6" and mask == 128): + continue + + prefix = str(ipaddress.ip_address(ip)) + "/" + str(mask) + + if prefix in existing_prefixes: + skipped_count += 1 + continue + + # Get VLAN info if associated + if Id in network_id_group_name_id: + vlan_name = network_id_group_name_id[Id][1] + vlan_id = network_id_group_name_id[Id][2] + else: + vlan_name = None + vlan_id = None + + # Get tags for this network + tags = getTags(f"ipv{IP}net", Id) + + # Use the improved status determination logic + status = determine_prefix_status(prefix_name, comment, valid_statuses) + status_counts[status] += 1 + + # Format description to include tags and prefix name + description = format_prefix_description(prefix_name, tags, comment) + + # Create the prefix in NetBox + try: + # Prepare all parameters + params = { + 'prefix': prefix, + 'status': status, + 'description': description, + 'vlan': {"id": vlan_id} if vlan_name else None, + 'custom_fields': {'Prefix_Name': prefix_name}, + 'tags': [{'name': IPV4_TAG if IP == "4" else IPV6_TAG}] + tags + } + + # Add site and tenant parameters + params.update(association_params) + + # Create the prefix with all parameters + netbox.ipam.create_ip_prefix(**params) + created_count += 1 + print(f"Created {prefix} - {prefix_name} with status '{status}'") + except Exception as e: + print(f"Error creating {prefix}: {e}") + + # Print final summary of statuses assigned + print(f"IPv{IP} Networks: Created {created_count}, Skipped {skipped_count}") + print("Status assignments:") + for status, count in status_counts.items(): + if count > 0: + print(f" - {status}: {count}") + +def create_ip_allocated(netbox, IP, target_site=None): + """ + Create allocated IP addresses from Racktables in NetBox + + Args: + netbox: NetBox client instance + IP: "4" for IPv4 or "6" for IPv6 + target_site: Optional site name for filtering + """ + print(f"Creating allocated IPv{IP} Addresses") + + # Import the association helper + from migration.site_tenant import get_site_tenant_params + + # Get site and tenant parameters + association_params = get_site_tenant_params() + + # Get existing IPs to avoid duplicates + existing_ips = set(ip['address'] for ip in netbox.ipam.get_ip_addresses()) + + # Get IP names and comments + with get_db_connection() as connection: + with get_cursor(connection) as cursor: + cursor.execute(f"SELECT ip,name,comment FROM IPv{IP}Address") + ip_addresses = cursor.fetchall() + ip_names_comments = dict([(row["ip"], (row["name"], row["comment"])) for row in ip_addresses]) + + # Get IP allocations (associations with devices) + with get_db_connection() as connection: + with get_cursor(connection) as cursor: + cursor.execute(f""" + SELECT ALO.object_id, ALO.ip, ALO.name, ALO.type, OBJ.objtype_id, OBJ.name + FROM IPv{IP}Allocation ALO, Object OBJ + WHERE OBJ.id=ALO.object_id + """) + ip_allocations = cursor.fetchall() + + # Filter by site if site filtering is enabled + site_devices = set() + site_vms = set() + + if target_site: + # First, try to get the site by exact name + site_obj = None + try: + # Get site to determine its ID + sites = list(netbox.dcim.get_sites(name=target_site)) + if sites: + site_obj = sites[0] + site_id = site_obj['id'] + print(f"Found site '{target_site}' with ID: {site_id}") + else: + # Try a case-insensitive search as fallback + all_sites = list(netbox.dcim.get_sites()) + for site in all_sites: + if site['name'].lower() == target_site.lower(): 
+ site_obj = site + site_id = site['id'] + print(f"Found site '{site['name']}' with ID: {site_id} (case-insensitive match)") + break + + if not site_obj: + print(f"Warning: Could not find site '{target_site}'. IP filtering by site will be skipped.") + except Exception as e: + print(f"Error getting site '{target_site}': {e}") + print("IP filtering by site will be skipped.") + + # If we found the site, filter devices and VMs by that site + if site_obj: + try: + # Use the site ID for filtering + site_id = site_obj['id'] + site_devices = set(device['name'] for device in netbox.dcim.get_devices(site_id=site_id)) + print(f"Found {len(site_devices)} devices in site '{site_obj['name']}'") + + # Get VMs in clusters at the target site + site_clusters = netbox.virtualization.get_clusters(site_id=site_id) + for cluster in site_clusters: + cluster_vms = netbox.virtualization.get_virtual_machines(cluster_id=cluster['id']) + site_vms.update(vm['name'] for vm in cluster_vms) + + print(f"Found {len(site_vms)} VMs in site '{site_obj['name']}'") + + # Filter allocations + filtered_allocations = [] + for allocation in ip_allocations: + device_name = allocation["OBJ.name"].strip() if allocation["OBJ.name"] else "" + if device_name in site_devices or device_name in site_vms: + filtered_allocations.append(allocation) + + ip_allocations = filtered_allocations + print(f"Filtered to {len(ip_allocations)} IP allocations for site '{site_obj['name']}'") + except Exception as e: + print(f"Error filtering by site: {e}") + print("Proceeding with all IP allocations.") + + # Process each IP allocation + created_count = 0 + skipped_count = 0 + + for allocation in ip_allocations: + object_id = allocation["object_id"] + ip = allocation["ip"] + interface_name = allocation["name"] + ip_type = allocation["type"] + objtype_id = allocation["objtype_id"] + device_name = allocation["OBJ.name"] + + # Get IP name and comment if available + if ip in ip_names_comments: + ip_name, comment = ip_names_comments[ip] + else: + ip_name, comment = "", "" + + # Skip if device name is missing + if not device_name: + continue + + device_name = device_name.strip() + + # Format the IP address WITHOUT CIDR notation - CHANGED THIS LINE + string_ip = str(ipaddress.ip_address(ip)) + + # Skip if already exists (unless shared IP) + existing_match = False + for existing_ip in existing_ips: + if existing_ip.startswith(string_ip + "/") or existing_ip == string_ip: + existing_match = True + break + + if existing_match and ip_type != "shared": + skipped_count += 1 + continue + + # Set VRRP role if shared IP + use_vrrp_role = "vrrp" if ip_type == "shared" else None + + # Standardize interface name + if interface_name: + interface_name = change_interface_name(interface_name, objtype_id) + else: + interface_name = f"no_RT_name{random.randint(0, 99999)}" + + # Determine if device is VM or physical device + if objtype_id == 1504: # VM + device_or_vm = "vm" + interface_list = netbox.virtualization.get_interfaces(virtual_machine=device_name) + else: + device_or_vm = "device" + interface_list = netbox.dcim.get_interfaces(device=device_name) + + # Try to find matching interface + device_contained_same_interface = False + for name, interface_id in [(interface['name'], interface['id']) for interface in interface_list]: + if interface_name == name: + # Add IP to existing interface + try: + # Prepare all parameters + params = { + 'address': string_ip, + 'role': use_vrrp_role, + 'assigned_object': {'device' if device_or_vm == "device" else "virtual_machine": device_name}, + 
'interface_type': "virtual", + 'assigned_object_type': "dcim.interface" if device_or_vm == "device" else "virtualization.vminterface", + 'assigned_object_id': interface_id, + 'description': comment[:200] if comment else "", + 'custom_fields': {'IP_Name': ip_name, 'Interface_Name': interface_name, 'IP_Type': ip_type}, + 'tags': [{'name': IPV4_TAG if IP == "4" else IPV6_TAG}] + } + + # Add site and tenant parameters + params.update(association_params) + + # Create the IP address with all parameters + netbox.ipam.create_ip_address(**params) + device_contained_same_interface = True + created_count += 1 + print(f"Created IP {string_ip} on {device_name}/{interface_name}") + break + except Exception as e: + print(f"Error creating IP {string_ip} on {device_name}/{interface_name}: {e}") + + # If no matching interface found, create a new virtual interface + if not device_contained_same_interface: + # Find the device ID by name + device_id = None + try: + if device_or_vm == "device": + device_results = list(netbox.dcim.get_devices(name=device_name)) + if device_results: + device_id = device_results[0]['id'] + else: + # Try with case-insensitive search + all_devices = list(netbox.dcim.get_devices()) + for dev in all_devices: + if dev['name'].lower() == device_name.lower(): + device_id = dev['id'] + device_name = dev['name'] # Use the actual name from NetBox + break + else: + vm_results = list(netbox.virtualization.get_virtual_machines(name=device_name)) + if vm_results: + device_id = vm_results[0]['id'] + else: + # Try with case-insensitive search + all_vms = list(netbox.virtualization.get_virtual_machines()) + for vm in all_vms: + if vm['name'].lower() == device_name.lower(): + device_id = vm['id'] + device_name = vm['name'] # Use the actual name from NetBox + break + except Exception as e: + print(f"Error finding device/VM {device_name}: {e}") + + if not device_id: + print(f"Could not find device/VM {device_name} - skipping IP {string_ip}") + continue + + try: + # Create a new virtual interface with site and tenant parameters + if device_or_vm == "device": + interface_params = { + 'name': interface_name, + 'interface_type': "virtual", + 'device_id': device_id, + 'custom_fields': {"Device_Interface_Type": "Virtual"} + } + interface_params.update(association_params) + added_interface = netbox.dcim.create_interface(**interface_params) + else: + interface_params = { + 'name': interface_name, + 'interface_type': "virtual", + 'virtual_machine': device_id, + 'custom_fields': {"VM_Interface_Type": "Virtual"} + } + interface_params.update(association_params) + added_interface = netbox.virtualization.create_interface(**interface_params) + + # Add IP to the new interface + ip_params = { + 'address': string_ip, + 'role': use_vrrp_role, + 'assigned_object_id': added_interface['id'], + 'assigned_object': {"device" if device_or_vm == "device" else "virtual_machine": device_id}, + 'interface_type': "virtual", + 'assigned_object_type': "dcim.interface" if device_or_vm == "device" else "virtualization.vminterface", + 'description': comment[:200] if comment else "", + 'custom_fields': {'IP_Name': ip_name, 'Interface_Name': interface_name, 'IP_Type': ip_type}, + 'tags': [{'name': IPV4_TAG if IP == "4" else IPV6_TAG}] + } + ip_params.update(association_params) + netbox.ipam.create_ip_address(**ip_params) + created_count += 1 + print(f"Created new interface {interface_name} with IP {string_ip} on {device_name}") + except Exception as e: + print(f"Error creating interface or IP: {e}") + + print(f"Allocated IPs: Created 
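Both allocation passes convert Racktables' integer-encoded addresses with the standard-library `ipaddress` module before comparing them against NetBox. For example:

```python
import ipaddress

assert str(ipaddress.ip_address(3232235521)) == "192.168.0.1"  # IPv4 from an int
prefix = f"{ipaddress.ip_address(3232235520)}/24"
assert str(ipaddress.ip_network(prefix)) == "192.168.0.0/24"
```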
{created_count}, Skipped {skipped_count}") + +def create_ip_not_allocated(netbox, IP, target_site=None): + """ + Create non-allocated IP addresses from Racktables in NetBox + + Args: + netbox: NetBox client instance + IP: "4" for IPv4 or "6" for IPv6 + target_site: Optional site name for filtering + """ + print(f"Creating non-allocated IPv{IP} Addresses") + + # Import the association helper + from migration.site_tenant import get_site_tenant_params + + # Get site and tenant parameters + association_params = get_site_tenant_params() + + # Get existing IPs to avoid duplicates + existing_ips = set(ip['address'] for ip in netbox.ipam.get_ip_addresses()) + + # Get IP names and comments + with get_db_connection() as connection: + with get_cursor(connection) as cursor: + cursor.execute(f"SELECT ip,name,comment FROM IPv{IP}Address") + ip_addresses = cursor.fetchall() + + created_count = 0 + skipped_count = 0 + + for ip_data in ip_addresses: + ip = ip_data["ip"] + ip_name = ip_data["name"] + comment = ip_data["comment"] + + # Format the IP address WITHOUT CIDR notation - CHANGED THIS LINE + string_ip = str(ipaddress.ip_address(ip)) + + # Skip if already exists + existing_match = False + for existing_ip in existing_ips: + if existing_ip.startswith(string_ip + "/") or existing_ip == string_ip: + existing_match = True + break + + if existing_match: + skipped_count += 1 + continue + + # Create the IP address in NetBox + try: + # Prepare all parameters + params = { + 'address': string_ip, + 'description': comment[:200] if comment else "", + 'custom_fields': {'IP_Name': ip_name}, + 'tags': [{'name': IPV4_TAG if IP == "4" else IPV6_TAG}] + } + + # Add site and tenant parameters + params.update(association_params) + + # Create the IP address with all parameters + netbox.ipam.create_ip_address(**params) + created_count += 1 + print(f"Created non-allocated IP {string_ip}") + except Exception as e: + print(f"Error creating IP {string_ip}: {e}") + + print(f"Non-allocated IPs: Created {created_count}, Skipped {skipped_count}") diff --git a/migration/migrate.py b/migration/migrate.py new file mode 100644 index 0000000..848b285 --- /dev/null +++ b/migration/migrate.py @@ -0,0 +1,617 @@ +#!/usr/bin/env python3 +""" +Unified migration script for Racktables to NetBox +""" + +import os +import sys +import argparse +import importlib.util +import logging +from datetime import datetime + +# Add parent directory to path to allow running directly +SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__)) +ROOT_DIR = os.path.dirname(SCRIPT_DIR) +sys.path.insert(0, ROOT_DIR) + +# Define BASE_DIR for custom fields setup +BASE_DIR = os.path.dirname(SCRIPT_DIR) + +# Import core modules +from migration.config import * +from migration.utils import * +from migration.db import * +from migration.custom_netbox import NetBox + +def check_config(): + """Verify configuration is not using defaults""" + default_token = "0123456789abcdef0123456789abcdef01234567" + if NB_TOKEN == default_token: + logging.error("Default API token detected in config.py") + logging.error("Please update migration/config.py with your NetBox configuration") + return False + + if DB_CONFIG['password'] == 'secure-password': + logging.error("Default database password detected in config.py") + logging.error("Please update migration/config.py with your database credentials") + return False + + if NB_HOST == "localhost" and NB_PORT == 8000: + logging.warning("Using default NetBox connection settings (localhost:8000)") + logging.warning("If this is not your actual NetBox 
server, update migration/config.py") + + return True + +def parse_arguments(): + """Parse command line arguments""" + parser = argparse.ArgumentParser(description='Migrate data from Racktables to NetBox') + parser.add_argument('--site', type=str, help='Target site name to restrict migration to') + parser.add_argument('--tenant', type=str, help='Target tenant name to restrict migration to') + parser.add_argument('--config', type=str, help='Path to custom configuration file') + parser.add_argument('--basic-only', action='store_true', help='Run only basic migration (no extended components)') + parser.add_argument('--extended-only', action='store_true', help='Run only extended migration components') + parser.add_argument('--skip-custom-fields', action='store_true', help='Skip setting up custom fields') + return parser.parse_args() + +def create_helper_modules(): + """Create required helper modules if they don't exist""" + # Create directory if it doesn't exist + os.makedirs(os.path.join(os.path.dirname(os.path.abspath(__file__)), "migration"), exist_ok=True) + + # Create netbox_status.py module + netbox_status_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), "migration", "netbox_status.py") + if not os.path.exists(netbox_status_path): + with open(netbox_status_path, 'w') as f: + f.write("""\"\"\" +Helper module to determine valid NetBox statuses across versions +Can be imported by other modules to ensure consistent status handling +\"\"\" +import requests +import logging +from migration.config import NB_HOST, NB_PORT, NB_TOKEN, NB_USE_SSL + +# Cache for valid status choices +_valid_status_choices = { + 'prefix': None, + 'ip_address': None +} + +def get_valid_status_choices(netbox, object_type): + \"\"\" + Get valid status choices for a specific object type in NetBox + + Args: + netbox: NetBox client instance + object_type: Type of object to get status choices for (e.g., 'prefix') + + Returns: + list: List of valid status choices + \"\"\" + global _valid_status_choices + + # Return cached choices if available + if _valid_status_choices[object_type]: + return _valid_status_choices[object_type] + + # API endpoints for different object types + endpoints = { + 'prefix': 'ipam/prefixes', + 'ip_address': 'ipam/ip-addresses' + } + + # Determine URL based on object type + if object_type not in endpoints: + logging.error(f"Invalid object type: {object_type}") + return ['active'] # Default fallback + + protocol = "https" if NB_USE_SSL else "http" + url = f"{protocol}://{NB_HOST}:{NB_PORT}/api/{endpoints[object_type]}/choices/" + + try: + headers = {"Authorization": f"Token {NB_TOKEN}"} + response = requests.get(url, headers=headers, verify=NB_USE_SSL) + + if response.status_code == 200: + choices_data = response.json() + # Extract status choices from the response + status_choices = [] + + # Different NetBox versions have different response formats + if 'status' in choices_data: + # Newer NetBox versions + status_choices = [choice[0] for choice in choices_data['status']] + elif 'choices' in choices_data and 'status' in choices_data['choices']: + # Older NetBox versions + status_choices = [choice[0] for choice in choices_data['choices']['status']] + + if status_choices: + # Cache the results + _valid_status_choices[object_type] = status_choices + print(f"Valid {object_type} status choices: {', '.join(status_choices)}") + return status_choices + + logging.error(f"Failed to get status choices for {object_type}: {response.status_code}") + except Exception as e: + logging.error(f"Error getting 
status choices for {object_type}: {str(e)}") + + # Default fallback for common statuses + fallback = ['active', 'container', 'reserved'] + _valid_status_choices[object_type] = fallback + return fallback + +def determine_prefix_status(prefix_name, comment, valid_statuses=None): + \"\"\" + Determine the appropriate NetBox status for a prefix based on its name and comments + + Args: + prefix_name: Name of the prefix from Racktables + comment: Comment for the prefix from Racktables + valid_statuses: List of valid status choices in NetBox + + Returns: + str: Most appropriate status for the prefix + \"\"\" + # Use default statuses if none provided + if valid_statuses is None: + valid_statuses = ['active', 'container', 'reserved', 'deprecated'] + + # Default to 'container' or first valid status if name/comment are empty + if (not prefix_name or prefix_name.strip() == "") and (not comment or comment.strip() == ""): + # For empty prefixes, use container (if available) or first valid status + return 'container' if 'container' in valid_statuses else valid_statuses[0] + + # Determine status based on content patterns + lower_name = prefix_name.lower() if prefix_name else "" + lower_comment = comment.lower() if comment else "" + + # Check for hints that the prefix is specifically reserved + if any(term in lower_name or term in lower_comment for term in + ['reserved', 'hold', 'future', 'planned']): + return 'reserved' if 'reserved' in valid_statuses else 'active' + + # Check for hints that the prefix is deprecated + if any(term in lower_name or term in lower_comment for term in + ['deprecated', 'obsolete', 'old', 'inactive', 'decommissioned']): + return 'deprecated' if 'deprecated' in valid_statuses else 'active' + + # Check for specific hints that the prefix should be a container + if any(term in lower_name or term in lower_comment for term in + ['container', 'parent', 'supernet', 'aggregate']): + return 'container' if 'container' in valid_statuses else 'active' + + # Check for hints that this is available/unused space + if any(term in lower_name or term in lower_comment for term in + ['available', 'unused', 'free', '[here be dragons', '[create network here]', 'unallocated']): + return 'container' if 'container' in valid_statuses else 'active' + + # Check for hints that this is actively used + if any(term in lower_name or term in lower_comment for term in + ['in use', 'used', 'active', 'production', 'allocated']): + return 'active' if 'active' in valid_statuses else valid_statuses[0] + + # When we can't clearly determine from the content, default to 'active' for anything with a name/comment + # This assumes that if someone took the time to name it, it's likely in use + return 'active' if 'active' in valid_statuses else valid_statuses[0] +""") + + # Create site_tenant.py module + site_tenant_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), "migration", "site_tenant.py") + if not os.path.exists(site_tenant_path): + with open(site_tenant_path, 'w') as f: + f.write("""\"\"\" +Add site and tenant associations to all NetBox objects +\"\"\" +import os +import sys +import logging +from slugify import slugify + +def ensure_site_tenant_associations(netbox, site_name, tenant_name): + \"\"\" + Ensures that site and tenant IDs are properly retrieved and set globally + + Args: + netbox: NetBox client instance + site_name: Site name to use + tenant_name: Tenant name to use + + Returns: + tuple: (site_id, tenant_id) or (None, None) if not available + \"\"\" + # Set up logging to capture detailed 
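The keyword heuristics in `determine_prefix_status` map free-form Racktables names and comments onto NetBox statuses. A few representative cases, assuming the default status choices are available and this repository's import path:

```python
from migration.netbox_status import determine_prefix_status

choices = ['active', 'container', 'reserved', 'deprecated']
assert determine_prefix_status("", "", choices) == 'container'      # empty metadata
assert determine_prefix_status("Reserved for lab", "", choices) == 'reserved'
assert determine_prefix_status("Old DMZ", "obsolete", choices) == 'deprecated'
assert determine_prefix_status("Core uplinks", "in use", choices) == 'active'
```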
information + logging.basicConfig( + level=logging.INFO, + format='%(asctime)s - %(levelname)s - %(message)s', + handlers=[ + logging.FileHandler("association_debug.log"), + logging.StreamHandler(sys.stdout) + ] + ) + + site_id = None + tenant_id = None + + # Handle site association + if site_name: + logging.info(f"Looking up site: {site_name}") + try: + sites = list(netbox.dcim.get_sites(name=site_name)) + if sites: + site = sites[0] + # Extract ID based on available format (could be property or dict key) + site_id = site.id if hasattr(site, 'id') else site.get('id') + logging.info(f"Found site '{site_name}' with ID: {site_id}") + else: + # Try to create the site if it doesn't exist + logging.info(f"Site '{site_name}' not found, creating it...") + try: + new_site = netbox.dcim.create_site(site_name, slugify(site_name)) + site_id = new_site.id if hasattr(new_site, 'id') else new_site.get('id') + logging.info(f"Created site '{site_name}' with ID: {site_id}") + except Exception as e: + logging.error(f"Failed to create site '{site_name}': {str(e)}") + except Exception as e: + logging.error(f"Error looking up site '{site_name}': {str(e)}") + + # Handle tenant association + if tenant_name: + logging.info(f"Looking up tenant: {tenant_name}") + try: + tenants = list(netbox.tenancy.get_tenants(name=tenant_name)) + if tenants: + tenant = tenants[0] + # Extract ID based on available format (could be property or dict key) + tenant_id = tenant.id if hasattr(tenant, 'id') else tenant.get('id') + logging.info(f"Found tenant '{tenant_name}' with ID: {tenant_id}") + else: + # Try to create the tenant if it doesn't exist + logging.info(f"Tenant '{tenant_name}' not found, creating it...") + try: + new_tenant = netbox.tenancy.create_tenant(tenant_name, slugify(tenant_name)) + tenant_id = new_tenant.id if hasattr(new_tenant, 'id') else new_tenant.get('id') + logging.info(f"Created tenant '{tenant_name}' with ID: {tenant_id}") + except Exception as e: + logging.error(f"Failed to create tenant '{tenant_name}': {str(e)}") + except Exception as e: + logging.error(f"Error looking up tenant '{tenant_name}': {str(e)}") + + # Save to environment variables for consistent access + if site_id: + os.environ['NETBOX_SITE_ID'] = str(site_id) + if tenant_id: + os.environ['NETBOX_TENANT_ID'] = str(tenant_id) + + return site_id, tenant_id + +def get_site_tenant_params(): + \"\"\" + Get site and tenant parameters for API calls + + Returns: + dict: Parameters for site and tenant to be passed to API calls + \"\"\" + params = {} + + # Get site ID from environment or global variable + site_id = os.environ.get('NETBOX_SITE_ID') + if site_id: + params['site'] = site_id + + # Get tenant ID from environment or global variable + tenant_id = os.environ.get('NETBOX_TENANT_ID') + if tenant_id: + params['tenant'] = tenant_id + + return params +""") + +def verify_site_exists(netbox, site_name): + """Verify that the specified site exists in NetBox and create a matching tag""" + global TARGET_SITE_ID # Global declaration must come first + + if not site_name: + return True + + sites = list(netbox.dcim.get_sites(name=site_name)) + if sites: + print(f"Target site '{site_name}' found - restricting migration to this site") + + # Create a tag with the same name as the site + from migration.utils import create_global_tags + create_global_tags(netbox, [site_name]) + print(f"Created tag '{site_name}' to match site name") + + # Store the site ID in the global config + TARGET_SITE_ID = sites[0].id if hasattr(sites[0], 'id') else sites[0]['id'] + 
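Once the site and tenant IDs are resolved (and cached in environment variables by `ensure_site_tenant_associations`), every create call folds them in via `get_site_tenant_params()`. This is the pattern used throughout `migration/ips.py`; a minimal sketch:

```python
from migration.site_tenant import get_site_tenant_params

params = {'prefix': '10.0.0.0/24', 'status': 'active'}
params.update(get_site_tenant_params())  # adds 'site'/'tenant' IDs when configured
# netbox.ipam.create_ip_prefix(**params)
```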
print(f"Using site ID: {TARGET_SITE_ID}") + + return True + else: + # Create the site if it doesn't exist + try: + from slugify import slugify + print(f"Target site '{site_name}' not found in NetBox, creating it...") + new_site = netbox.dcim.create_site(site_name, slugify(site_name)) + + # Store the site ID in the global config + TARGET_SITE_ID = new_site.id if hasattr(new_site, 'id') else new_site['id'] + print(f"Created site '{site_name}' with ID: {TARGET_SITE_ID}") + + # Create a tag with the same name as the site + from migration.utils import create_global_tags + create_global_tags(netbox, [site_name]) + print(f"Created tag '{site_name}' to match site name") + + return True + except Exception as e: + print(f"ERROR: Failed to create site '{site_name}': {e}") + return False + +def verify_tenant_exists(netbox, tenant_name): + """Verify that the specified tenant exists in NetBox and create a matching tag""" + global TARGET_TENANT_ID # Global declaration must come first + + if not tenant_name: + return True + + tenants = list(netbox.tenancy.get_tenants(name=tenant_name)) + if tenants: + print(f"Target tenant '{tenant_name}' found - restricting migration to this tenant") + + # Create a tag with the same name as the tenant + from migration.utils import create_global_tags + create_global_tags(netbox, [tenant_name]) + print(f"Created tag '{tenant_name}' to match tenant name") + + # Store the tenant ID in the global config + TARGET_TENANT_ID = tenants[0].id if hasattr(tenants[0], 'id') else tenants[0]['id'] + print(f"Using tenant ID: {TARGET_TENANT_ID}") + + return True + else: + # Create the tenant if it doesn't exist + try: + from slugify import slugify + print(f"Target tenant '{tenant_name}' not found in NetBox, creating it...") + new_tenant = netbox.tenancy.create_tenant(tenant_name, slugify(tenant_name)) + + # Store the tenant ID in the global config + TARGET_TENANT_ID = new_tenant.id if hasattr(new_tenant, 'id') else new_tenant['id'] + print(f"Created tenant '{tenant_name}' with ID: {TARGET_TENANT_ID}") + + # Create a tag with the same name as the tenant + from migration.utils import create_global_tags + create_global_tags(netbox, [tenant_name]) + print(f"Created tag '{tenant_name}' to match tenant name") + + return True + except Exception as e: + print(f"ERROR: Failed to create tenant '{tenant_name}': {e}") + return False + +def setup_custom_fields(): + """Run custom fields setup script""" + try: + script_path = os.path.join(BASE_DIR, "migration", "set_custom_fields.py") + spec = importlib.util.spec_from_file_location("set_custom_fields", script_path) + custom_fields = importlib.util.module_from_spec(spec) + spec.loader.exec_module(custom_fields) + custom_fields.main() + return True + except Exception as e: + print(f"Error setting up custom fields: {e}") + return False + +def run_base_migration(netbox): + """Run the basic migration components""" + # Create standard tags + global_tags = set(tag['name'] for tag in netbox.extras.get_tags()) + create_global_tags(netbox, (IPV4_TAG, IPV6_TAG)) + + with get_db_connection() as connection: + with get_cursor(connection) as cursor: + cursor.execute("SELECT tag FROM TagTree") + create_global_tags(netbox, (row["tag"] for row in cursor.fetchall())) + + print("Created tags") + + # Process components according to flags + if CREATE_VLAN_GROUPS: + import migration.vlans as vlans + vlans.create_vlan_groups(netbox) + + if CREATE_VLANS: + import migration.vlans as vlans + vlans.create_vlans(netbox) + + if CREATE_MOUNTED_VMS or CREATE_UNMOUNTED_VMS: + import 
migration.vms as vms + vms.create_vms(netbox, CREATE_MOUNTED_VMS, CREATE_UNMOUNTED_VMS) + + if CREATE_RACKED_DEVICES: + import migration.devices as devices + import migration.sites as sites + sites.create_sites_and_racks(netbox) + devices.create_racked_devices(netbox) + + if CREATE_NON_RACKED_DEVICES: + import migration.devices as devices + devices.create_non_racked_devices(netbox) + + if CREATE_INTERFACES: + import migration.interfaces as interfaces + interfaces.create_interfaces(netbox) + + if CREATE_INTERFACE_CONNECTIONS: + import migration.interfaces as interfaces + interfaces.create_interface_connections(netbox) + + if CREATE_IPV4 or CREATE_IPV6: + import migration.ips as ips + versions = [] + if CREATE_IPV4: + versions.append("4") + if CREATE_IPV6: + versions.append("6") + + for IP in versions: + if CREATE_IP_NETWORKS: + ips.create_ip_networks(netbox, IP, TARGET_SITE) + + if CREATE_IP_ALLOCATED: + ips.create_ip_allocated(netbox, IP, TARGET_SITE) + + if CREATE_IP_NOT_ALLOCATED: + ips.create_ip_not_allocated(netbox, IP, TARGET_SITE) + + print("Base migration completed successfully!") + return True + +def run_extended_migration(netbox): + """Run the additional migration components""" + with get_db_connection() as connection: + with get_cursor(connection) as cursor: + if CREATE_PATCH_CABLES: + from migration.extended.patch_cables import migrate_patch_cables + migrate_patch_cables(cursor, netbox) + + if CREATE_FILES: + from migration.extended.files import migrate_files + migrate_files(cursor, netbox) + + if CREATE_VIRTUAL_SERVICES: + from migration.extended.services import migrate_virtual_services + migrate_virtual_services(cursor, netbox) + + if CREATE_NAT_MAPPINGS: + from migration.extended.nat import migrate_nat_mappings + migrate_nat_mappings(cursor, netbox) + + if CREATE_LOAD_BALANCING: + from migration.extended.load_balancer import migrate_load_balancing + migrate_load_balancing(cursor, netbox) + + if CREATE_MONITORING_DATA: + from migration.extended.monitoring import migrate_monitoring + migrate_monitoring(cursor, netbox) + + # Create available subnets + if CREATE_AVAILABLE_SUBNETS: + # First use the API-based approach to get accurate available prefixes + from migration.extended.available_subnets import create_available_prefixes + create_available_prefixes(netbox) + + # Then use the algorithmic approach as a fallback + from migration.extended.available_subnets import create_available_subnets + create_available_subnets(netbox) + + # Generate IP ranges based on imported IP data + if CREATE_IP_RANGES: + # First create IP ranges from API-detected available prefixes + from migration.extended.ip_ranges import create_ip_ranges_from_available_prefixes + create_ip_ranges_from_available_prefixes(netbox) + + # Then create ranges from algorithmic detection + from migration.extended.ip_ranges import create_ip_ranges + create_ip_ranges(netbox) + + print("Extended migration completed successfully!") + return True + +def main(): + """Main migration function""" + # Parse command line arguments + args = parse_arguments() + + # Set up logging + log_filename = f"migration_{datetime.now().strftime('%Y%m%d_%H%M%S')}.log" + logging.basicConfig( + level=logging.INFO, + format='%(asctime)s - %(levelname)s - %(message)s', + handlers=[ + logging.FileHandler(log_filename), + logging.StreamHandler(sys.stdout) + ] + ) + + # Set target site if specified + if args.site: + global TARGET_SITE + TARGET_SITE = args.site + logging.info(f"Filtering migration for site: {TARGET_SITE}") + + # Set target tenant if 
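Every step in `run_base_migration` and `run_extended_migration` is gated by a `CREATE_*` boolean pulled in from `migration/config.py` through the star import at the top of the script. A sketch of trimming a run down by editing those flags (the names come from the dispatch code above; the values are illustrative):

```python
# In migration/config.py
CREATE_VLAN_GROUPS = True
CREATE_VLANS = True
CREATE_RACKED_DEVICES = True
CREATE_NON_RACKED_DEVICES = False  # skip non-racked gear on this pass
CREATE_INTERFACES = True
CREATE_INTERFACE_CONNECTIONS = True
CREATE_IPV4 = True
CREATE_IPV6 = False                # defer IPv6 until IPv4 is verified
```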
specified + if args.tenant: + global TARGET_TENANT + TARGET_TENANT = args.tenant + logging.info(f"Filtering migration for tenant: {TARGET_TENANT}") + + # Create required helper modules + create_helper_modules() + + # Load custom config if specified + if args.config: + if os.path.exists(args.config): + try: + exec(open(args.config).read()) + logging.info(f"Loaded custom configuration from {args.config}") + except Exception as e: + logging.error(f"Error loading config: {e}") + return False + else: + logging.error(f"Config file not found: {args.config}") + return False + + # Verify configuration is not using defaults + if not check_config(): + return False + + # Attempt database connection + try: + logging.info("Testing database connection...") + with get_db_connection() as connection: + logging.info("Database connection successful") + except Exception as e: + logging.error(f"Database connection failed: {e}") + return False + + # Set up custom fields if not skipped + if not args.skip_custom_fields: + logging.info("Setting up custom fields...") + if not setup_custom_fields(): + logging.warning("Custom fields setup had errors. Continuing with migration...") + + # Initialize NetBox connection + logging.info("Initializing NetBox connection...") + try: + netbox = NetBox(host=NB_HOST, port=NB_PORT, use_ssl=NB_USE_SSL, auth_token=NB_TOKEN) + except Exception as e: + logging.error(f"Failed to initialize NetBox connection: {e}") + return False + + # Ensure site and tenant associations are set up + from migration.site_tenant import ensure_site_tenant_associations + ensure_site_tenant_associations(netbox, TARGET_SITE, TARGET_TENANT) + + # Run migrations based on arguments + success = True + + if not args.extended_only: + logging.info("Starting base migration...") + success = run_base_migration(netbox) and success + + if not args.basic_only: + logging.info("Starting extended migration...") + success = run_extended_migration(netbox) and success + + if success: + logging.info("Migration completed successfully!") + else: + logging.error("Migration completed with errors. 
Check log for details.")
+
+ return success
+
+if __name__ == "__main__":
+ success = main()
+ sys.exit(0 if success else 1)
diff --git a/migration/netbox_status.py b/migration/netbox_status.py
new file mode 100644
index 0000000..440a8e7
--- /dev/null
+++ b/migration/netbox_status.py
@@ -0,0 +1,156 @@
+"""
+Helper module to determine valid NetBox statuses across versions
+Can be imported by other modules to ensure consistent status handling
+"""
+import requests
+import logging
+from migration.config import NB_HOST, NB_PORT, NB_TOKEN, NB_USE_SSL
+
+# Cache for valid status choices
+_valid_status_choices = {
+ 'prefix': None,
+ 'ip_address': None
+}
+
+def get_valid_status_choices(netbox, object_type):
+ """
+ Get valid status choices for a specific object type in NetBox
+
+ Args:
+ netbox: NetBox client instance
+ object_type: Type of object to get status choices for (e.g., 'prefix')
+
+ Returns:
+ list: List of valid status choices
+ """
+ global _valid_status_choices
+
+ # Return cached choices if available (.get avoids KeyError on unknown types)
+ if _valid_status_choices.get(object_type):
+ return _valid_status_choices[object_type]
+
+ # API endpoints for different object types
+ endpoints = {
+ 'prefix': 'ipam/prefixes',
+ 'ip_address': 'ipam/ip-addresses'
+ }
+
+ # Determine URL based on object type
+ if object_type not in endpoints:
+ logging.error(f"Invalid object type: {object_type}")
+ return ['active'] # Default fallback
+
+ protocol = "https" if NB_USE_SSL else "http"
+ headers = {"Authorization": f"Token {NB_TOKEN}"}
+
+ # DIRECT APPROACH: Get real objects and read their status structure
+ try:
+ # First try to get a site as reference - sites almost always exist
+ site_endpoint = f"{protocol}://{NB_HOST}:{NB_PORT}/api/dcim/sites/"
+ response = requests.get(site_endpoint, headers=headers, verify=NB_USE_SSL, params={"limit": 1})
+
+ if response.status_code == 200:
+ data = response.json()
+ if "results" in data and len(data["results"]) > 0:
+ site = data["results"][0]
+ if "status" in site and isinstance(site["status"], dict):
+ # Modern NetBox format with value and label
+ print("Found NetBox using dictionary status format")
+
+ # Check if we can get actual objects of requested type
+ obj_endpoint = f"{protocol}://{NB_HOST}:{NB_PORT}/api/{endpoints[object_type]}/"
+ obj_response = requests.get(obj_endpoint, headers=headers, verify=NB_USE_SSL, params={"limit": 10})
+
+ if obj_response.status_code == 200:
+ obj_data = obj_response.json()
+ if "results" in obj_data and len(obj_data["results"]) > 0:
+ # Extract all unique status values from objects
+ statuses = []
+ for obj in obj_data["results"]:
+ if "status" in obj and isinstance(obj["status"], dict):
+ status_value = obj["status"].get("value")
+ if status_value and status_value not in statuses:
+ statuses.append(status_value)
+
+ if statuses:
+ print(f"Found actual status values for {object_type}: {', '.join(statuses)}")
+ _valid_status_choices[object_type] = statuses
+ # Make sure we have common statuses
+ for common_status in ['active', 'reserved', 'deprecated', 'container']:
+ if common_status not in statuses:
+ statuses.append(common_status)
+ return statuses
+
+ # Fall back to using site status value as reference
+ site_status = site["status"]["value"]
+ print(f"Using site status '{site_status}' as reference")
+ statuses = ['active', 'reserved', 'deprecated', 'container']
+ if site_status not in statuses:
+ statuses.append(site_status)
+ _valid_status_choices[object_type] = statuses
+ return statuses
+ except Exception as e:
+ logging.error(f"Error in direct status 
detection: {str(e)}") + + # Final fallback with standard values + fallback = ['active', 'container', 'reserved', 'deprecated'] + print(f"Using fallback status choices: {', '.join(fallback)}") + _valid_status_choices[object_type] = fallback + return fallback + +def determine_prefix_status(prefix_name, comment, valid_statuses=None): + """ + Determine the appropriate NetBox status for a prefix based on its name and comments + + Args: + prefix_name: Name of the prefix from Racktables + comment: Comment for the prefix from Racktables + valid_statuses: List of valid status choices in NetBox + + Returns: + str: Most appropriate status for the prefix + """ + # Use default statuses if none provided + if valid_statuses is None: + valid_statuses = ['active', 'container', 'reserved', 'deprecated'] + + # Default to 'active' if available, otherwise first valid status + default_status = 'active' if 'active' in valid_statuses else valid_statuses[0] + + # Default to 'reserved' if name/comment are empty + if (not prefix_name or prefix_name.strip() == "") and (not comment or comment.strip() == ""): + # For empty prefixes, use reserved (if available) or first valid status + return 'reserved' if 'reserved' in valid_statuses else default_status + + # Determine status based on content patterns + lower_name = prefix_name.lower() if prefix_name else "" + lower_comment = comment.lower() if comment else "" + + # Check for hints that the prefix is specifically reserved + if any(term in lower_name or term in lower_comment for term in + ['reserved', 'hold', 'future', 'planned']): + return 'reserved' if 'reserved' in valid_statuses else default_status + + # Check for hints that the prefix is deprecated + if any(term in lower_name or term in lower_comment for term in + ['deprecated', 'obsolete', 'old', 'inactive', 'decommissioned']): + return 'deprecated' if 'deprecated' in valid_statuses else default_status + + # Check for specific hints that the prefix should be a container + if any(term in lower_name or term in lower_comment for term in + ['container', 'parent', 'supernet', 'aggregate']): + return 'container' if 'container' in valid_statuses else default_status + + # Check for hints that this is available/unused space + if any(term in lower_name or term in lower_comment for term in + ['available', 'unused', 'free', '[here be dragons', '[create network here]', 'unallocated']): + return 'container' if 'container' in valid_statuses else default_status + + # Check for hints that this is actively used + if any(term in lower_name or term in lower_comment for term in + ['in use', 'used', 'active', 'production', 'allocated']): + return 'active' if 'active' in valid_statuses else default_status + + # When we can't clearly determine from the content, default to 'active' for anything with a name/comment + # This assumes that if someone took the time to name it, it's likely in use + return default_status diff --git a/migration/set_custom_fields.py b/migration/set_custom_fields.py new file mode 100644 index 0000000..2e0d197 --- /dev/null +++ b/migration/set_custom_fields.py @@ -0,0 +1,285 @@ +#!/usr/bin/env python3 +""" +Extended custom fields script for Racktables to NetBox migration +Includes support for additional Racktables tables not covered in original migration +""" + +import requests +import json +import time +import sys +import os + +# Define BASE_DIR +BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) + +# Import configuration from config.py +sys.path.insert(0, 
os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) +from migration.config import NB_HOST, NB_PORT, NB_TOKEN, NB_USE_SSL + +# Construct API URL and token from config.py +API_URL = f"{'https' if NB_USE_SSL else 'http'}://{NB_HOST}" +if NB_PORT: + API_URL = f"{API_URL}:{NB_PORT}" +API_TOKEN = NB_TOKEN + +# Prepare headers for API requests +HEADERS = { + "Authorization": f"Token {API_TOKEN}", + "Content-Type": "application/json", + "Accept": "application/json" +} + +# Check if config appears to be default values +def check_config(): + default_token = "0123456789abcdef0123456789abcdef01234567" + if NB_TOKEN == default_token: + print("ERROR: Default API token detected in config.py.") + print("Please update migration/config.py with your actual NetBox configuration.") + print("You need to set NB_TOKEN to your actual NetBox API token.") + return False + + if NB_HOST == "localhost" and NB_PORT == 8000: + print("WARNING: Using default NetBox connection settings (localhost:8000).") + print("If this is not your actual NetBox server, update migration/config.py.") + + return True + +# Function to create a custom field +def create_custom_field(name, field_type, object_types, description="", required=False, weight=0, label=None): + """Create a custom field using the NetBox API with correct format for 4.2.6""" + + # Convert single string to list if needed + if isinstance(object_types, str): + object_types = [object_types] + + # Prepare the payload + payload = { + "name": name, + "type": field_type, + "object_types": object_types, + "description": description, + "required": required, + "weight": weight + } + + # Add label if provided + if label: + payload["label"] = label + + # Send the request + print(f"Creating custom field: {name} for {', '.join(object_types)}") + try: + response = requests.post( + f"{API_URL}/api/extras/custom-fields/", + headers=HEADERS, + data=json.dumps(payload), + timeout=10 + ) + + # Check the response + if response.status_code in (201, 200): + print(f"✓ Created custom field: {name}") + return True + else: + print(f"✗ Failed to create custom field: {name}") + print(f" Status code: {response.status_code}") + print(f" Response: {response.text}") + return False + except requests.exceptions.RequestException as e: + print(f"✗ Connection error: {str(e)}") + return False + +# Original custom fields (keeping these) +original_custom_fields = [ + # VLAN Group custom fields + {"name": "VLAN_Domain_ID", "type": "text", "object_types": ["ipam.vlangroup"], + "description": "ID for VLAN Domain", "required": True}, + + # Prefix custom fields + {"name": "Prefix_Name", "type": "text", "object_types": ["ipam.prefix"], + "description": "Name for prefix"}, + + # Device custom fields + {"name": "Device_Label", "type": "text", "object_types": ["dcim.device"], + "description": "Label for device"}, + + # VM custom fields + {"name": "VM_Asset_No", "type": "text", "object_types": ["virtualization.virtualmachine"], + "description": "Asset number for VMs"}, + {"name": "VM_Label", "type": "text", "object_types": ["virtualization.virtualmachine"], + "description": "Label for VMs"}, + + # VM Interface custom fields + {"name": "VM_Interface_Type", "type": "text", "object_types": ["virtualization.vminterface"], + "description": "Enter type for VM interface", "required": True, "label": "Custom type for VM interfaces"}, + + # Device Interface custom fields + {"name": "Device_Interface_Type", "type": "text", "object_types": ["dcim.interface"], + "description": "Enter type for interface", "required": True, 
"label": "Custom type for interfaces"}, + + # IP Address custom fields + {"name": "IP_Type", "type": "text", "object_types": ["ipam.ipaddress"], + "description": "Type of ip", "label": "Type"}, + {"name": "IP_Name", "type": "text", "object_types": ["ipam.ipaddress"], + "description": "Name of ip", "label": "Name"}, + {"name": "Interface_Name", "type": "text", "object_types": ["ipam.ipaddress"], + "description": "Name of interface for this IP", "label": "Interface Name"}, + + # Additional device custom fields + {"name": "OEM_SN_1", "type": "text", "object_types": ["dcim.device"]}, + {"name": "HW_type", "type": "text", "object_types": ["dcim.device"]}, + {"name": "FQDN", "type": "text", "object_types": ["dcim.device"]}, + {"name": "SW_type", "type": "text", "object_types": ["dcim.device"]}, + {"name": "SW_version", "type": "text", "object_types": ["dcim.device"]}, + {"name": "number_of_ports", "type": "integer", "object_types": ["dcim.device"]}, + {"name": "max_current_Ampers", "type": "integer", "object_types": ["dcim.device"]}, + {"name": "power_load_percents", "type": "integer", "object_types": ["dcim.device"]}, + {"name": "max_power_Watts", "type": "integer", "object_types": ["dcim.device"]}, + {"name": "contact_person", "type": "text", "object_types": ["dcim.device"]}, + {"name": "flash_memory_MB", "type": "integer", "object_types": ["dcim.device"]}, + {"name": "DRAM_MB", "type": "integer", "object_types": ["dcim.device"]}, + {"name": "CPU_MHz", "type": "integer", "object_types": ["dcim.device"]}, + {"name": "OEM_SN_2", "type": "text", "object_types": ["dcim.device"]}, + {"name": "Support_Contract_Expiration", "type": "text", "object_types": ["dcim.device"]}, + {"name": "HW_warranty_expiration", "type": "text", "object_types": ["dcim.device"]}, + {"name": "SW_warranty_expiration", "type": "text", "object_types": ["dcim.device"]}, + {"name": "UUID", "type": "text", "object_types": ["dcim.device"]}, + {"name": "Hypervisor", "type": "text", "object_types": ["dcim.device"]}, + {"name": "Height_units", "type": "integer", "object_types": ["dcim.device"]}, + {"name": "Slot_number", "type": "text", "object_types": ["dcim.device"]}, + {"name": "Sort_order", "type": "integer", "object_types": ["dcim.device"]}, + {"name": "Mgmt_type", "type": "text", "object_types": ["dcim.device"]}, + {"name": "base_MAC_address", "type": "text", "object_types": ["dcim.device"]}, + {"name": "RAM_MB", "type": "integer", "object_types": ["dcim.device"]}, + {"name": "Processor", "type": "text", "object_types": ["dcim.device"]}, + {"name": "Total_Disk_GB", "type": "integer", "object_types": ["dcim.device"]}, + {"name": "Processor_Count", "type": "integer", "object_types": ["dcim.device"]}, + {"name": "Service_Tag", "type": "text", "object_types": ["dcim.device"]}, + {"name": "PDU", "type": "text", "object_types": ["dcim.device"]}, + {"name": "Circuit", "type": "text", "object_types": ["dcim.device"]}, + {"name": "Contract_Number", "type": "text", "object_types": ["dcim.device"]}, + {"name": "DSP_Slot_1_Serial", "type": "text", "object_types": ["dcim.device"]}, + {"name": "DSP_Slot_2_Serial", "type": "text", "object_types": ["dcim.device"]}, + {"name": "DSP_Slot_3_Serial", "type": "text", "object_types": ["dcim.device"]}, + {"name": "DSP_Slot_4_Serial", "type": "text", "object_types": ["dcim.device"]}, + {"name": "Chassis_Serial", "type": "text", "object_types": ["dcim.device"]}, + {"name": "SBC_PO", "type": "text", "object_types": ["dcim.device"]}, + {"name": "Chassis_Model", "type": "text", "object_types": 
["dcim.device"]}, + {"name": "Application_SW_Version", "type": "text", "object_types": ["dcim.device"]}, + {"name": "RHVM_URL", "type": "text", "object_types": ["dcim.device"]}, + {"name": "TIPC_NETID", "type": "text", "object_types": ["dcim.device"]}, + {"name": "CE_IP_Active", "type": "text", "object_types": ["dcim.device"]}, + {"name": "CE_IP_Standby", "type": "text", "object_types": ["dcim.device"]}, + {"name": "GPU_Serial_Number_1", "type": "text", "object_types": ["dcim.device"]}, + {"name": "GPU_Serial_Number_2", "type": "text", "object_types": ["dcim.device"]}, +] + +# New custom fields for additional tables +new_custom_fields = [ + # Cable Management custom fields for dcim.cable + {"name": "Patch_Cable_Type", "type": "text", "object_types": ["dcim.cable"], + "description": "Type of patch cable from Racktables"}, + {"name": "Patch_Cable_Connector_A", "type": "text", "object_types": ["dcim.cable"], + "description": "A-side connector type"}, + {"name": "Patch_Cable_Connector_B", "type": "text", "object_types": ["dcim.cable"], + "description": "B-side connector type"}, + {"name": "Cable_Color", "type": "text", "object_types": ["dcim.cable"], + "description": "Color of the cable"}, + {"name": "Cable_Length", "type": "text", "object_types": ["dcim.cable"], + "description": "Length of the cable"}, + + # Virtual Services custom fields + {"name": "VS_Enabled", "type": "boolean", "object_types": ["ipam.service"], + "description": "Virtual service is enabled"}, + {"name": "VS_Type", "type": "text", "object_types": ["ipam.service"], + "description": "Type of virtual service"}, + {"name": "VS_Protocol", "type": "text", "object_types": ["ipam.service"], + "description": "Protocol used by virtual service"}, + + # NAT & Load Balancing custom fields for IP Addresses + {"name": "NAT_Type", "type": "text", "object_types": ["ipam.ipaddress"], + "description": "Type of NAT (SNAT, DNAT, etc.)"}, + {"name": "NAT_Match_IP", "type": "text", "object_types": ["ipam.ipaddress"], + "description": "Matching IP for NAT relationship"}, + {"name": "LB_Config", "type": "text", "object_types": ["ipam.ipaddress"], + "description": "Load balancer configuration"}, + {"name": "LB_Pool", "type": "text", "object_types": ["ipam.ipaddress"], + "description": "Load balancer pool membership"}, + {"name": "RS_Pool", "type": "text", "object_types": ["ipam.ipaddress"], + "description": "Real server pool"}, + + # Monitoring custom fields for devices + {"name": "Cacti_Server", "type": "text", "object_types": ["dcim.device", "virtualization.virtualmachine"], + "description": "Cacti server monitoring this device"}, + {"name": "Cacti_Graph_ID", "type": "text", "object_types": ["dcim.device", "virtualization.virtualmachine"], + "description": "ID of Cacti graph for this device"}, + {"name": "Monitoring_URL", "type": "text", "object_types": ["dcim.device", "virtualization.virtualmachine"], + "description": "URL to monitoring system for this device"}, + + # Attachment custom fields + {"name": "File_References", "type": "text", "object_types": ["dcim.device", "virtualization.virtualmachine"], + "description": "References to attached files from Racktables"}, + {"name": "File_Description", "type": "text", "object_types": ["extras.objectchange"], + "description": "Description of attached file"} +] + +def main(): + """Main function to create custom fields""" + # Verify configuration + if not check_config(): + return False + + # Combine all custom fields + all_custom_fields = original_custom_fields + new_custom_fields + + print(f"Creating 
{len(all_custom_fields)} custom fields in NetBox...") + + success_count = 0 + failure_count = 0 + + for field in all_custom_fields: + success = create_custom_field( + field["name"], + field["type"], + field["object_types"], + field.get("description", ""), + field.get("required", False), + field.get("weight", 0), + field.get("label") + ) + + if success: + success_count += 1 + else: + failure_count += 1 + + # Add a short delay to avoid rate limiting + time.sleep(0.5) + + print(f"\nSummary:") + print(f"- Successfully created: {success_count}") + print(f"- Failed to create: {failure_count}") + + # Check MAX_PAGE_SIZE setting + print("\nChecking MAX_PAGE_SIZE setting...") + try: + response = requests.get(f"{API_URL}/api/users/config/", headers=HEADERS) + if response.status_code == 200: + config = response.json() + if 'MAX_PAGE_SIZE' in config and config['MAX_PAGE_SIZE'] == 0: + print("✓ MAX_PAGE_SIZE is already set to 0") + else: + print("i MAX_PAGE_SIZE needs to be set to 0 manually") + print(" Edit the netbox.env file and add: MAX_PAGE_SIZE=0") + print(" Then restart NetBox: docker-compose restart netbox") + else: + print(f"✗ Failed to check MAX_PAGE_SIZE setting") + print(f" Status code: {response.status_code}") + if response.text: + print(f" Response: {response.text}") + except requests.exceptions.RequestException as e: + print(f"✗ Failed to check MAX_PAGE_SIZE setting: {str(e)}") + +if __name__ == "__main__": + main() diff --git a/migration/site_tenant.py b/migration/site_tenant.py new file mode 100644 index 0000000..5f86ebd --- /dev/null +++ b/migration/site_tenant.py @@ -0,0 +1,105 @@ +""" +Add site and tenant associations to all NetBox objects +""" +import os +import sys +import logging +from slugify import slugify + +def ensure_site_tenant_associations(netbox, site_name, tenant_name): + """ + Ensures that site and tenant IDs are properly retrieved and set globally + + Args: + netbox: NetBox client instance + site_name: Site name to use + tenant_name: Tenant name to use + + Returns: + tuple: (site_id, tenant_id) or (None, None) if not available + """ + # Set up logging to capture detailed information + logging.basicConfig( + level=logging.INFO, + format='%(asctime)s - %(levelname)s - %(message)s', + handlers=[ + logging.FileHandler("association_debug.log"), + logging.StreamHandler(sys.stdout) + ] + ) + + site_id = None + tenant_id = None + + # Handle site association + if site_name: + logging.info(f"Looking up site: {site_name}") + try: + sites = list(netbox.dcim.get_sites(name=site_name)) + if sites: + site = sites[0] + # Extract ID based on available format (could be property or dict key) + site_id = site.id if hasattr(site, 'id') else site.get('id') + logging.info(f"Found site '{site_name}' with ID: {site_id}") + else: + # Try to create the site if it doesn't exist + logging.info(f"Site '{site_name}' not found, creating it...") + try: + new_site = netbox.dcim.create_site(site_name, slugify(site_name)) + site_id = new_site.id if hasattr(new_site, 'id') else new_site.get('id') + logging.info(f"Created site '{site_name}' with ID: {site_id}") + except Exception as e: + logging.error(f"Failed to create site '{site_name}': {str(e)}") + except Exception as e: + logging.error(f"Error looking up site '{site_name}': {str(e)}") + + # Handle tenant association + if tenant_name: + logging.info(f"Looking up tenant: {tenant_name}") + try: + tenants = list(netbox.tenancy.get_tenants(name=tenant_name)) + if tenants: + tenant = tenants[0] + # Extract ID based on available format (could be property or 
dict key) + tenant_id = tenant.id if hasattr(tenant, 'id') else tenant.get('id') + logging.info(f"Found tenant '{tenant_name}' with ID: {tenant_id}") + else: + # Try to create the tenant if it doesn't exist + logging.info(f"Tenant '{tenant_name}' not found, creating it...") + try: + new_tenant = netbox.tenancy.create_tenant(tenant_name, slugify(tenant_name)) + tenant_id = new_tenant.id if hasattr(new_tenant, 'id') else new_tenant.get('id') + logging.info(f"Created tenant '{tenant_name}' with ID: {tenant_id}") + except Exception as e: + logging.error(f"Failed to create tenant '{tenant_name}': {str(e)}") + except Exception as e: + logging.error(f"Error looking up tenant '{tenant_name}': {str(e)}") + + # Save to environment variables for consistent access + if site_id: + os.environ['NETBOX_SITE_ID'] = str(site_id) + if tenant_id: + os.environ['NETBOX_TENANT_ID'] = str(tenant_id) + + return site_id, tenant_id + +def get_site_tenant_params(): + """ + Get site and tenant parameters for API calls + + Returns: + dict: Parameters for site and tenant to be passed to API calls + """ + params = {} + + # Get site ID from environment or global variable + site_id = os.environ.get('NETBOX_SITE_ID') + if site_id: + params['site'] = site_id + + # Get tenant ID from environment or global variable + tenant_id = os.environ.get('NETBOX_TENANT_ID') + if tenant_id: + params['tenant'] = tenant_id + + return params diff --git a/migration/sites.py b/migration/sites.py new file mode 100644 index 0000000..1f22d73 --- /dev/null +++ b/migration/sites.py @@ -0,0 +1,116 @@ +""" +Site and rack related functions for the Racktables to NetBox migration +""" +from slugify import slugify + +from racktables_netbox_migration.utils import get_db_connection, get_cursor +from racktables_netbox_migration.db import ( + getRowsAtSite, getRacksAtRow, getAtomsAtRack, getRackHeight, getTags +) +from racktables_netbox_migration.config import ( + SITE_NAME_LENGTH_THRESHOLD, TARGET_SITE, TARGET_TENANT, TARGET_TENANT_ID +) + +def create_sites_and_racks(netbox): + """ + Create sites, rows, and racks from Racktables in NetBox + + Args: + netbox: NetBox client instance + """ + print("Creating sites, rows, and racks") + + # Skip if site filtering is enabled - only process the target site + if TARGET_SITE: + existing_sites = netbox.dcim.get_sites(name=TARGET_SITE) + if not existing_sites: + print(f"Target site '{TARGET_SITE}' not found in NetBox") + return + + print(f"Site filtering enabled - only processing target site: {TARGET_SITE}") + sites_to_process = [(site['id'], site['name'], '', '', '') for site in existing_sites] + else: + # Get all locations from Racktables + with get_db_connection() as connection: + with get_cursor(connection) as cursor: + cursor.execute("SELECT id, name, label, asset_no, comment FROM Object WHERE objtype_id=1562") + sites_to_process = cursor.fetchall() + + for site_data in sites_to_process: + site_id = site_data["id"] if isinstance(site_data, dict) else site_data[0] + site_name = site_data["name"] if isinstance(site_data, dict) else site_data[1] + site_label = site_data["label"] if isinstance(site_data, dict) else site_data[2] + site_asset_no = site_data["asset_no"] if isinstance(site_data, dict) else site_data[3] + site_comment = site_data["comment"] if isinstance(site_data, dict) else site_data[4] + + # Skip if filtering by site and not the target site + if TARGET_SITE and site_name != TARGET_SITE: + continue + + # Check if site exists or create it + existing_site = netbox.dcim.get_sites(name=site_name) + if not 
existing_site: + # Skip if this is likely a location rather than a site + if len(site_name) > SITE_NAME_LENGTH_THRESHOLD: + print(f"Skipping probable location (address): {site_name}") + continue + + # Add tenant parameter if TARGET_TENANT_ID is specified + tenant_param = {} + if TARGET_TENANT_ID: + tenant_param = {"tenant": TARGET_TENANT_ID} + + print(f"Creating site: {site_name}") + try: + netbox.dcim.create_site(site_name, slugify(site_name), **tenant_param) + except Exception as e: + print(f"Failed to create site {site_name}: {e}") + continue + + # Process rows in this site + create_rows_and_racks(netbox, site_id, site_name) + +def create_rows_and_racks(netbox, site_id, site_name): + """ + Create rows and racks for a site + + Args: + netbox: NetBox client instance + site_id: Racktables site ID + site_name: Site name + """ + # Get all rows in this site + for row_id, row_name, row_label, row_asset_no, row_comment in getRowsAtSite(site_id): + # Process racks in this row + for rack_id, rack_name, rack_label, rack_asset_no, rack_comment in getRacksAtRow(row_id): + # Get rack height and tags + rack_tags = getTags("rack", rack_id) + rack_height = getRackHeight(rack_id) + + # Format the rack name to include site and row + if not rack_name.startswith(row_name.rstrip(".") + "."): + rack_name = site_name + "." + row_name + "." + rack_name + else: + rack_name = site_name + "." + rack_name + + # Add tenant parameter if TARGET_TENANT_ID is specified + tenant_param = {} + if TARGET_TENANT_ID: + tenant_param = {"tenant": TARGET_TENANT_ID} + + print(f"Creating rack: {rack_name}") + try: + # Create the rack + rack = netbox.dcim.create_rack( + name=rack_name, + comment=rack_comment[:200] if rack_comment else "", + site_name=site_name, + u_height=rack_height, + tags=rack_tags, + **tenant_param # Add tenant parameter + ) + + # Add rack to global tracking + print(f"Created rack {rack_name} (ID: {rack['id']})") + except Exception as e: + print(f"Failed to create rack {rack_name}: {e}") diff --git a/migration/utils.py b/migration/utils.py new file mode 100644 index 0000000..33cea17 --- /dev/null +++ b/migration/utils.py @@ -0,0 +1,172 @@ +""" +Utility functions for the Racktables to NetBox migration tool +""" +import os +import pickle +import time +from contextlib import contextmanager +import pymysql +from slugify import slugify + +from migration.config import DB_CONFIG, STORE_DATA, TARGET_TENANT_ID + +def error_log(string): + """ + Log an error message to the errors file + + Args: + string: Error message to log + """ + with open("errors", "a") as error_file: + error_file.write(string + "\n") + +def pickleLoad(filename, default): + """ + Load data from a pickle file with fallback to default value + + Args: + filename: Path to pickle file + default: Default value to return if file doesn't exist + + Returns: + Unpickled data or default value + """ + if os.path.exists(filename): + with open(filename, 'rb') as file: + data = pickle.load(file) + return data + return default + +def pickleDump(filename, data): + """ + Save data to a pickle file if storage is enabled + + Args: + filename: Path to pickle file + data: Data to pickle + """ + if STORE_DATA: + with open(filename, 'wb') as file: + pickle.dump(data, file) + +@contextmanager +def get_db_connection(): + """ + Create a database connection context manager + + Yields: + pymysql.Connection: Database connection + """ + connection = None + try: + connection = pymysql.connect(**DB_CONFIG) + yield connection + except pymysql.MySQLError as e: + print(f"Database 
connection error: {e}") + raise + finally: + if connection: + connection.close() + +@contextmanager +def get_cursor(connection): + """ + Create a database cursor context manager + + Args: + connection: Database connection + + Yields: + pymysql.cursors.Cursor: Database cursor + """ + cursor = None + try: + cursor = connection.cursor() + yield cursor + finally: + if cursor: + cursor.close() + +def create_global_tags(netbox, tags): + """ + Create tags in NetBox if they don't already exist + + Args: + netbox: NetBox client instance + tags: Set of tag names to create + """ + # Convert tags from NetBox to a list of names + tag_objects = list(netbox.extras.get_tags()) + global_tags = set() + + for tag in tag_objects: + if hasattr(tag, 'name'): + global_tags.add(tag.name) + elif isinstance(tag, dict) and 'name' in tag: + global_tags.add(tag['name']) + + for tag in tags: + if tag not in global_tags: + try: + netbox.extras.create_tag(tag, slugify(tag)) + except Exception as e: + print(f"Error creating tag {tag}: {e}") + global_tags.add(tag) + +def ensure_tag_exists(netbox, tag_name): + """ + Ensure a tag exists in NetBox before using it + + Args: + netbox: NetBox client instance + tag_name: Name of the tag + + Returns: + bool: True if tag exists or was created, False otherwise + """ + try: + # Check if tag exists + tags = list(netbox.extras.get_tags(name=tag_name)) + if tags: + return True + + # Create the tag if it doesn't exist + tag_slug = slugify(tag_name) + netbox.extras.create_tag( + name=tag_name, + slug=tag_slug + ) + print(f"Created tag: {tag_name}") + return True + except Exception as e: + print(f"Failed to create tag {tag_name}: {e}") + return False + +def format_prefix_description(prefix_name, tags, comment): + """ + Format a description for a prefix including name, tags, and comment + + Args: + prefix_name: Name of the prefix + tags: List of tag objects + comment: Comment for the prefix + + Returns: + str: Formatted description string + """ + # Extract tag names consistently whether they're objects or dicts + tag_names = [] + for tag in tags: + if hasattr(tag, 'name'): + tag_names.append(tag.name) + elif isinstance(tag, dict) and 'name' in tag: + tag_names.append(tag['name']) + + tag_str = ", ".join(tag_names) if tag_names else "" + description = f"{prefix_name}" + if tag_str: + description += f" [{tag_str}]" + if comment: + description += f" - {comment}" if description else comment + + return description[:200] if description else "" diff --git a/migration/vlans.py b/migration/vlans.py new file mode 100644 index 0000000..ef37f5e --- /dev/null +++ b/migration/vlans.py @@ -0,0 +1,137 @@ +""" +VLAN-related migration functions +""" +from slugify import slugify + +from racktables_netbox_migration.utils import get_db_connection, get_cursor, pickleDump + +def create_vlan_groups(netbox): + """ + Create VLAN groups from Racktables in NetBox + + Args: + netbox: NetBox client instance + """ + print("Creating VLAN Groups") + + # Map VLAN domain IDs to names + vlan_domain_id_names = {} + + # Get existing VLAN groups to avoid duplicates + existing_vlan_groups = set(vlan_group['name'] for vlan_group in netbox.ipam.get_vlan_groups()) + + # Get VLAN domains from Racktables + with get_db_connection() as connection: + with get_cursor(connection) as cursor: + cursor.execute("SELECT id,description FROM VLANDomain") + vlan_domains = cursor.fetchall() + + for row in vlan_domains: + domain_id, description = row["id"], row["description"] + + vlan_domain_id_names[domain_id] = description + + # Skip if VLAN group 
already exists + if description in existing_vlan_groups: + print(f"VLAN group {description} already exists") + continue + + # Create the VLAN group + try: + netbox.ipam.create_vlan_group( + name=description, + slug=slugify(description), + custom_fields={"VLAN_Domain_ID": str(domain_id)} + ) + + print(f"Created VLAN group: {description}") + existing_vlan_groups.add(description) + except Exception as e: + print(f"Error creating VLAN group {description}: {e}") + + return vlan_domain_id_names + +def create_vlans(netbox): + """ + Create VLANs from Racktables in NetBox + + Args: + netbox: NetBox client instance + """ + print("Creating VLANs") + + # Get VLAN domain mappings + vlan_domain_id_names = {} + with get_db_connection() as connection: + with get_cursor(connection) as cursor: + cursor.execute("SELECT id,description FROM VLANDomain") + for row in cursor.fetchall(): + vlan_domain_id_names[row["id"]] = row["description"] + + # Track VLAN mappings for network associations + network_id_group_name_id = {} + + # Track VLANs by group to ensure unique names + vlans_for_group = {} + + # Process IPv4 and IPv6 VLANs + for IP in ("4", "6"): + with get_db_connection() as connection: + with get_cursor(connection) as cursor: + cursor.execute(f"SELECT domain_id,vlan_id,ipv{IP}net_id FROM VLANIPv{IP}") + vlans = cursor.fetchall() + + for row in vlans: + domain_id, vlan_id, net_id = row["domain_id"], row["vlan_id"], row[f"ipv{IP}net_id"] + + # Get VLAN description + cursor.execute( + "SELECT vlan_descr FROM VLANDescription WHERE domain_id=%s AND vlan_id=%s", + (domain_id, vlan_id) + ) + result = cursor.fetchone() + vlan_name = result["vlan_descr"] if result else None + + # Skip if no name available + if not vlan_name: + continue + + # Get VLAN group name + vlan_group_name = vlan_domain_id_names[domain_id] + + # Initialize tracking for this group + if vlan_group_name not in vlans_for_group: + vlans_for_group[vlan_group_name] = set() + + # Ensure unique name within group + name = vlan_name + if name in vlans_for_group[vlan_group_name]: + counter = 1 + while True: + name = f"{vlan_name}-{counter}" + if name not in vlans_for_group[vlan_group_name]: + break + counter += 1 + + # Create the VLAN + try: + created_vlan = netbox.ipam.create_vlan( + group={"name": vlan_group_name}, + vid=vlan_id, + vlan_name=name + ) + + # Store mapping for network association + network_id_group_name_id[net_id] = (vlan_group_name, name, created_vlan['id']) + + # Track created VLAN name + vlans_for_group[vlan_group_name].add(name) + + print(f"Created VLAN {name} (ID: {vlan_id}) in group {vlan_group_name}") + except Exception as e: + print(f"Error creating VLAN {name} (ID: {vlan_id}): {e}") + + # Save network to VLAN mappings for IP networks creation + pickleDump('network_id_group_name_id', network_id_group_name_id) + + return network_id_group_name_id diff --git a/migration/vms.py b/migration/vms.py new file mode 100644 index 0000000..3a88454 --- /dev/null +++ b/migration/vms.py @@ -0,0 +1,295 @@ +""" +Virtual machine creation and management functions +""" +from slugify import slugify + +from racktables_netbox_migration.utils import get_db_connection, get_cursor +from racktables_netbox_migration.db import getTags +from racktables_netbox_migration.config import TARGET_SITE, TARGET_SITE_ID, TARGET_TENANT, TARGET_TENANT_ID + +def create_vms(netbox, create_mounted=True, create_unmounted=True): + """ + Create VMs and their clusters in NetBox + + Args: + netbox: NetBox client instance + create_mounted: Whether to create VMs in clusters + 
create_unmounted: Whether to create VMs not in clusters + """ + # Skip if not creating any VMs + if not create_mounted and not create_unmounted: + return + + print("Creating VM clusters and virtual machines") + + # Get existing VM data to avoid duplicates + existing_cluster_types = set(cluster_type['name'] for cluster_type in netbox.virtualization.get_cluster_types()) + existing_cluster_names = set(cluster['name'] for cluster in netbox.virtualization.get_clusters()) + + # If tenant filtering is enabled, filter VMs by tenant + vm_filters = {} + if TARGET_TENANT_ID: + vm_filters["tenant_id"] = TARGET_TENANT_ID + + existing_virtual_machines = set(virtual_machine['name'] for virtual_machine in netbox.virtualization.get_virtual_machines(**vm_filters)) + + # Site filtering for clusters + site_filter = {} + if TARGET_SITE_ID: + site_filter = {"site": TARGET_SITE_ID} + print(f"Filtering VMs by site ID: {TARGET_SITE_ID}") + + # Create VMs in clusters if enabled + if create_mounted: + create_mounted_vms( + netbox, + existing_cluster_types, + existing_cluster_names, + existing_virtual_machines, + site_filter + ) + + # Create VMs not in clusters if enabled + if create_unmounted: + create_unmounted_vms( + netbox, + existing_cluster_types, + existing_cluster_names, + existing_virtual_machines, + site_filter + ) + +def create_mounted_vms(netbox, existing_cluster_types, existing_cluster_names, existing_virtual_machines, site_filter={}): + """ + Create VMs that exist in clusters + + Args: + netbox: NetBox client instance + existing_cluster_types: Set of existing cluster type names + existing_cluster_names: Set of existing cluster names + existing_virtual_machines: Set of existing VM names + site_filter: Optional site filter dict + """ + print("Creating VMs in clusters") + + vm_counter = 0 + with get_db_connection() as connection: + with get_cursor(connection) as cursor: + # Get clusters from Racktables + cursor.execute("SELECT id,name,asset_no,label FROM Object WHERE objtype_id=1505") + clusters = cursor.fetchall() + + for row in clusters: + cluster_id, cluster_name, asset_no, label = row["id"], row["name"], row["asset_no"], row["label"] + + # Create cluster type if needed + if cluster_name not in existing_cluster_types: + try: + netbox.virtualization.create_cluster_type( + cluster_name, + slugify(cluster_name) + ) + existing_cluster_types.add(cluster_name) + print(f"Created cluster type: {cluster_name}") + except Exception as e: + print(f"Error creating cluster type {cluster_name}: {e}") + + # Add tenant parameter if TARGET_TENANT_ID is specified + tenant_param = {} + if TARGET_TENANT_ID: + tenant_param = {"tenant": TARGET_TENANT_ID} + + # Create cluster if needed + if cluster_name not in existing_cluster_names: + try: + netbox.virtualization.create_cluster( + cluster_name, + cluster_name, + **site_filter, + **tenant_param + ) + existing_cluster_names.add(cluster_name) + print(f"Created cluster: {cluster_name}") + except Exception as e: + print(f"Error creating cluster {cluster_name}: {e}") + + # Get VMs in this cluster + cursor.execute( + "SELECT child_entity_type,child_entity_id FROM EntityLink WHERE parent_entity_id=%s", + (cluster_id,) + ) + child_virtual_machines = cursor.fetchall() + + for child_row in child_virtual_machines: + child_entity_type, child_entity_id = child_row["child_entity_type"], child_row["child_entity_id"] + + # Get VM details + cursor.execute( + "SELECT name,label,comment,objtype_id,asset_no FROM Object WHERE id=%s", + (child_entity_id,) + ) + vm_row = cursor.fetchone() + + if not 
vm_row: + continue + + vm_name = vm_row["name"] + vm_label = vm_row["label"] + vm_comment = vm_row["comment"] + vm_objtype_id = vm_row["objtype_id"] + vm_asset_no = vm_row["asset_no"] + + # Skip if not a VM or no name + if vm_objtype_id != 1504 or not vm_name: + continue + + vm_name = vm_name.strip() + + # Skip if VM already exists + if vm_name in existing_virtual_machines: + print(f"VM {vm_name} already exists") + continue + + # Get VM tags + vm_tags = getTags("object", child_entity_id) + + # Add tenant parameter if TARGET_TENANT_ID is specified + tenant_param = {} + if TARGET_TENANT_ID: + tenant_param = {"tenant": TARGET_TENANT_ID} + + # Create the VM + try: + netbox.virtualization.create_virtual_machine( + vm_name, + cluster_name, + tags=vm_tags, + comments=vm_comment[:200] if vm_comment else "", + custom_fields={ + "VM_Label": vm_label[:200] if vm_label else "", + "VM_Asset_No": vm_asset_no if vm_asset_no else "" + }, + **tenant_param # Add tenant parameter + ) + + existing_virtual_machines.add(vm_name) + vm_counter += 1 + print(f"Created VM {vm_name} in cluster {cluster_name}") + except Exception as e: + print(f"Error creating VM {vm_name}: {e}") + + print(f"Created {vm_counter} VMs in clusters") + +def create_unmounted_vms(netbox, existing_cluster_types, existing_cluster_names, existing_virtual_machines, site_filter={}): + """ + Create VMs that are not in clusters + + Args: + netbox: NetBox client instance + existing_cluster_types: Set of existing cluster type names + existing_cluster_names: Set of existing cluster names + existing_virtual_machines: Set of existing VM names + site_filter: Optional site filter dict + """ + print("Creating unmounted VMs") + + # Create a special cluster for unmounted VMs + unmounted_cluster_name = "Unmounted Cluster" + + # Create cluster type if needed + if unmounted_cluster_name not in existing_cluster_types: + try: + netbox.virtualization.create_cluster_type( + unmounted_cluster_name, + slugify(unmounted_cluster_name) + ) + existing_cluster_types.add(unmounted_cluster_name) + print(f"Created cluster type: {unmounted_cluster_name}") + except Exception as e: + print(f"Error creating cluster type {unmounted_cluster_name}: {e}") + + # Add tenant parameter if TARGET_TENANT_ID is specified + tenant_param = {} + if TARGET_TENANT_ID: + tenant_param = {"tenant": TARGET_TENANT_ID} + + # Create cluster if needed + if unmounted_cluster_name not in existing_cluster_names: + try: + netbox.virtualization.create_cluster( + unmounted_cluster_name, + unmounted_cluster_name, + **site_filter, + **tenant_param + ) + existing_cluster_names.add(unmounted_cluster_name) + print(f"Created cluster: {unmounted_cluster_name}") + except Exception as e: + print(f"Error creating cluster {unmounted_cluster_name}: {e}") + + # Get all VMs from Racktables that aren't in a cluster + with get_db_connection() as connection: + with get_cursor(connection) as cursor: + # Get all VMs + cursor.execute("SELECT id,name,label,comment,objtype_id,asset_no FROM Object WHERE objtype_id=1504") + vms = cursor.fetchall() + + # Get VMs that are in clusters + cursor.execute(""" + SELECT child_entity_id + FROM EntityLink + WHERE parent_entity_type='object' + AND child_entity_type='object' + AND child_entity_id IN (SELECT id FROM Object WHERE objtype_id=1504) + """) + mounted_vm_ids = set(row["child_entity_id"] for row in cursor.fetchall()) + + # Process VMs not in clusters + vm_counter = 0 + for vm in vms: + vm_id = vm["id"] + vm_name = vm["name"] + vm_label = vm["label"] + vm_comment = vm["comment"] + 
vm_asset_no = vm["asset_no"]
+
+ # Skip if already in a cluster or no name
+ if vm_id in mounted_vm_ids or not vm_name:
+ continue
+
+ vm_name = vm_name.strip()
+
+ # Skip if VM already exists
+ if vm_name in existing_virtual_machines:
+ print(f"VM {vm_name} already exists")
+ continue
+
+ # Get VM tags
+ vm_tags = getTags("object", vm_id)
+
+ # Add tenant parameter if TARGET_TENANT_ID is specified
+ tenant_param = {}
+ if TARGET_TENANT_ID:
+ tenant_param = {"tenant": TARGET_TENANT_ID}
+
+ # Create the VM
+ try:
+ netbox.virtualization.create_virtual_machine(
+ vm_name,
+ unmounted_cluster_name,
+ tags=vm_tags,
+ comments=vm_comment[:200] if vm_comment else "",
+ custom_fields={
+ "VM_Label": vm_label[:200] if vm_label else "",
+ "VM_Asset_No": vm_asset_no if vm_asset_no else ""
+ },
+ **tenant_param # Add tenant parameter
+ )
+
+ existing_virtual_machines.add(vm_name)
+ vm_counter += 1
+ print(f"Created unmounted VM: {vm_name}")
+ except Exception as e:
+ print(f"Error creating VM {vm_name}: {e}")
+
+ print(f"Created {vm_counter} unmounted VMs")
diff --git a/requirements.txt b/requirements.txt
new file mode 100644
index 0000000..d845c0c
--- /dev/null
+++ b/requirements.txt
@@ -0,0 +1,6 @@
+pynetbox>=6.6.0
+python-slugify>=5.0.0
+pymysql>=1.0.0
+ipaddress>=1.0.0
+requests>=2.25.0
+beautifulsoup4>=4.9.0
diff --git a/free.py b/scripts/free.py
similarity index 100%
rename from free.py
rename to scripts/free.py
diff --git a/scripts/http-based-import.py b/scripts/http-based-import.py
new file mode 100644
index 0000000..e7da893
--- /dev/null
+++ b/scripts/http-based-import.py
@@ -0,0 +1,272 @@
+#!/usr/bin/env python3
+"""
+Network prefix import script using direct HTTP API calls
+No reliance on pynetbox library for maximum compatibility
+"""
+import re
+import sys
+import time
+import json
+import requests
+import urllib3
+
+# Disable SSL warnings
+urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
+
+# NetBox connection details - UPDATE THESE
+NETBOX_URL = "http://localhost:8000"
+API_TOKEN = "YOUR API TOKEN HERE" # Replace with your NetBox API token
+INPUT_FILE = "paste.txt" # Network data file: select all and copy (Ctrl+A, Ctrl+C) the Racktables IPv4 space page, e.g. https://racktables.yourdomain.com/index.php?page=ipv4space&tab=default&eid=ALL, then paste it into this file
+
+# Constants
+VERIFY_SSL = False # Set to True if using valid SSL certificate
+BATCH_SIZE = 10 # Number of prefixes to process at once
+DELAY = 0.5 # Delay between batches in seconds
+
+def test_api_connection():
+ """Test basic connection to NetBox API"""
+ # Create session with standard headers
+ session = requests.Session()
+ session.headers.update({
+ "Authorization": f"Token {API_TOKEN}",
+ "Content-Type": "application/json",
+ "Accept": "application/json"
+ })
+ session.verify = VERIFY_SSL
+
+ try:
+ # Check if we can connect to the API
+ response = session.get(f"{NETBOX_URL}/api/status/")
+ if response.status_code == 200:
+ data = response.json()
+ version = data.get("netbox-version", "unknown")
+ print(f"✅ Connected to NetBox {version}")
+
+ # Test authentication by trying to access an authenticated endpoint
+ auth_response = session.get(f"{NETBOX_URL}/api/users/users/")
+ if auth_response.status_code == 200:
+ print(f"✅ Authentication successful")
+ return session
+ elif auth_response.status_code == 403:
+ print(f"❌ Authentication failed: Permission denied")
+ print(f"Your token might not have the required permissions")
+ return None
+ else:
+ print(f"❌ Authentication failed: {auth_response.status_code}")
+ print(f"Response: 
{auth_response.text}") + return None + else: + print(f"❌ Connection failed: HTTP {response.status_code}") + print(f"Response: {response.text}") + return None + except Exception as e: + print(f"❌ Connection error: {str(e)}") + return None + +def read_prefixes_from_file(filename): + """Read network prefixes from a file""" + try: + with open(filename, 'r') as f: + content = f.read() + except Exception as e: + print(f"Error reading file {filename}: {str(e)}") + return [] + + # Extract network prefixes using regex + pattern = r'(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}/\d{1,2})\s+([^\t]+)\t+(\d+)' + matches = re.findall(pattern, content) + + # Convert matches to a list of dicts with the data we need + prefixes = [] + for prefix, name, capacity in matches: + name = name.strip() + if name == "[Here be dragons.] [create network here]": + name = f"Unused network - {prefix}" + + prefixes.append({ + "prefix": prefix, + "description": name, + "status": "active" + }) + + return prefixes + +def create_test_prefix(session): + """Test creating a single prefix""" + test_prefix = "192.168.254.0/24" + test_data = { + "prefix": test_prefix, + "description": "Test Prefix - Delete Me", + "status": "active" + } + + print(f"Testing prefix creation with {test_prefix}...") + + # Check if the prefix already exists + check_url = f"{NETBOX_URL}/api/ipam/prefixes/?prefix={test_prefix}" + try: + response = session.get(check_url) + exists = False + + if response.status_code == 200: + data = response.json() + if data["count"] > 0: + exists = True + existing_id = data["results"][0]["id"] + print(f"Test prefix already exists (ID: {existing_id})") + + # Try to delete it + delete_response = session.delete(f"{NETBOX_URL}/api/ipam/prefixes/{existing_id}/") + if delete_response.status_code == 204: + print(f"Successfully deleted existing test prefix") + else: + print(f"Could not delete existing prefix: HTTP {delete_response.status_code}") + print(f"Response: {delete_response.text}") + else: + print(f"Error checking for existing prefix: HTTP {response.status_code}") + print(f"Response: {response.text}") + return False + except Exception as e: + print(f"Error checking for existing prefix: {str(e)}") + return False + + # Create the test prefix + try: + response = session.post( + f"{NETBOX_URL}/api/ipam/prefixes/", + data=json.dumps(test_data) + ) + + if response.status_code == 201: + data = response.json() + prefix_id = data.get("id") + print(f"✅ Test prefix created successfully (ID: {prefix_id})") + + # Try to delete it + delete_response = session.delete(f"{NETBOX_URL}/api/ipam/prefixes/{prefix_id}/") + if delete_response.status_code == 204: + print(f"✅ Test prefix deleted successfully") + else: + print(f"⚠️ Could not delete test prefix: HTTP {delete_response.status_code}") + + return True + else: + print(f"❌ Error creating test prefix: HTTP {response.status_code}") + print(f"Response: {response.text}") + return False + except Exception as e: + print(f"❌ Error creating test prefix: {str(e)}") + return False + +def import_prefixes(session, prefixes): + """Import prefixes into NetBox""" + success_count = 0 + error_count = 0 + skip_count = 0 + + # Process prefixes in batches + total_batches = (len(prefixes) + BATCH_SIZE - 1) // BATCH_SIZE + + for batch_index in range(total_batches): + start_idx = batch_index * BATCH_SIZE + end_idx = min(start_idx + BATCH_SIZE, len(prefixes)) + batch = prefixes[start_idx:end_idx] + + print(f"\nProcessing batch {batch_index + 1}/{total_batches} ({len(batch)} prefixes)") + + for prefix_data in batch: + prefix 
= prefix_data["prefix"] + + # Check if prefix already exists + check_url = f"{NETBOX_URL}/api/ipam/prefixes/?prefix={prefix}" + try: + response = session.get(check_url) + + if response.status_code == 200: + data = response.json() + if data["count"] > 0: + print(f" Skipping existing prefix: {prefix}") + skip_count += 1 + continue + else: + print(f" Warning: Error checking for existing prefix: HTTP {response.status_code}") + except Exception as e: + print(f" Warning: Error checking for existing prefix: {str(e)}") + + # Create the prefix + try: + response = session.post( + f"{NETBOX_URL}/api/ipam/prefixes/", + data=json.dumps(prefix_data) + ) + + if response.status_code == 201: + data = response.json() + prefix_id = data.get("id") + print(f" Created: {prefix} - {prefix_data['description']} (ID: {prefix_id})") + success_count += 1 + else: + print(f" Error creating {prefix}: HTTP {response.status_code}") + print(f" Response: {response.text}") + error_count += 1 + except Exception as e: + print(f" Error creating {prefix}: {str(e)}") + error_count += 1 + + # Delay between batches to avoid overwhelming the API + if batch_index < total_batches - 1: + time.sleep(DELAY) + + return success_count, skip_count, error_count + +def main(): + """Main function to run the import""" + print(f"Direct API Network Import Script") + print(f"-------------------------------") + + # Check connection to NetBox + print("\n1. Testing API connection and authentication...") + session = test_api_connection() + if not session: + print("Aborting due to connection or authentication issues.") + return 1 + + # Test prefix creation + print("\n2. Testing prefix creation capability...") + if not create_test_prefix(session): + print("Your token doesn't have permission to create prefixes.") + print("Please update your token permissions and try again.") + return 1 + + # Read prefixes from file + print("\n3. Reading prefixes from file...") + prefixes = read_prefixes_from_file(INPUT_FILE) + if not prefixes: + print(f"No prefixes found in {INPUT_FILE}. Aborting.") + return 1 + + print(f"Found {len(prefixes)} prefixes to import") + + # Confirm before proceeding + confirm = input("\nContinue with import? (y/n): ") + if confirm.lower() != 'y': + print("Import cancelled.") + return 0 + + # Import prefixes + print("\n4. Importing prefixes...") + start_time = time.time() + success, skipped, errors = import_prefixes(session, prefixes) + elapsed = time.time() - start_time + + # Print summary + print(f"\nImport completed in {elapsed:.1f} seconds") + print(f"Summary:") + print(f"- Prefixes created: {success}") + print(f"- Prefixes skipped (already exist): {skipped}") + print(f"- Errors: {errors}") + + return 0 + +if __name__ == "__main__": + sys.exit(main()) diff --git a/scripts/max-page-size-check.sh b/scripts/max-page-size-check.sh new file mode 100644 index 0000000..3ef44cc --- /dev/null +++ b/scripts/max-page-size-check.sh @@ -0,0 +1,70 @@ +#!/bin/bash +# Script to check or modify MAX_PAGE_SIZE in NetBox Docker environment + +# Path to your NetBox Docker installation - MODIFY THIS +NETBOX_DOCKER_PATH="/path/to/netbox-docker" +NETBOX_ENV_FILE="$NETBOX_DOCKER_PATH/env/netbox.env" + +# Check if netbox.env exists +if [ ! -f "$NETBOX_ENV_FILE" ]; then + echo "⚠️ NetBox environment file not found at: $NETBOX_ENV_FILE" + echo "Please update the NETBOX_DOCKER_PATH variable in this script." 
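+ # Hint: if NetBox was set up with "./setup_dev.sh --netbox", the Docker
+ # environment lives in the "netbox-docker" directory cloned inside this
+ # repository checkout, e.g. (illustrative path, substitute your own):
+ # NETBOX_DOCKER_PATH="/path/to/racktables-to-netbox/netbox-docker"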
+ exit 1 +fi + +# Check if MAX_PAGE_SIZE setting exists +if grep -q "^MAX_PAGE_SIZE=" "$NETBOX_ENV_FILE"; then + # Get current value + current_value=$(grep "^MAX_PAGE_SIZE=" "$NETBOX_ENV_FILE" | cut -d= -f2) + echo "Current MAX_PAGE_SIZE setting: $current_value" + + # Check if it's already set to 0 + if [ "$current_value" == "0" ]; then + echo "✅ MAX_PAGE_SIZE is already set to 0" + else + # Ask if user wants to change it + read -p "Do you want to change MAX_PAGE_SIZE to 0? (y/n): " change_it + if [[ $change_it == "y" || $change_it == "Y" ]]; then + # Replace existing setting + sed -i 's/^MAX_PAGE_SIZE=.*/MAX_PAGE_SIZE=0/' "$NETBOX_ENV_FILE" + echo "✅ Updated MAX_PAGE_SIZE to 0" + echo "You need to restart NetBox for this change to take effect." + read -p "Do you want to restart NetBox now? (y/n): " restart_now + if [[ $restart_now == "y" || $restart_now == "Y" ]]; then + echo "Restarting NetBox..." + cd "$NETBOX_DOCKER_PATH" && docker compose restart netbox + echo "✅ NetBox has been restarted" + else + echo "⚠️ Remember to restart NetBox manually:" + echo "cd $NETBOX_DOCKER_PATH && docker compose restart netbox" + fi + else + echo "MAX_PAGE_SIZE left unchanged at: $current_value" + fi + fi +else + # Setting doesn't exist, ask to add it + echo "MAX_PAGE_SIZE setting not found in netbox.env" + read -p "Do you want to add MAX_PAGE_SIZE=0 to netbox.env? (y/n): " add_it + if [[ $add_it == "y" || $add_it == "Y" ]]; then + # Add the setting + echo "MAX_PAGE_SIZE=0" >> "$NETBOX_ENV_FILE" + echo "✅ Added MAX_PAGE_SIZE=0 to netbox.env" + echo "You need to restart NetBox for this change to take effect." + read -p "Do you want to restart NetBox now? (y/n): " restart_now + if [[ $restart_now == "y" || $restart_now == "Y" ]]; then + echo "Restarting NetBox..." + cd "$NETBOX_DOCKER_PATH" && docker compose restart netbox + echo "✅ NetBox has been restarted" + else + echo "⚠️ Remember to restart NetBox manually:" + echo "cd $NETBOX_DOCKER_PATH && docker compose restart netbox" + fi + else + echo "MAX_PAGE_SIZE setting not added" + fi +fi + +echo "" +echo "Note: MAX_PAGE_SIZE=0 is required for the Racktables to NetBox migration tool" +echo "to properly fetch all objects in a single request." 
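+
+# Optional verification (adjust the token, host, and port to your
+# environment): once MAX_PAGE_SIZE=0 is active, a request with "limit=0"
+# should return all objects in a single page instead of a paginated subset:
+#
+#   curl -s -H "Authorization: Token YOUR_TOKEN" \
+#     "http://localhost:8000/api/dcim/devices/?limit=0" | head -c 300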
diff --git a/rhevm_pull.py b/scripts/rhevm_pull.py
similarity index 100%
rename from rhevm_pull.py
rename to scripts/rhevm_pull.py
diff --git a/setup.py b/setup.py
new file mode 100644
index 0000000..fe81f17
--- /dev/null
+++ b/setup.py
@@ -0,0 +1,38 @@
+#!/usr/bin/env python3
+"""
+Setup script for the Racktables to NetBox migration tool
+"""
+from setuptools import setup, find_packages
+
+setup(
+    name="racktables-netbox-migration",
+    version="1.0.0",
+    description="Tool to migrate data from Racktables to NetBox",
+    author="Your Name",
+    author_email="your.email@example.com",
+    url="https://github.com/yourusername/racktables-netbox-migration",
+    packages=find_packages(),
+    install_requires=[
+        "pynetbox>=6.6.0",
+        "python-slugify>=5.0.0",
+        "pymysql>=1.0.0",
+        "ipaddress>=1.0.0",
+        "requests>=2.25.0"
+    ],
+    entry_points={
+        "console_scripts": [
+            "migrate-racktables=migrate_wrapper:main",
+        ],
+    },
+    classifiers=[
+        "Development Status :: 4 - Beta",
+        "Intended Audience :: System Administrators",
+        "License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
+        "Programming Language :: Python :: 3",
+        "Programming Language :: Python :: 3.6",
+        "Programming Language :: Python :: 3.7",
+        "Programming Language :: Python :: 3.8",
+        "Programming Language :: Python :: 3.9",
+    ],
+    python_requires=">=3.6",
+)
diff --git a/setup_dev.sh b/setup_dev.sh
new file mode 100644
index 0000000..d68314d
--- /dev/null
+++ b/setup_dev.sh
@@ -0,0 +1,365 @@
+#!/bin/bash
+# Enhanced setup script for development environment with all components from scratch
+
+print_usage() {
+    echo "Usage: $0 [OPTIONS]"
+    echo "Sets up the development environment for Racktables to NetBox migration tool"
+    echo ""
+    echo "Options:"
+    echo "  --netbox    Set up NetBox Docker environment with proper configuration"
+    echo "  --gitclone  Set up minimal requirements after a git clone"
+    echo "  --package   Set up for package distribution"
+    echo "  --help      Display this help message"
+    echo ""
+    echo "Without options, runs standard development environment setup"
+}
+
+# Parse arguments
+SETUP_NETBOX=false
+SETUP_GITCLONE=false
+SETUP_PACKAGE=false
+
+while [[ "$#" -gt 0 ]]; do
+    case $1 in
+        --netbox) SETUP_NETBOX=true ;;
+        --gitclone) SETUP_GITCLONE=true ;;
+        --package) SETUP_PACKAGE=true ;;
+        --help) print_usage; exit 0 ;;
+        *) echo "Unknown parameter: $1"; print_usage; exit 1 ;;
+    esac
+    shift
+done
+
+# If no options provided, run standard setup
+if [[ "$SETUP_NETBOX" == "false" && "$SETUP_GITCLONE" == "false" && "$SETUP_PACKAGE" == "false" ]]; then
+    echo "Running standard development setup..."
+    SETUP_GITCLONE=true
+fi
+
+# Function to set up NetBox Docker
+setup_netbox() {
+    echo "Setting up NetBox environment..."
+
+    # Generate secure credentials
+    echo "Generating secure credentials..."
+    SECRET_KEY=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | head -c 64)
+    API_TOKEN=$(cat /dev/urandom | tr -dc 'a-z0-9' | head -c 40)
+    SUPERUSER_PASSWORD=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | head -c 8)
+    POSTGRES_PASSWORD=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | head -c 24)
+
+    # Check for required system packages
+    echo "Checking for required system packages..."
+    if ! command -v python3 &> /dev/null; then
+        echo "Error: Python 3 is not installed. Installing required packages..."
+        sudo apt update
+        sudo apt install -y python3 python3-pip python3-venv python3-dev build-essential
+    fi
+
+    # Check if Docker is installed
+    if ! command -v docker &> /dev/null; then
+        echo "Error: Docker is not installed. Please install Docker first."
+        return 1
+    fi
+
+    # Check if the Docker Compose plugin is installed
+    # ("command -v docker compose" only tests for docker; this invokes the plugin itself)
+    if ! docker compose version &> /dev/null; then
+        echo "Error: Docker Compose is not installed. Please install Docker Compose first."
+        return 1
+    fi
+
+    # Clone NetBox Docker repository
+    if [ ! -d "netbox-docker" ]; then
+        echo "Cloning NetBox Docker repository..."
+        git clone -b release https://github.com/netbox-community/netbox-docker.git
+        cd netbox-docker || return 1
+    else
+        echo "NetBox Docker directory already exists, updating..."
+        cd netbox-docker || return 1
+        git pull
+    fi
+
+    # Create override with admin credentials and port mapping
+    echo "Creating docker-compose.override.yml with credentials..."
+    # SUPERUSER_* variables are honored by older netbox-docker releases; newer
+    # ones may require creating the admin user via "manage.py createsuperuser".
+    tee docker-compose.override.yml <<EOF
+services:
+  netbox:
+    ports:
+      - "8000:8080"
+    environment:
+      SECRET_KEY: "$SECRET_KEY"
+      SUPERUSER_NAME: "admin"
+      SUPERUSER_PASSWORD: "$SUPERUSER_PASSWORD"
+      SUPERUSER_API_TOKEN: "$API_TOKEN"
+EOF
+
+    # Start the stack, then record the generated credentials in the repo root
+    # so the other setup steps can reuse them
+    docker compose up -d
+    cd ..
+
+    echo "NETBOX_HOST=localhost" > .netbox_creds
+    echo "NETBOX_PORT=8000" >> .netbox_creds
+    echo "NETBOX_TOKEN=$API_TOKEN" >> .netbox_creds
+    echo "NETBOX_PASSWORD=$SUPERUSER_PASSWORD" >> .netbox_creds
+    echo "POSTGRES_PASSWORD=$POSTGRES_PASSWORD" >> .netbox_creds
+
+    echo "NetBox setup complete."
+    echo "Access NetBox at http://localhost:8000"
+    echo "Username: admin"
+    echo "Password: $SUPERUSER_PASSWORD"
+    echo "API Token: $API_TOKEN"
+    echo "Postgres Password: $POSTGRES_PASSWORD"
+}
+
+# Function to set up after git clone
+setup_gitclone() {
+    # Check for prerequisites
+    echo "Checking for prerequisites..."
+
+    # Make sure python3-venv is installed
+    if ! dpkg -l | grep -q python3-venv; then
+        echo "Installing python3-venv package..."
+        sudo apt update
+        sudo apt install -y python3-venv python3-pip
+    fi
+
+    if ! command -v git &>/dev/null; then
+        echo "Error: Git is required but not installed. Please install git first."
+        return 1
+    fi
+
+    if ! command -v python3 &>/dev/null; then
+        echo "Error: Python 3 is required but not installed. Please install Python 3 first."
+        return 1
+    fi
+
+    # Clone the repository if needed
+    if [ ! -d ".git" ]; then
+        TEMP_DIR="racktables-migration"
+        echo "Cloning to $TEMP_DIR..."
+        git clone https://github.com/enoch85/racktables-to-netbox.git $TEMP_DIR
+        echo "Moving files from $TEMP_DIR to current directory..."
+        cp -r $TEMP_DIR/* $TEMP_DIR/.??* . 2>/dev/null || true
+        rm -rf $TEMP_DIR
+    fi
+
+    # Create virtual environment
+    echo "Creating virtual environment..."
+    python3 -m venv venv
+
+    # Activate virtual environment
+    echo "Activating virtual environment..."
+    source venv/bin/activate
+
+    # Install dependencies
+    echo "Installing dependencies..."
+    pip install --upgrade pip
+
+    # Install all requirements
+    pip install -r requirements.txt
+
+    # Create symlink for module compatibility
+    echo "Creating symlink for racktables_netbox_migration..."
+    [ -d "migration" ] && [ ! -L "racktables_netbox_migration" ] && ln -s migration racktables_netbox_migration
+
+    # Install package in development mode
+    echo "Installing package in development mode..."
+    pip install -e .
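+
+    # Optional sanity check (illustrative addition, not in the upstream script):
+    # confirm the editable install left the migration package importable before
+    # config.py is edited below.
+    python -c "import migration" 2>/dev/null \
+        && echo "Editable install OK: 'migration' package importable" \
+        || echo "Warning: 'migration' package not importable yet"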
+
+    # Update config.py with correct credentials
+    if [ -f ".netbox_creds" ]; then
+        echo "Using NetBox credentials from setup"
+        source .netbox_creds
+        if [ -f "migration/config.py" ]; then
+            sed -i "s/NB_TOKEN = os.environ.get('NETBOX_TOKEN', '[^']*')/NB_TOKEN = os.environ.get('NETBOX_TOKEN', '${NETBOX_TOKEN}')/" migration/config.py
+            sed -i "s/'password': os.environ.get('RACKTABLES_DB_PASSWORD', 'secure-password')/'password': os.environ.get('RACKTABLES_DB_PASSWORD', 'your-database-password')/" migration/config.py
+            echo "Updated config.py with NetBox credentials"
+        fi
+    else
+        # Prompt for database credentials
+        read -p "Enter your Racktables database host: " DB_HOST
+        read -p "Enter your Racktables database username: " DB_USER
+        read -s -p "Enter your Racktables database password: " DB_PASS
+        echo ""
+        read -p "Enter your Racktables database name: " DB_NAME
+
+        # Update config.py
+        if [ -f "migration/config.py" ]; then
+            sed -i "s/'host': os.environ.get('RACKTABLES_DB_HOST', '[^']*')/'host': os.environ.get('RACKTABLES_DB_HOST', '${DB_HOST}')/" migration/config.py
+            sed -i "s/'user': os.environ.get('RACKTABLES_DB_USER', '[^']*')/'user': os.environ.get('RACKTABLES_DB_USER', '${DB_USER}')/" migration/config.py
+            sed -i "s/'password': os.environ.get('RACKTABLES_DB_PASSWORD', '[^']*')/'password': os.environ.get('RACKTABLES_DB_PASSWORD', '${DB_PASS}')/" migration/config.py
+            sed -i "s/'db': os.environ.get('RACKTABLES_DB_NAME', '[^']*')/'db': os.environ.get('RACKTABLES_DB_NAME', '${DB_NAME}')/" migration/config.py
+        fi
+    fi
+
+    echo "Git clone setup complete!"
+    echo "You can now run migrate.py:"
+    echo "python migration/migrate.py [--site SITE_NAME]"
+}
+
+# Function to set up for package distribution
+setup_package() {
+    echo "Setting up for package distribution..."
+
+    # Check if we're in a virtual environment
+    if [[ -z "$VIRTUAL_ENV" ]]; then
+        echo "Creating virtual environment for package building..."
+        python3 -m venv venv-build
+        source venv-build/bin/activate
+    fi
+
+    # Install build dependencies
+    echo "Installing build dependencies..."
+    pip install build twine wheel setuptools
+
+    # Create symlink for module compatibility
+    echo "Creating symlink for racktables_netbox_migration..."
+    [ -d "migration" ] && [ ! -L "racktables_netbox_migration" ] && ln -s migration racktables_netbox_migration
+
+    # Update version in setup.py if needed
+    if [ -f "migration/__init__.py" ]; then
+        VERSION=$(grep -o '__version__ = "[^"]*"' migration/__init__.py | cut -d'"' -f2)
+        if [ -n "$VERSION" ]; then
+            echo "Detected version: $VERSION"
+            sed -i "s/version=\"[^\"]*\"/version=\"$VERSION\"/g" setup.py 2>/dev/null || true
+        fi
+    fi
+
+    # Create necessary files for distribution
+    if [ ! -f "setup.py" ]; then
+        echo "Creating setup.py..."
+        cat > setup.py << EOF
+#!/usr/bin/env python3
+"""
+Setup script for the Racktables to NetBox migration tool
+"""
+from setuptools import setup, find_packages
+
+setup(
+    name="racktables-netbox-migration",
+    version="1.0.0",
+    description="Tool to migrate data from Racktables to NetBox",
+    author="Your Name",
+    author_email="your.email@example.com",
+    url="https://github.com/yourusername/racktables-netbox-migration",
+    packages=find_packages(),
+    install_requires=[
+        "pynetbox>=6.6.0",
+        "python-slugify>=5.0.0",
+        "pymysql>=1.0.0",
+        "ipaddress>=1.0.0",
+        "requests>=2.25.0"
+    ],
+    entry_points={
+        "console_scripts": [
+            "migrate-racktables=migration.migrate:main",
+        ],
+    },
+    python_requires=">=3.6",
+)
+EOF
+    fi
+
+    if [ ! -f "pyproject.toml" ]; then
-f "pyproject.toml" ]; then + echo "Creating pyproject.toml..." + cat > pyproject.toml << EOF +[build-system] +requires = ["setuptools>=42", "wheel"] +build-backend = "setuptools.build_meta" +EOF + fi + + # Build the package + echo "Building package..." + python -m build + + echo "Package setup complete!" +} + +# Run the selected functions +if [[ "$SETUP_NETBOX" == "true" ]]; then + setup_netbox +fi + +if [[ "$SETUP_GITCLONE" == "true" ]]; then + setup_gitclone +fi + +if [[ "$SETUP_PACKAGE" == "true" ]]; then + setup_package +fi + +echo "Setup completed successfully!"