Simple SPARQL query interface based on the original idea of kurtjx/SNORQL and adapted from the fork eccenca/SNORQL
The purpose of this project is to develop a completely new UI implementation for Snorql that uses modern web technologies (HTML5, CSS3 and jQuery) and adds new productivity features to make queries easier to retrieve and share.
Live demos of Snorql-UI: Demo 1, Demo 2
- Modern web UI built with HTML5, Bootstrap and jQuery.
- Responsive design that looks great on phones and tablets.
- CodeMirror text editor for SPARQL queries, with syntax highlighting, line numbering and bracket matching.
- SPARQL examples panel that can fetch SPARQL queries (.rq extension) from any GitHub repository on the fly and execute them against the SPARQL endpoint of your choice.
- Export query results into multiple file formats.
- Generate short URLs for your queries for easy sharing.
- No backend programming language required; it is entirely a front-end application.
- If the SPARQL queries are directly in the root of the repository, use the full URL of the repository (e.g. https://github.com/wikipathways/SPARQLQueries).
- If the SPARQL queries are inside a folder of the repository, you need to provide the GitHub API URL for that folder, which is constructed as follows.
If the URL of the folder containing the queries is, for example:
https://github.com/egonw/SARS-CoV-2-Queries/tree/main/sparql
Then the URL template you should use is:
https://api.github.com/repos/{OWNER_USER}/{REPOSITORY_NAME}/contents/{FOLDER_PATH}
And the final URL becomes like this:
https://api.github.com/repos/egonw/SARS-CoV-2-Queries/contents/sparql
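To sanity-check such an API URL before pointing the examples panel at it, you can list the folder contents from the command line. This is just a quick check against the public GitHub REST API (it assumes curl and jq are installed) using the example URL above:
# List the .rq files the examples panel would see
curl -s "https://api.github.com/repos/egonw/SARS-CoV-2-Queries/contents/sparql" | jq -r '.[].name'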
The examples panel fetches .rq files from GitHub repositories. Here's how to structure your repository:
- Use the `.rq` extension for SPARQL query files
- Use descriptive filenames (spaces allowed): `Get all metabolites.rq`
- The first-line comment becomes the query description in the panel
Organize queries into folders by category:
sparql-queries/
├── Basic/
│ ├── List all classes.rq
│ └── Count triples.rq
├── Metabolites/
│ ├── Get all metabolites.rq
│ └── Metabolites by pathway.rq
└── Advanced/
└── Federated query example.rq
- Folders become expandable nodes in the examples panel
- Files appear as clickable query items
- Nested folders are fully supported
- Alphabetical ordering within each level
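As an illustration of these conventions, the snippet below creates one example query file whose first line is the comment the panel will show as its description. The folder and file names come from the structure above; the SPARQL body is only illustrative:
# Create a category folder and one query file with a descriptive first-line comment
mkdir -p sparql-queries/Basic
cat > "sparql-queries/Basic/List all classes.rq" <<'EOF'
# List all classes used in the data
SELECT DISTINCT ?class WHERE { ?s a ?class } LIMIT 100
EOF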
- WikiPathways: https://github.com/wikipathways/SPARQLQueries
- SARS-CoV-2: https://api.github.com/repos/egonw/SARS-CoV-2-Queries/contents/sparql
- If you want to obtain a URL for your query (e.g. one generated automatically) without using the permanent-link feature, you can use the following JavaScript code:
// the SPARQL endpoint URL followed by the query variable 'q'
let endpoint = "https://sparql.wikipathways.org/?q=";
// The SPARQL query itself
let sparql = `SELECT DISTINCT ?dataset (str(?titleLit) as ?title) ?date ?license
WHERE {
?dataset a void:Linkset ;
dcterms:title ?titleLit .
OPTIONAL {
?dataset dcterms:license ?license ;
pav:createdOn ?date .
}
}`;
// create the URL from the endpoint URL and the URI-encoded query string
// (encodeURIComponent is used so characters like '&' and '#' in the query do not break the parameter)
let encodedQueryUrl = endpoint + encodeURIComponent(sparql);
// now, encodedQueryUrl can be used for your own purposes
To set up Snorql-UI manually (without Docker):
- Clone the repository
- Edit `assets/js/snorql.js` and set:
  - `_endpoint` - your SPARQL endpoint URL
  - `_examples_repo` - GitHub repo with .rq example files
- Open `index.html` in a browser or serve it via any HTTP server
To run with Docker:
- Copy the example configuration files:
cp docker-compose.example.yml docker-compose.yml
cp .env.example .env
- Edit `.env` with your settings (or edit `docker-compose.yml` directly)
- Start the services:
docker compose up -d
- Access the UI at http://localhost:8088
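Once the containers are up, a quick way to confirm the SPARQL endpoint is reachable (assuming the bundled Virtuoso on its default port 8890) is:
# Ask the endpoint for a single triple; expect a JSON result (empty bindings are fine on a fresh database)
curl -s -H "Accept: application/sparql-results+json" \
  "http://localhost:8890/sparql?query=SELECT%20%2A%20WHERE%20%7B%3Fs%20%3Fp%20%3Fo%7D%20LIMIT%201"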
The easiest way to configure Snorql-UI is with a .env file. This file serves as the single source of truth for both Docker Compose and shell scripts.
# Copy the template
cp .env.example .env
# Edit with your settings
nano .env
# Verify configuration
docker compose config
# Start services
docker compose up -d
The .env file is gitignored, so your local configuration won't be committed.
How configuration flows:
.env (single source of truth)
│
├── Docker Compose (reads .env automatically)
│ └── Container environment variables
│ └── script.sh (configures Snorql-UI at startup)
│
└── Shell scripts (via scripts/config.sh)
└── enable-cors.sh, load-rdf-example.sh, etc.
Shell scripts in scripts/ source config.sh, which automatically loads your .env file. This means you only need to edit .env once - both Docker Compose and shell scripts will use the same values.
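The loading pattern is roughly the following (a minimal sketch of what was just described; the actual scripts/config.sh may differ in detail):
# export every variable defined in .env into the current shell, if the file exists
set -a
[ -f .env ] && . ./.env
set +a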
All variables can be set in .env, exported in your shell, or hardcoded in docker-compose.yml.
Snorql-UI Settings:
| Variable | Default | Description |
|---|---|---|
| `SNORQL_CONTAINER` | `my-snorql` | Docker container name |
| `SNORQL_PORT` | `8088` | HTTP port for web interface |
| `SNORQL_ENDPOINT` | `http://localhost:8890/sparql` | SPARQL endpoint URL (as seen from browser) |
| `SNORQL_EXAMPLES_REPO` | - | GitHub repo with .rq example files |
| `SNORQL_TITLE` | `My SPARQL Explorer` | Browser tab title |
| `DEFAULT_GRAPH` | (empty) | Default RDF graph |
Virtuoso Settings:
| Variable | Default | Description |
|---|---|---|
| `VIRTUOSO_CONTAINER` | `my-virtuoso` | Docker container name |
| `VIRTUOSO_HOST` | `localhost` | Hostname for external connections |
| `VIRTUOSO_HTTP_PORT` | `8890` | HTTP/SPARQL endpoint port |
| `VIRTUOSO_ISQL_PORT` | `1111` | ISQL port for data loading |
| `VIRTUOSO_USER` | `dba` | Database admin username |
| `VIRTUOSO_PASSWORD` | `dba123` | Database admin password |
| `SPARQL_UPDATE` | `false` | Allow SPARQL UPDATE queries |
| `CORS_ORIGINS` | `*` | CORS allowed origins |
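Putting the two tables together, a minimal .env might look like this (values are illustrative only; adjust them to your deployment):
# .env - example values
SNORQL_CONTAINER=my-snorql
SNORQL_PORT=8088
SNORQL_ENDPOINT=http://localhost:8890/sparql
SNORQL_EXAMPLES_REPO=https://api.github.com/repos/egonw/SARS-CoV-2-Queries/contents/sparql
SNORQL_TITLE=My SPARQL Explorer
VIRTUOSO_CONTAINER=my-virtuoso
VIRTUOSO_HTTP_PORT=8890
VIRTUOSO_ISQL_PORT=1111
VIRTUOSO_USER=dba
VIRTUOSO_PASSWORD=change-me
SPARQL_UPDATE=false
CORS_ORIGINS=*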
You can override the SPARQL endpoint via URL parameter:
http://localhost:8088/?endpoint=http://other-endpoint/sparql
This is useful for linking to the UI with a specific endpoint pre-configured.
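If the endpoint URL itself contains query parameters or other special characters, it is safer to percent-encode it before appending it. One way to build such a link from the shell (assuming jq is installed) is:
# Percent-encode the endpoint value, then append it as the ?endpoint= parameter
ENDPOINT="http://other-endpoint/sparql"
echo "http://localhost:8088/?endpoint=$(printf %s "$ENDPOINT" | jq -sRr @uri)"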
# Start services in background
docker-compose up -d
# Stop services
docker-compose down
# View logs
docker-compose logs -f
# View logs for specific service
docker-compose logs -f snorql
# Rebuild after code changes
docker-compose up -d --build
| Port | Service |
|---|---|
| 8088 | Snorql-UI web interface |
| 8890 | Virtuoso HTTP/SPARQL endpoint |
| 1111 | Virtuoso ISQL (for data loading) |
To persist Virtuoso data between container restarts, uncomment the volumes section in docker-compose.yml:
virtuoso:
  volumes:
    - ./virtuoso-data:/database
Create the directory first: mkdir virtuoso-data
| Change | File | What to Modify |
|---|---|---|
| SPARQL endpoint | `assets/js/snorql.js` | `_endpoint` variable |
| Examples repo | `assets/js/snorql.js` | `_examples_repo` variable |
| Page title | `index.html` | `<title>` tag |
| Logo | `assets/images/` | Replace logo files |
| Footer | `index.html` | Edit footer section |
| Namespaces | `assets/js/namespaces.js` | `snorql_namespacePrefixes` object |
| Bitly token | `assets/js/script.js` | `accessToken` (line 180) |
To customize branding:
- Replace logo images in `assets/images/`
- Edit the footer section in `index.html`
- Update the page title in `index.html`
The default logo is WikiPathways-branded. For your own deployment:
- Create a logo image (recommended: 200x50 pixels, PNG format)
- Replace `assets/images/wikipathways-snorql-logo.png` with your logo
- Or update `index.html` line 40 to reference a different logo file
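If you already have a larger image, you can scale it to the recommended size with ImageMagick (an optional dependency; my-logo-original.png is a placeholder name):
# Fit the image into a 200x50 pixel box and overwrite the default logo
convert my-logo-original.png -resize 200x50 assets/images/wikipathways-snorql-logo.png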
For Docker deployments, mount your custom logo:
volumes:
  - ./my-logo.png:/usr/share/nginx/html/assets/images/wikipathways-snorql-logo.png
If using the included Virtuoso container, you can load RDF data using the example script:
# See scripts/load-rdf-example.sh for detailed instructions
./scripts/load-rdf-example.sh
Basic Virtuoso data loading via isql:
# Connect to Virtuoso container
docker exec -it my-virtuoso isql 1111 dba dba123
# Load data from URL
SPARQL LOAD <http://example.org/data.ttl> INTO GRAPH <http://example.org/graph>;
checkpoint;
quit;
For browser-based SPARQL queries to work, CORS (Cross-Origin Resource Sharing) must be enabled on Virtuoso's /sparql endpoint. Without CORS, browsers block requests from web pages (e.g., Snorql-UI at localhost:8088) to different origins (e.g., Virtuoso at localhost:8890).
After starting Virtuoso, run the CORS configuration script:
./scripts/enable-cors.sh
For production, restrict CORS to your specific domain:
CORS_ORIGINS="http://yourdomain.com" ./scripts/enable-cors.sh
The script supports these environment variables:
| Variable | Default | Description |
|---|---|---|
| `VIRTUOSO_CONTAINER` | `my-virtuoso` | Docker container name |
| `VIRTUOSO_ISQL_PORT` | `1111` | ISQL connection port |
| `VIRTUOSO_USER` | `dba` | Database username |
| `VIRTUOSO_PASSWORD` | `dba123` | Database password |
| `CORS_ORIGINS` | `*` | Allowed origins (* = all) |
For persistent configuration, set variables in your .env file. Shell scripts automatically read this file via scripts/config.sh.
Test CORS from your browser console:
fetch('http://localhost:8890/sparql?query=SELECT+*+WHERE+{?s+?p+?o}+LIMIT+1')
  .then(r => r.text())
  .then(console.log)
If CORS is working, you'll see SPARQL results. If not, you'll see a CORS error.
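You can also verify CORS from the command line. With CORS enabled, a request that carries an Origin header should get an Access-Control-Allow-Origin header back (ports as in the default setup):
# Look for the CORS response headers
curl -s -D - -o /dev/null -H "Origin: http://localhost:8088" \
  "http://localhost:8890/sparql?query=SELECT%20%2A%20WHERE%20%7B%3Fs%20%3Fp%20%3Fo%7D%20LIMIT%201" | grep -i access-control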
The repository includes production configuration files used by WikiPathways. These serve as both a reference implementation and operational documentation.
WikiPathways uses two instances (EP and EP2) for zero-downtime monthly data updates:
| Instance | Virtuoso HTTP | Virtuoso ISQL | Snorql HTTP | Snorql HTTPS |
|---|---|---|---|---|
| EP | 8895 | 1115 | 8085 | 449 |
| EP2 | 8891 | 1111 | 8084 | 446 |
- Odd months (Jan, Mar, May...): EP is live, data loads into EP2
- Even months (Feb, Apr, Jun...): EP2 is live, data loads into EP
- nginx reverse proxy routes traffic to the live instance
| File | Purpose |
|---|---|
| `docker-compose.wikipathways.yml` | EP instance configuration |
| `docker-compose.wikipathways-ep2.yml` | EP2 instance configuration |
| `scripts/wikipathways-loader.sh` | Automated monthly data loader |
# 1. Run the loader (auto-detects which instance is offline)
./scripts/wikipathways-loader.sh
# 2. Validate with GitHub Actions (run your test suite)
# 3. Switch nginx to the newly-loaded instance
sudo nano /etc/nginx/sites-enabled/wikipathways
sudo service nginx restart
Before using these files, update:
- Volume paths in the docker-compose files (default: `/home/MarvinMartens/WikiPathways-EP/`)
- `DBA_PASSWORD` from the default `dba` to a secure password
- `BASE_DIR_EP` and `BASE_DIR_EP2` in the loader script
This section guides you through setting up Snorql-UI with your own RDF data and SPARQL endpoint.
your-project/
├── db/ # Virtuoso database files
│ └── data/
│ ├── load.sh # Your customized loader script
│ └── YourData.ttl # Your RDF data file(s)
├── scripts/
│ ├── load.sh.template # Template (don't modify)
│ └── your-loader.sh # Optional: automated data fetching
├── docker-compose.yml # Your configuration
├── assets/ # Snorql UI assets
├── index.html # Snorql UI entry point
└── ...
- Copy the example configuration files:
cp docker-compose.example.yml docker-compose.yml
cp .env.example .env
- Create the data directory and loader script:
mkdir -p db/data
cp scripts/load.sh.template db/data/load.sh
chmod +x db/data/load.sh
- Customize the loader script (`db/data/load.sh`); see the sketch after this list:
  - Set `GRAPH_URI` to your named graph (e.g., `http://yourdomain.org/data/`)
  - Set `DATA_FILE` to your RDF file name
  - Add your domain-specific namespace prefixes
- Place your RDF data files in `db/data/`:
  - Supported formats: Turtle (`.ttl`), RDF/XML (`.rdf`), N-Triples (`.nt`)
- Configure `.env` with your settings:
  - `SNORQL_ENDPOINT` - your SPARQL endpoint URL
  - `SNORQL_EXAMPLES_REPO` - your GitHub queries repository
  - `SNORQL_TITLE` - browser tab title
  - `VIRTUOSO_PASSWORD` - secure password for production
- Start the services:
docker compose up -d
- Load your data into Virtuoso:
docker exec -it my-virtuoso /bin/bash
cd /database/data
./load.sh load.log dba123
exit
- Access the UI at http://localhost:8088
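As referenced in the loader-script step above, the values to adjust in db/data/load.sh typically look like this (a sketch only; load.sh.template contains the full script):
# db/data/load.sh - values to adjust
GRAPH_URI="http://yourdomain.org/data/"   # your named graph identifier
DATA_FILE="YourData.ttl"                  # your RDF file in db/data/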
- Graph URI in `db/data/load.sh` - your named graph identifier
- Namespace prefixes in `db/data/load.sh` - add your domain-specific prefixes
- `SNORQL_ENDPOINT` in `.env` - your SPARQL endpoint URL
- `SNORQL_EXAMPLES_REPO` in `.env` - your GitHub queries repository
- `SNORQL_TITLE` in `.env` - browser tab title
- `VIRTUOSO_PASSWORD` in `.env` - secure password for production
- Optional: `assets/js/namespaces.js` - for UI prefix expansion in results
# 1. Clone repository
git clone https://github.com/wikipathways/Snorql-UI.git my-sparql-ui
cd my-sparql-ui
# 2. Set up configuration
cp docker-compose.example.yml docker-compose.yml
cp .env.example .env
mkdir -p db/data
cp scripts/load.sh.template db/data/load.sh
# 3. Edit db/data/load.sh
# Change: GRAPH_URI="http://myproject.org/data/"
# Change: DATA_FILE="mydata.ttl"
# Add your namespace prefixes
# 4. Copy your data file
cp /path/to/mydata.ttl db/data/
# 5. Edit .env
# Change: SNORQL_ENDPOINT=http://localhost:8890/sparql
# Change: SNORQL_EXAMPLES_REPO=https://github.com/myorg/sparql-queries
# Change: SNORQL_TITLE=My SPARQL Explorer
# 6. Start and load
docker compose up -d
docker exec -it my-virtuoso /bin/bash -c "cd /database/data && ./load.sh load.log dba123"
# 7. Access at http://localhost:8088
For automated/scheduled data updates, use scripts/data-loader.sh as a template. This script includes:
- Download verification - Checks each file download succeeds
- Turtle validation - Uses `rapper` to validate RDF syntax before loading
- Load verification - Confirms all files reached `ll_state = 2` (success)
- Dry-run mode - Validate without loading (`--dry-run`)
# Configure for your data source
export DATA_SOURCE="http://your-data-server.org/rdf"
export DATA_FILES="mydata.ttl vocabulary.ttl"
export VIRTUOSO_CONTAINER="my-virtuoso"
export VIRTUOSO_PASSWORD="dba123"
export GRAPH_URI="http://example.org/graph/"
# Run the loader
./scripts/data-loader.sh
# Or validate only (dry run)
./scripts/data-loader.sh --dry-run
Edit the script's CONFIGURATION section to set defaults for your deployment, then schedule it with cron for automatic updates.
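For scheduled updates, a crontab entry along these lines works (the installation path is a placeholder; edit your crontab with crontab -e):
# Run the loader at 02:00 on the first day of every month and keep a log
0 2 1 * * /path/to/Snorql-UI/scripts/data-loader.sh >> /var/log/data-loader.log 2>&1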
- Multiple data files: Add multiple `ld_dir()` commands in `load.sh` or use wildcards (see the example below)
- Turtle validation: Install `raptor2-utils` (`sudo apt-get install raptor2-utils`) for syntax validation
- Federated queries: The template includes grants for SPARQL federation (SERVICE keyword)
- Namespace prefixes: Also update `assets/js/namespaces.js` so URIs display as compact QNames in the UI
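For the multiple-data-files case mentioned above, a wildcard bulk load can also be issued directly through isql. This sketch assumes the data directory is mounted at /database/data (as in the setup steps earlier), that the directory is permitted by Virtuoso's DirsAllowed setting, and uses the example graph URI from the loader configuration:
# Register every .ttl file in the directory, run the bulk loader, then persist
docker exec -i my-virtuoso isql 1111 dba dba123 <<'EOF'
ld_dir('/database/data', '*.ttl', 'http://example.org/graph/');
rdf_loader_run();
checkpoint;
EOF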