
Mac mini Docker deployment (project wrenai): wren-ai-service-1 reports AttributeError: 'NoneType' object has no attribute 'get_generator' #1790

@G337-BeiJing-Ratel

Description


Hi team,
When I start Wren, the "wren-ai-service-1" container fails with the error below:

```
2025-06-24 20:42:46 I0624 12:42:46.623 8 wren-ai-service:190] Using OpenAI API-compatible LLM: Qwen3-14B
2025-06-24 20:42:46 I0624 12:42:46.623 8 wren-ai-service:191] Using OpenAI API-compatible LLM model kwargs: {'n': 1, 'temperature': 0.6, 'top_p': 0.95, 'top_k': 20, 'response_format': {'type': 'text'}}
2025-06-24 20:42:48 ERROR: Traceback (most recent call last):
2025-06-24 20:42:48 File "/app/.venv/lib/python3.12/site-packages/starlette/routing.py", line 692, in lifespan
2025-06-24 20:42:48 async with self.lifespan_context(app) as maybe_state:
2025-06-24 20:42:48 File "/usr/local/lib/python3.12/contextlib.py", line 204, in __aenter__
2025-06-24 20:42:48 return await anext(self.gen)
2025-06-24 20:42:48 ^^^^^^^^^^^^^^^^^^^^^
2025-06-24 20:42:48 File "/app/.venv/lib/python3.12/site-packages/fastapi/routing.py", line 133, in merged_lifespan
2025-06-24 20:42:48 async with original_context(app) as maybe_original_state:
2025-06-24 20:42:48 File "/usr/local/lib/python3.12/contextlib.py", line 204, in __aenter__
2025-06-24 20:42:48 return await anext(self.gen)
2025-06-24 20:42:48 ^^^^^^^^^^^^^^^^^^^^^
2025-06-24 20:42:48 File "/app/.venv/lib/python3.12/site-packages/fastapi/routing.py", line 133, in merged_lifespan
2025-06-24 20:42:48 async with original_context(app) as maybe_original_state:
2025-06-24 20:42:48 File "/usr/local/lib/python3.12/contextlib.py", line 204, in __aenter__
2025-06-24 20:42:48 return await anext(self.gen)
2025-06-24 20:42:48 ^^^^^^^^^^^^^^^^^^^^^
2025-06-24 20:42:48 File "/src/main.py", line 32, in lifespan
2025-06-24 20:42:48 app.state.service_container = create_service_container(pipe_components, settings)
2025-06-24 20:42:48 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-24 20:42:48 File "/src/globals.py", line 53, in create_service_container
2025-06-24 20:42:48 "semantics_description": generation.SemanticsDescription(
2025-06-24 20:42:48 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-24 20:42:48 File "/src/pipelines/generation/semantics_description.py", line 217, in __init__
2025-06-24 20:42:48 "generator": llm_provider.get_generator(
2025-06-24 20:42:48 ^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-24 20:42:48 AttributeError: 'NoneType' object has no attribute 'get_generator'
2025-06-24 20:42:48
2025-06-24 20:42:48 ERROR: Application startup failed. Exiting.
2025-06-24 20:54:41 INFO: Started server process [9]
2025-06-24 20:54:41 INFO: Waiting for application startup.
```
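The traceback shows that `llm_provider` is `None` when the `semantics_description` pipeline is constructed, which typically means a pipe references a provider/model pair that was never defined in config.yaml. A minimal sketch of the failure mechanism (hypothetical names, not the actual WrenAI code):

```python
# Sketch: looking up an undefined "provider.model" reference returns None,
# and calling .get_generator() on None raises the AttributeError above.

class FakeLLMProvider:
    def get_generator(self):
        return lambda prompt: f"generated: {prompt}"

# Only one provider/model is registered.
providers = {"openai_llm.Qwen3-14B": FakeLLMProvider()}

def build_pipe(llm_ref):
    llm_provider = providers.get(llm_ref)  # None for an unknown reference
    return llm_provider.get_generator()    # AttributeError when None

gen = build_pipe("openai_llm.Qwen3-14B")   # configured reference: works

try:
    build_pipe("litellm_llm.qwen3-fast")   # not in providers -> None
except AttributeError as e:
    print(e)  # 'NoneType' object has no attribute 'get_generator'
```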

.env file:

```
COMPOSE_PROJECT_NAME=wrenai
PLATFORM=linux/amd64

PROJECT_DIR=/Users/wangpeng/ai/wrenai/WrenAI

WREN_ENGINE_PORT=8080
WREN_ENGINE_SQL_PORT=7432
WREN_AI_SERVICE_PORT=5555
WREN_UI_PORT=3000
IBIS_SERVER_PORT=8000
WREN_UI_ENDPOINT=http://wren-ui:${WREN_UI_PORT}

QDRANT_HOST=qdrant
SHOULD_FORCE_DEPLOY=1

OPENAI_API_KEY=

LLM_PROVIDER=openai_llm
LLM_OPENAI_API_KEY=1234ddfg
LLM_OPENAI_API_BASE=http://xx.xx.xx.xx:9000/v1
GENERATION_MODEL=Qwen3-14B

EMBEDDER_PROVIDER=openai_like_embedder
EMBEDDING_MODEL=BAAI/bge-m3
EMBEDDING_MODEL_DIMENSION=1024
EMBEDDER_OPENAI_API_KEY=sk-xxxxxxxxxxxxxxx
EMBEDDER_OPENAI_API_BASE=https://api.siliconflow.cn/v1/embeddings

WREN_PRODUCT_VERSION=0.24.0
WREN_ENGINE_VERSION=0.16.4
WREN_AI_SERVICE_VERSION=0.24.0
IBIS_SERVER_VERSION=0.16.4
WREN_UI_VERSION=0.29.2
WREN_BOOTSTRAP_VERSION=0.1.5

USER_UUID=

POSTHOG_API_KEY=phc_nhF32aj4xHXOZb0oqr2cn4Oy9uiWzz6CCP4KZmRq9aE
POSTHOG_HOST=https://app.posthog.com
TELEMETRY_ENABLED=true

GENERATION_MODEL=gpt-4o-mini
LANGFUSE_SECRET_KEY=
LANGFUSE_PUBLIC_KEY=

HOST_PORT=3000
AI_SERVICE_FORWARD_PORT=5555

EXPERIMENTAL_ENGINE_RUST_VERSION=false
```
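Worth noting: `GENERATION_MODEL` is assigned twice in this .env (`Qwen3-14B`, then `gpt-4o-mini`); with dotenv-style parsing the later assignment usually wins, which may not be what was intended. A quick duplicate-key scan (inline stand-in content; point `text` at the real file in practice):

```python
# Scan dotenv-style text for duplicate keys; the later value overrides
# the earlier one in most dotenv parsers.
text = """\
GENERATION_MODEL=Qwen3-14B
EMBEDDING_MODEL=BAAI/bge-m3
GENERATION_MODEL=gpt-4o-mini
"""

seen = {}
for line in text.splitlines():
    line = line.strip()
    if not line or line.startswith("#") or "=" not in line:
        continue  # skip blanks, comments, and malformed lines
    key, value = line.split("=", 1)
    if key in seen:
        print(f"duplicate {key}: {seen[key]!r} overridden by {value!r}")
    seen[key] = value
```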

config.yaml file:
```yaml
type: llm
provider: openai_llm
models:
  - api_base: http://2xx.xx9.xxx.xx:9000/v1
    model: Qwen3-14B
    alias: default
    timeout: 600
    kwargs:
      n: 1
      temperature: 0.6  # Recommended for thinking mode
      top_p: 0.95
      top_k: 20
      response_format:
        type: text

---
type: embedder
provider: openai_like_embedder
models:

---
type: engine
provider: wren_ui
endpoint: http://wren-ui:3000

---
type: engine
provider: wren_ibis
endpoint: http://ibis-server:8000

---
type: document_store
provider: qdrant
location: http://qdrant:6333
embedding_model_dim: 3072  # put your embedding model dimension here
timeout: 120
recreate_index: true

---
type: pipeline
pipes:
  - name: db_schema_indexing
    embedder: openai_like_embedder.BAAI/bge-m3
    document_store: qdrant
  - name: historical_question_indexing
    embedder: openai_like_embedder.BAAI/bge-m3
    document_store: qdrant
  - name: table_description_indexing
    embedder: openai_like_embedder.BAAI/bge-m3
    document_store: qdrant
  - name: db_schema_retrieval
    llm: openai_llm.Qwen3-14B
    embedder: openai_like_embedder.BAAI/bge-m3
    document_store: qdrant
  - name: historical_question_retrieval
    embedder: openai_like_embedder.BAAI/bge-m3
    document_store: qdrant
  - name: sql_generation
    llm: openai_llm.Qwen3-14B
    engine: wren_ui
  - name: sql_correction
    llm: openai_llm.Qwen3-14B
    engine: wren_ui
    document_store: qdrant
  - name: followup_sql_generation
    llm: openai_llm.Qwen3-14B
    engine: wren_ui
  - name: sql_answer
    llm: openai_llm.Qwen3-14B
  - name: semantics_description
    llm: openai_llm.Qwen3-14B
  - name: relationship_recommendation
    llm: openai_llm.Qwen3-14B
    engine: wren_ui
  - name: question_recommendation
    llm: openai_llm.Qwen3-14B
  - name: question_recommendation_db_schema_retrieval
    llm: openai_llm.Qwen3-14B
    embedder: openai_like_embedder.BAAI/bge-m3
    document_store: qdrant
  - name: question_recommendation_sql_generation
    llm: openai_llm.Qwen3-14B
    engine: wren_ui
  - name: chart_generation
    llm: openai_llm.Qwen3-14B
  - name: chart_adjustment
    llm: openai_llm.Qwen3-14B
  - name: intent_classification
    llm: openai_llm.Qwen3-14B
    embedder: openai_like_embedder.BAAI/bge-m3
    document_store: qdrant
  - name: misleading_assistance
    llm: openai_llm.Qwen3-14B
  - name: data_assistance
    llm: litellm_llm.qwen3-fast
  - name: sql_pairs_indexing
    document_store: qdrant
    embedder: openai_like_embedder.BAAI/bge-m3
  - name: sql_pairs_retrieval
    document_store: qdrant
    embedder: openai_like_embedder.BAAI/bge-m3
    llm: openai_llm.Qwen3-14B
  - name: preprocess_sql_data
    llm: openai_llm.Qwen3-14B
  - name: sql_executor
    engine: wren_ui
  - name: user_guide_assistance
    llm: openai_llm.Qwen3-14B
  - name: sql_question_generation
    llm: openai_llm.Qwen3-14B
  - name: sql_generation_reasoning
    llm: litellm_llm.qwen3-thinking
  - name: followup_sql_generation_reasoning
    llm: litellm_llm.qwen3-thinking
  - name: sql_regeneration
    llm: openai_llm.Qwen3-14B
    engine: wren_ui
  - name: instructions_indexing
    embedder: openai_like_embedder.BAAI/bge-m3
    document_store: qdrant
  - name: instructions_retrieval
    embedder: openai_like_embedder.BAAI/bge-m3
    document_store: qdrant
  - name: sql_functions_retrieval
    engine: wren_ibis
    document_store: qdrant
  - name: project_meta_indexing
    document_store: qdrant
  - name: sql_tables_extraction
    llm: openai_llm.Qwen3-14B

---
settings:
  engine_timeout: 30
  column_indexing_batch_size: 50
  table_retrieval_size: 10
  table_column_retrieval_size: 100
  allow_intent_classification: true
  allow_sql_generation_reasoning: true
  allow_sql_functions_retrieval: true
  enable_column_pruning: false
  max_sql_correction_retries: 3
  query_cache_maxsize: 1000
  query_cache_ttl: 3600
  langfuse_host: https://cloud.langfuse.com
  langfuse_enable: true
  logging_level: DEBUG
  development: true
  historical_question_retrieval_similarity_threshold: 0.9
  sql_pairs_similarity_threshold: 0.7
  sql_pairs_retrieval_max_size: 10
  instructions_similarity_threshold: 0.7
  instructions_top_k: 10
```
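Two things stand out in this config: the pipes `data_assistance`, `sql_generation_reasoning`, and `followup_sql_generation_reasoning` reference `litellm_llm.*` models, but the only llm block defines provider `openai_llm` with model `Qwen3-14B` (alias `default`); also, `embedding_model_dim` is 3072 while `EMBEDDING_MODEL_DIMENSION` in .env is 1024. A small sketch that cross-checks pipe references against the defined models (lists copied by hand from the config above, no YAML parser):

```python
# Cross-check llm references used by pipes against the provider.model
# (and provider.alias) pairs actually defined in config.yaml.
defined = {"openai_llm.Qwen3-14B", "openai_llm.default"}

referenced = {
    "openai_llm.Qwen3-14B",
    "litellm_llm.qwen3-fast",      # data_assistance
    "litellm_llm.qwen3-thinking",  # *_sql_generation_reasoning pipes
}

missing = sorted(referenced - defined)
print(missing)  # ['litellm_llm.qwen3-fast', 'litellm_llm.qwen3-thinking']
```

Any reference in `missing` would presumably resolve to `None` at startup, matching the `'NoneType' object has no attribute 'get_generator'` traceback.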

docker-compose.yaml:
```yaml
volumes:
  data:

networks:
  wren:
    driver: bridge

services:
  bootstrap:
    image: ghcr.io/canner/wren-bootstrap:${WREN_BOOTSTRAP_VERSION}
    restart: on-failure
    platform: ${PLATFORM}
    environment:
      DATA_PATH: /app/data
    volumes:
      - ${PROJECT_DIR}/data:/app/data
    command: /bin/sh /app/init.sh

  wren-engine:
    image: ghcr.io/canner/wren-engine:${WREN_ENGINE_VERSION}
    restart: on-failure
    platform: ${PLATFORM}
    expose:
      - ${WREN_ENGINE_PORT}
      - ${WREN_ENGINE_SQL_PORT}
    volumes:
      - ${PROJECT_DIR}/data:/usr/src/app/etc
      - ${PROJECT_DIR}/data:/app/data
    networks:
      - wren
    depends_on:
      - bootstrap

  ibis-server:
    image: ghcr.io/canner/wren-engine-ibis:${IBIS_SERVER_VERSION}
    restart: on-failure
    platform: ${PLATFORM}
    expose:
      - ${IBIS_SERVER_PORT}
    environment:
      WREN_ENGINE_ENDPOINT: http://wren-engine:${WREN_ENGINE_PORT}
    networks:
      - wren

  wren-ai-service:
    image: ghcr.io/canner/wren-ai-service:${WREN_AI_SERVICE_VERSION}
    restart: on-failure
    platform: ${PLATFORM}
    expose:
      - ${WREN_AI_SERVICE_PORT}
    ports:
      - ${AI_SERVICE_FORWARD_PORT}:${WREN_AI_SERVICE_PORT}
    environment:
      PYTHONUNBUFFERED: 1
      CONFIG_PATH: /app/data/config.yaml
    env_file:
      - ${PROJECT_DIR}/docker/.env
    volumes:
      - ${PROJECT_DIR}/docker/config.yaml:/app/data/config.yaml
      - ${PROJECT_DIR}/wren-ai-service/src:/src
    networks:
      - wren
    depends_on:
      - qdrant

  qdrant:
    image: qdrant/qdrant:v1.11.0
    restart: on-failure
    expose:
      - 6333
      - 6334
    volumes:
      - ${PROJECT_DIR}/data:/qdrant/storage
    networks:
      - wren

  wren-ui:
    image: ghcr.io/canner/wren-ui:${WREN_UI_VERSION}
    restart: on-failure
    platform: ${PLATFORM}
    environment:
      DB_TYPE: sqlite
      SQLITE_FILE: /app/data/db.sqlite3
      WREN_ENGINE_ENDPOINT: http://wren-engine:${WREN_ENGINE_PORT}
      WREN_AI_ENDPOINT: http://wren-ai-service:${WREN_AI_SERVICE_PORT}
      IBIS_SERVER_ENDPOINT: http://ibis-server:${IBIS_SERVER_PORT}
      GENERATION_MODEL: ${GENERATION_MODEL}
      WREN_ENGINE_PORT: ${WREN_ENGINE_PORT}
      WREN_AI_SERVICE_VERSION: ${WREN_AI_SERVICE_VERSION}
      WREN_UI_VERSION: ${WREN_UI_VERSION}
      WREN_ENGINE_VERSION: ${WREN_ENGINE_VERSION}
      USER_UUID: ${USER_UUID}
      POSTHOG_API_KEY: ${POSTHOG_API_KEY}
      POSTHOG_HOST: ${POSTHOG_HOST}
      TELEMETRY_ENABLED: ${TELEMETRY_ENABLED}
      NEXT_PUBLIC_USER_UUID: ${USER_UUID}
      NEXT_PUBLIC_POSTHOG_API_KEY: ${POSTHOG_API_KEY}
      NEXT_PUBLIC_POSTHOG_HOST: ${POSTHOG_HOST}
      NEXT_PUBLIC_TELEMETRY_ENABLED: ${TELEMETRY_ENABLED}
      EXPERIMENTAL_ENGINE_RUST_VERSION: ${EXPERIMENTAL_ENGINE_RUST_VERSION}
      WREN_PRODUCT_VERSION: ${WREN_PRODUCT_VERSION}
    ports:
      - ${HOST_PORT}:3000
    volumes:
      - ${PROJECT_DIR}/data:/app/data
    networks:
      - wren
    depends_on:
      - wren-ai-service
      - wren-engine
```
The version I'm using is 0.24.0.


Could you please take a look? Thank you.

Labels: bug