Describe the bug
I deployed Ollama on Windows 11 and configured the embedding model as below:
type: embedder
provider: litellm_embedder
models:
  - model: ollama/nomic-embed-text:lastest
    alias: default
    api_base: http://host.docker.internal:11434
    timeout: 120
But I get the following error:
File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1586, in wrapper_async
raise e
File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1437, in wrapper_async
result = await original_function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 3577, in aembedding
raise exception_type(
^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 2301, in exception_type
raise e
File "/app/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 2270, in exception_type
raise APIConnectionError(
litellm.exceptions.APIConnectionError: litellm.APIConnectionError: OllamaException - Client error '404 Not Found' for url 'http://host.docker.internal:11434/api/embed'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
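Note: the embedder config above references the tag nomic-embed-text:lastest, while the /api/tags output in the Screenshots section lists only nomic-embed-text:latest. Ollama appears to answer requests for a non-existent model tag with exactly this kind of 404 on /api/embed. A minimal corrected embedder block, assuming the misspelled tag is the cause:

type: embedder
provider: litellm_embedder
models:
  - model: ollama/nomic-embed-text:latest
    alias: default
    api_base: http://host.docker.internal:11434
    timeout: 120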
To Reproduce
Steps to reproduce the behavior:
- Run Ollama with the default configuration
- Pull the LLM model and nomic-embed-text:latest
- Configure WrenAI according to https://docs.getwren.ai/oss/ai_service/guide/custom_llm
- Start WrenAI
- See the error in container wrenai-wren-ai-service-1
Expected behavior
The container wrenai-wren-ai-service starts properly
Screenshots
-- The following output shows that Ollama can be reached from inside Docker:
/mnt/c/Users/plowa/.wrenai$ docker run --rm curlimages/curl -H 'Content-Type: application/json' http://host.docker.internal:11434/api/tags
{"models":[
{"name":"nomic-embed-text:latest","model":"nomic-embed-text:latest","modified_at":"2025-08-20T13:59:38.0871564+08:00","size":274302450,"digest":"0a109f422b47e3a30ba2b10eca18548e944e8a23073ee3f3e947efcf3c45e59f","details":{"parent_model":"","format":"gguf","family":"nomic-bert","families":["nomic-bert"],"parameter_size":"137M","quantization_level":"F16"}},
{"name":"gemma3:270m","model":"gemma3:270m","modified_at":"2025-08-20T12:40:54.0908435+08:00","size":291554930,"digest":"e7d36fb2c3b3293cfe56d55889867a064b3a2b22e98335f2e6e8a387e081d6be","details":{"parent_model":"","format":"gguf","family":"gemma3","families":["gemma3"],"parameter_size":"268.10M","quantization_level":"Q8_0"}},
{"name":"qwen3:4b","model":"qwen3:4b","modified_at":"2025-08-19T15:30:22.9632116+08:00","size":2497293918,"digest":"e55aed6fe643f9368b2f48f8aaa56ec787b75765da69f794c0a0c23bfe7c64b2","details":{"parent_model":"","format":"gguf","family":"qwen3","families":["qwen3"],"parameter_size":"4.0B","quantization_level":"Q4_K_M"}},
{"name":"qwen2.5:7b-instruct","model":"qwen2.5:7b-instruct","modified_at":"2025-07-26T20:48:29.3137526+08:00","size":4683087332,"digest":"845dbda0ea48ed749caafd9e6037047aa19acfcfd82e704d7ca97d631a0b697e","details":{"parent_model":"","format":"gguf","family":"qwen2","families":["qwen2"],"parameter_size":"7.6B","quantization_level":"Q4_K_M"}},
{"name":"gemma3:4b","model":"gemma3:4b","modified_at":"2025-07-25T23:08:29.2829488+08:00","size":3338801804,"digest":"a2af6cc3eb7fa8be8504abaf9b04e88f17a119ec3f04a3addf55f92841195f5a","details":{"parent_model":"","format":"gguf","family":"gemma3","families":["gemma3"],"parameter_size":"4.3B","quantization_level":"Q4_K_M"}}
]}
/mnt/c/Users/plowa/.wrenai$ docker run --rm curlimages/curl -H 'Content-Type: application/json' -d '{"model": "nomic-embed-text:latest", "prompt": "Hello world"}' http://host.docker.internal:11434/api/embed
{"model":"nomic-embed-text:latest","embeddings":[]}
--- But from the WrenAI AI service I still get 404 Not Found:
Traceback (most recent call last):
File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 3562, in aembedding
response = await init_response # type: ignore
^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/llms/ollama/completion/handler.py", line 87, in ollama_aembeddings
response = await litellm.module_level_aclient.post(url=api_base, json=data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/logging_utils.py", line 135, in async_wrapper
result = await func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/llms/custom_httpx/http_handler.py", line 324, in post
raise e
File "/app/.venv/lib/python3.12/site-packages/litellm/llms/custom_httpx/http_handler.py", line 280, in post
response.raise_for_status()
File "/app/.venv/lib/python3.12/site-packages/httpx/_models.py", line 763, in raise_for_status
raise HTTPStatusError(message, request=request, response=self)
httpx.HTTPStatusError: Client error '404 Not Found' for url 'http://host.docker.internal:11434/api/embed'
Desktop (please complete the following information):
- OS: Windows 11
- Browser: Edge
Wren AI Information
- Version: 0.27.0
Additional context
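One quick way to double-check the tag spelling from the host (a sketch; assumes grep is available in the WSL shell):

docker run --rm curlimages/curl -s http://host.docker.internal:11434/api/tags | grep -o 'nomic-embed-text:[a-z]*'

If this prints only nomic-embed-text:latest, the lastest tag in config.yaml cannot resolve.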
Relevant log output
config.yaml
type: llm
provider: litellm_llm
models:
  - api_base: http://host.docker.internal:11434/v1
    alias: default
    model: ollama_chat/qwen3:4b
    timeout: 600
    kwargs:
      n: 1
      temperature: 0
---
type: embedder
provider: litellm_embedder
models:
  - model: ollama/nomic-embed-text:lastest
    alias: default
    api_base: http://host.docker.internal:11434
    timeout: 120
---
type: engine
provider: wren_ui
endpoint: http://wren-ui:3000
---
type: engine
provider: wren_ibis
endpoint: http://ibis-server:8000
---
type: document_store
provider: qdrant
location: http://qdrant:6333
embedding_model_dim: 768
timeout: 120
recreate_index: true
---
type: pipeline
pipes:
  - name: db_schema_indexing
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: historical_question_indexing
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: table_description_indexing
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: db_schema_retrieval
    llm: litellm_llm.default
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: historical_question_retrieval
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: sql_generation
    llm: litellm_llm.default
    engine: wren_ui
    document_store: qdrant
  - name: sql_correction
    llm: litellm_llm.default
    engine: wren_ui
    document_store: qdrant
  - name: followup_sql_generation
    llm: litellm_llm.default
    engine: wren_ui
    document_store: qdrant
  - name: sql_answer
    llm: litellm_llm.default
  - name: semantics_description
    llm: litellm_llm.default
  - name: relationship_recommendation
    llm: litellm_llm.default
    engine: wren_ui
  - name: question_recommendation
    llm: litellm_llm.default
  - name: question_recommendation_db_schema_retrieval
    llm: litellm_llm.default
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: question_recommendation_sql_generation
    llm: litellm_llm.default
    engine: wren_ui
    document_store: qdrant
  - name: intent_classification
    llm: litellm_llm.default
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: misleading_assistance
    llm: litellm_llm.default
  - name: data_assistance
    llm: litellm_llm.default
  - name: sql_pairs_indexing
    document_store: qdrant
    embedder: litellm_embedder.default
  - name: sql_pairs_retrieval
    document_store: qdrant
    embedder: litellm_embedder.default
    llm: litellm_llm.default
  - name: preprocess_sql_data
    llm: litellm_llm.default
  - name: sql_executor
    engine: wren_ui
  - name: chart_generation
    llm: litellm_llm.default
  - name: chart_adjustment
    llm: litellm_llm.default
  - name: user_guide_assistance
    llm: litellm_llm.default
  - name: sql_question_generation
    llm: litellm_llm.default
  - name: sql_generation_reasoning
    llm: litellm_llm.default
  - name: followup_sql_generation_reasoning
    llm: litellm_llm.default
  - name: sql_regeneration
    llm: litellm_llm.default
    engine: wren_ui
  - name: instructions_indexing
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: instructions_retrieval
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: sql_functions_retrieval
    engine: wren_ibis
    document_store: qdrant
  - name: project_meta_indexing
    document_store: qdrant
  - name: sql_tables_extraction
    llm: litellm_llm.default
---
settings:
  doc_endpoint: https://docs.getwren.ai
  is_oss: true
  engine_timeout: 30
  column_indexing_batch_size: 50
  table_retrieval_size: 10
  table_column_retrieval_size: 100
  allow_intent_classification: true
  allow_sql_generation_reasoning: true
  allow_sql_functions_retrieval: true
  enable_column_pruning: false
  max_sql_correction_retries: 3
  query_cache_maxsize: 1000
  query_cache_ttl: 3600
  langfuse_host: https://cloud.langfuse.com
  langfuse_enable: true
  logging_level: DEBUG
  development: false
  historical_question_retrieval_similarity_threshold: 0.9
  sql_pairs_similarity_threshold: 0.7
  sql_pairs_retrieval_max_size: 10
  instructions_similarity_threshold: 0.7
  instructions_top_k: 10
Starting Ibis Server...
WREN_NUM_WORKERS is not set. Using default value of 2.
Number of workers: 2
[2025-08-19 06:08:29 +0000] [7] [INFO] Starting gunicorn 23.0.0
[2025-08-19 06:08:29 +0000] [7] [INFO] Listening at: http://0.0.0.0:8000 (7)
[2025-08-19 06:08:29 +0000] [7] [INFO] Using worker: app.worker.WrenUvicornWorker
[2025-08-19 06:08:29 +0000] [8] [INFO] Booting worker with pid: 8
[2025-08-19 06:08:29 +0000] [9] [INFO] Booting worker with pid: 9
Starting Ibis Server...
WREN_NUM_WORKERS is not set. Using default value of 2.
Number of workers: 2
[2025-08-20 05:06:24 +0000] [7] [INFO] Starting gunicorn 23.0.0
[2025-08-20 05:06:24 +0000] [7] [INFO] Listening at: http://0.0.0.0:8000 (7)
[2025-08-20 05:06:24 +0000] [7] [INFO] Using worker: app.worker.WrenUvicornWorker
[2025-08-20 05:06:24 +0000] [8] [INFO] Booting worker with pid: 8
[2025-08-20 05:06:24 +0000] [9] [INFO] Booting worker with pid: 9
Starting Ibis Server...
WREN_NUM_WORKERS is not set. Using default value of 2.
Number of workers: 2
[2025-08-21 01:25:36 +0000] [6] [INFO] Starting gunicorn 23.0.0
[2025-08-21 01:25:36 +0000] [6] [INFO] Listening at: http://0.0.0.0:8000 (6)
[2025-08-21 01:25:36 +0000] [6] [INFO] Using worker: app.worker.WrenUvicornWorker
[2025-08-21 01:25:36 +0000] [7] [INFO] Booting worker with pid: 7
[2025-08-21 01:25:36 +0000] [8] [INFO] Booting worker with pid: 8
Starting Ibis Server...
WREN_NUM_WORKERS is not set. Using default value of 2.
Number of workers: 2
[2025-08-22 05:43:47 +0000] [7] [INFO] Starting gunicorn 23.0.0
[2025-08-22 05:43:47 +0000] [7] [INFO] Listening at: http://0.0.0.0:8000 (7)
[2025-08-22 05:43:47 +0000] [7] [INFO] Using worker: app.worker.WrenUvicornWorker
[2025-08-22 05:43:47 +0000] [8] [INFO] Booting worker with pid: 8
[2025-08-22 05:43:47 +0000] [9] [INFO] Booting worker with pid: 9
Starting Ibis Server...
WREN_NUM_WORKERS is not set. Using default value of 2.
Number of workers: 2
[2025-08-22 11:35:02 +0000] [7] [INFO] Starting gunicorn 23.0.0
[2025-08-22 11:35:02 +0000] [7] [INFO] Listening at: http://0.0.0.0:8000 (7)
[2025-08-22 11:35:02 +0000] [7] [INFO] Using worker: app.worker.WrenUvicornWorker
[2025-08-22 11:35:02 +0000] [8] [INFO] Booting worker with pid: 8
[2025-08-22 11:35:02 +0000] [9] [INFO] Booting worker with pid: 9
AI-SERVICE LOG
0%| | 0/1 [00:00<?, ?it/s]
W0822 13:59:38.967 8 wren-ai-service:291] Calling QdrantDocumentStore.write_documents() with empty list
INFO: 172.18.0.6:34744 - "GET /v1/semantics-preparations/f91a37d52b86f0e302421d752955d7a41f7509d1/status HTTP/1.1" 200 OK
100it [00:01, 77.18it/s]
100it [00:01, 77.15it/s]
embedding [src.pipelines.indexing.table_description.embedding()] encountered an error<
Node inputs:
{'chunk': "<Task finished name='Task-25' coro=<AsyncGraphAdap...",
'embedder': '<src.providers.embedder.litellm.AsyncDocumentEmbed...'}
Traceback (most recent call last):
File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 3562, in aembedding
response = await init_response # type: ignore
^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/llms/ollama/completion/handler.py", line 87, in ollama_aembeddings
response = await litellm.module_level_aclient.post(url=api_base, json=data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/logging_utils.py", line 135, in async_wrapper
result = await func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/llms/custom_httpx/http_handler.py", line 324, in post
raise e
File "/app/.venv/lib/python3.12/site-packages/litellm/llms/custom_httpx/http_handler.py", line 280, in post
response.raise_for_status()
File "/app/.venv/lib/python3.12/site-packages/httpx/_models.py", line 763, in raise_for_status
raise HTTPStatusError(message, request=request, response=self)
httpx.HTTPStatusError: Client error '404 Not Found' for url 'http://host.docker.internal:11434/api/embed'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 122, in new_fn
await fn(**fn_kwargs) if asyncio.iscoroutinefunction(fn) else fn(**fn_kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 219, in async_wrapper
self._handle_exception(observation, e)
File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 520, in _handle_exception
raise e
File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 217, in async_wrapper
result = await func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/src/pipelines/indexing/table_description.py", line 97, in embedding
return await embedder.run(documents=chunk["documents"])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/backoff/_async.py", line 151, in retry
ret = await target(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/src/providers/embedder/litellm.py", line 154, in run
embeddings, meta = await self._embed_batch(
^^^^^^^^^^^^^^^^^^^^^^^^
File "/src/providers/embedder/litellm.py", line 115, in _embed_batch
responses = await asyncio.gather(
^^^^^^^^^^^^^^^^^^^^^
File "/src/providers/embedder/litellm.py", line 101, in embed_single_batch
return await aembedding(
^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1586, in wrapper_async
raise e
File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1437, in wrapper_async
result = await original_function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 3577, in aembedding
raise exception_type(
^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 2301, in exception_type
raise e
File "/app/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 2270, in exception_type
raise APIConnectionError(
litellm.exceptions.APIConnectionError: litellm.APIConnectionError: OllamaException - Client error '404 Not Found' for url 'http://host.docker.internal:11434/api/embed'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
Oh no an error! Need help with Hamilton?
Join our slack and ask for help! https://join.slack.com/t/hamilton-opensource/shared_invite/zt-2niepkra8-DGKGf_tTYhXuJWBTXtIs4g
E0822 13:59:40.161 8 wren-ai-service:100] Failed to prepare semantics: litellm.APIConnectionError: OllamaException - Client error '404 Not Found' for url 'http://host.docker.internal:11434/api/embed'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
Traceback (most recent call last):
File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 3562, in aembedding
response = await init_response # type: ignore
^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/llms/ollama/completion/handler.py", line 87, in ollama_aembeddings
response = await litellm.module_level_aclient.post(url=api_base, json=data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/logging_utils.py", line 135, in async_wrapper
result = await func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/llms/custom_httpx/http_handler.py", line 324, in post
raise e
File "/app/.venv/lib/python3.12/site-packages/litellm/llms/custom_httpx/http_handler.py", line 280, in post
response.raise_for_status()
File "/app/.venv/lib/python3.12/site-packages/httpx/_models.py", line 763, in raise_for_status
raise HTTPStatusError(message, request=request, response=self)
httpx.HTTPStatusError: Client error '404 Not Found' for url 'http://host.docker.internal:11434/api/embed'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/src/web/v1/services/semantics_preparation.py", line 92, in prepare_semantics
await asyncio.gather(*tasks)
File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 219, in async_wrapper
self._handle_exception(observation, e)
File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 520, in _handle_exception
raise e
File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 217, in async_wrapper
result = await func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/src/pipelines/indexing/table_description.py", line 153, in run
return await self._pipe.execute(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 375, in execute
raise e
File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 366, in execute
outputs = await self.raw_execute(_final_vars, overrides, display_graph, inputs=inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 326, in raw_execute
raise e
File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 321, in raw_execute
results = await await_dict_of_tasks(task_dict)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 23, in await_dict_of_tasks
coroutines_gathered = await asyncio.gather(*coroutines)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 36, in process_value
return await val
^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 91, in new_fn
fn_kwargs = await await_dict_of_tasks(task_dict)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 23, in await_dict_of_tasks
coroutines_gathered = await asyncio.gather(*coroutines)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 36, in process_value
return await val
^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 91, in new_fn
fn_kwargs = await await_dict_of_tasks(task_dict)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 23, in await_dict_of_tasks
coroutines_gathered = await asyncio.gather(*coroutines)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 36, in process_value
return await val
^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 122, in new_fn
await fn(**fn_kwargs) if asyncio.iscoroutinefunction(fn) else fn(**fn_kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 219, in async_wrapper
self._handle_exception(observation, e)
File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 520, in _handle_exception
raise e
File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 217, in async_wrapper
result = await func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/src/pipelines/indexing/table_description.py", line 97, in embedding
return await embedder.run(documents=chunk["documents"])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/backoff/_async.py", line 151, in retry
ret = await target(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/src/providers/embedder/litellm.py", line 154, in run
embeddings, meta = await self._embed_batch(
^^^^^^^^^^^^^^^^^^^^^^^^
File "/src/providers/embedder/litellm.py", line 115, in _embed_batch
responses = await asyncio.gather(
^^^^^^^^^^^^^^^^^^^^^
File "/src/providers/embedder/litellm.py", line 101, in embed_single_batch
return await aembedding(
^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1586, in wrapper_async
raise e
File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1437, in wrapper_async
result = await original_function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 3577, in aembedding
raise exception_type(
^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 2301, in exception_type
raise e
File "/app/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 2270, in exception_type
raise APIConnectionError(
litellm.exceptions.APIConnectionError: litellm.APIConnectionError: OllamaException - Client error '404 Not Found' for url 'http://host.docker.internal:11434/api/embed'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
INFO: 172.18.0.6:34748 - "GET /v1/semantics-preparations/f91a37d52b86f0e302421d752955d7a41f7509d1/status HTTP/1.1" 200 OK
Forcing deployment: {'data': {'deploy': {'status': 'FAILED', 'error': 'Wren AI Error: deployment hash:f91a37d52b86f0e302421d752955d7a41f7509d1, [object Object]'}}}
embedding [src.pipelines.indexing.db_schema.embedding()] encountered an error<
Node inputs:
{'chunk': "<Task finished name='Task-11' coro=<AsyncGraphAdap...",
'embedder': '<src.providers.embedder.litellm.AsyncDocumentEmbed...'}
Traceback (most recent call last):
File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 3562, in aembedding
response = await init_response # type: ignore
^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/llms/ollama/completion/handler.py", line 87, in ollama_aembeddings
response = await litellm.module_level_aclient.post(url=api_base, json=data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/logging_utils.py", line 135, in async_wrapper
result = await func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/llms/custom_httpx/http_handler.py", line 324, in post
raise e
File "/app/.venv/lib/python3.12/site-packages/litellm/llms/custom_httpx/http_handler.py", line 280, in post
response.raise_for_status()
File "/app/.venv/lib/python3.12/site-packages/httpx/_models.py", line 763, in raise_for_status
raise HTTPStatusError(message, request=request, response=self)
httpx.HTTPStatusError: Client error '404 Not Found' for url 'http://host.docker.internal:11434/api/embed'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 122, in new_fn
await fn(**fn_kwargs) if asyncio.iscoroutinefunction(fn) else fn(**fn_kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 219, in async_wrapper
self._handle_exception(observation, e)
File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 520, in _handle_exception
raise e
File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 217, in async_wrapper
result = await func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/src/pipelines/indexing/db_schema.py", line 313, in embedding
return await embedder.run(documents=chunk["documents"])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/backoff/_async.py", line 151, in retry
ret = await target(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/src/providers/embedder/litellm.py", line 154, in run
embeddings, meta = await self._embed_batch(
^^^^^^^^^^^^^^^^^^^^^^^^
File "/src/providers/embedder/litellm.py", line 115, in _embed_batch
responses = await asyncio.gather(
^^^^^^^^^^^^^^^^^^^^^
File "/src/providers/embedder/litellm.py", line 101, in embed_single_batch
return await aembedding(
^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1586, in wrapper_async
raise e
File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1437, in wrapper_async
result = await original_function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 3577, in aembedding
raise exception_type(
^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 2301, in exception_type
raise e
File "/app/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 2270, in exception_type
raise APIConnectionError(
litellm.exceptions.APIConnectionError: litellm.APIConnectionError: OllamaException - Client error '404 Not Found' for url 'http://host.docker.internal:11434/api/embed'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
Oh no an error! Need help with Hamilton?
Join our slack and ask for help! https://join.slack.com/t/hamilton-opensource/shared_invite/zt-2niepkra8-DGKGf_tTYhXuJWBTXtIs4g
ENGINE LOG
Aug 22, 2025 11:35:01 AM io.airlift.log.Logger info
INFO: Java version: 21.0.8
2025-08-22T11:35:01.564Z INFO main io.airlift.log.Logging Logging to stderr
2025-08-22T11:35:01.592Z INFO main Bootstrap Loading configuration
2025-08-22T11:35:02.314Z INFO main org.hibernate.validator.internal.util.Version HV000001: Hibernate Validator 9.0.1.Final
2025-08-22T11:35:02.926Z INFO main Bootstrap Initializing logging
2025-08-22T11:35:03.960Z INFO main Bootstrap PROPERTY DEFAULT RUNTIME DESCRIPTION
2025-08-22T11:35:03.960Z INFO main Bootstrap http-server.compression.enabled true true
2025-08-22T11:35:03.961Z INFO main Bootstrap http-server.http2.session-receive-window-size 16MB 16MB Initial size of session's flow control receive window for HTTP/2
2025-08-22T11:35:03.961Z INFO main Bootstrap http-server.http2.stream-receive-window-size 16MB 16MB Initial size of stream's flow control receive window for HTTP/2
2025-08-22T11:35:03.961Z INFO main Bootstrap http-server.http2.input-buffer-size 8kB 8kB Size of the buffer used to read from the network for HTTP/2
2025-08-22T11:35:03.961Z INFO main Bootstrap http-server.http2.max-concurrent-streams 16384 16384 Maximum concurrent streams per connection for HTTP/2
2025-08-22T11:35:03.961Z INFO main Bootstrap http-server.http2.stream-idle-timeout 15.00s 15.00s
2025-08-22T11:35:03.961Z INFO main Bootstrap http-server.accept-queue-size 8000 8000
2025-08-22T11:35:03.961Z INFO main Bootstrap http-server.http.acceptor-threads ---- ----
2025-08-22T11:35:03.961Z INFO main Bootstrap http-server.http.enabled true true
2025-08-22T11:35:03.961Z INFO main Bootstrap http-server.http.port 8080 8080
2025-08-22T11:35:03.961Z INFO main Bootstrap http-server.http.selector-threads ---- ----
2025-08-22T11:35:03.961Z INFO main Bootstrap http-server.https.acceptor-threads ---- ----
2025-08-22T11:35:03.961Z INFO main Bootstrap http-server.https.enabled false false
2025-08-22T11:35:03.962Z INFO main Bootstrap http-server.https.selector-threads ---- ----
2025-08-22T11:35:03.962Z INFO main Bootstrap http-server.log.compression.enabled true true
2025-08-22T11:35:03.962Z INFO main Bootstrap http-server.log.enabled true true
2025-08-22T11:35:03.962Z INFO main Bootstrap http-server.log.max-history 15 15
2025-08-22T11:35:03.962Z INFO main Bootstrap http-server.log.immediate-flush false false
2025-08-22T11:35:03.962Z INFO main Bootstrap http-server.log.max-size 100MB 100MB
2025-08-22T11:35:03.962Z INFO main Bootstrap http-server.log.path var/log/http-request.log var/log/http-request.log
2025-08-22T11:35:03.962Z INFO main Bootstrap http-server.log.queue-size 10000 10000
2025-08-22T11:35:03.962Z INFO main Bootstrap http-server.max-request-header-size ---- ----
2025-08-22T11:35:03.962Z INFO main Bootstrap http-server.max-response-header-size ---- ----
2025-08-22T11:35:03.962Z INFO main Bootstrap http-server.threads.max 200 200
2025-08-22T11:35:03.962Z INFO main Bootstrap http-server.threads.min 2 2
2025-08-22T11:35:03.962Z INFO main Bootstrap http-server.net.max-idle-time 200.00s 200.00s
2025-08-22T11:35:03.962Z INFO main Bootstrap http-server.output-buffer-size ---- ----
2025-08-22T11:35:03.962Z INFO main Bootstrap http-server.process-forwarded REJECT REJECT Process Forwarded and X-Forwarded headers (for proxied environments)
2025-08-22T11:35:03.962Z INFO main Bootstrap http-server.show-stack-trace true true Show the stack trace when generating an error response
2025-08-22T11:35:03.962Z INFO main Bootstrap http-server.threads.max-idle-time 1.00m 1.00m
2025-08-22T11:35:03.962Z INFO main Bootstrap node.annotation-file ---- ----
2025-08-22T11:35:03.962Z INFO main Bootstrap node.binary-spec ---- ----
2025-08-22T11:35:03.962Z INFO main Bootstrap node.config-spec ---- ----
2025-08-22T11:35:03.962Z INFO main Bootstrap node.environment ---- production
2025-08-22T11:35:03.963Z INFO main Bootstrap node.internal-address-source IP IP
2025-08-22T11:35:03.963Z INFO main Bootstrap node.location ---- ----
2025-08-22T11:35:03.963Z INFO main Bootstrap node.bind-ip ---- ----
2025-08-22T11:35:03.963Z INFO main Bootstrap node.external-address ---- ----
2025-08-22T11:35:03.963Z INFO main Bootstrap node.id ---- ----
2025-08-22T11:35:03.963Z INFO main Bootstrap node.internal-address ---- ----
2025-08-22T11:35:03.963Z INFO main Bootstrap node.pool general general
2025-08-22T11:35:03.963Z INFO main Bootstrap duckdb.cache-task-retry-delay 60 60
2025-08-22T11:35:03.963Z INFO main Bootstrap duckdb.home-directory ---- ----
2025-08-22T11:35:03.963Z INFO main Bootstrap duckdb.max-cache-query-timeout 20 20
2025-08-22T11:35:03.963Z INFO main Bootstrap duckdb.max-concurrent-tasks 10 10
2025-08-22T11:35:03.963Z INFO main Bootstrap duckdb.memory-limit 268435456B 268435456B
2025-08-22T11:35:03.963Z INFO main Bootstrap duckdb.temp-directory /tmp/duck /tmp/duck
2025-08-22T11:35:03.963Z INFO main Bootstrap duckdb.connector.init-sql-path etc/duckdb/init.sql etc/duckdb/init.sql
2025-08-22T11:35:03.963Z INFO main Bootstrap duckdb.connector.session-sql-path etc/duckdb/session.sql etc/duckdb/session.sql
2025-08-22T11:35:03.963Z INFO main Bootstrap bigquery.bucket-name ---- ---- The Google Cloud bucket name used to temporarily store the metric cached results
2025-08-22T11:35:03.963Z INFO main Bootstrap bigquery.credentials-file ---- ---- The path to the JSON credentials file
2025-08-22T11:35:03.963Z INFO main Bootstrap bigquery.credentials-key [REDACTED] [REDACTED] The base64 encoded credentials key
2025-08-22T11:35:03.963Z INFO main Bootstrap bigquery.location ---- ---- The Google Cloud Project ID where the data reside
2025-08-22T11:35:03.963Z INFO main Bootstrap bigquery.metadata.schema.prefix Wren needs to create two schemas in BigQuery: wren_temp, pg_catalog. This is a config to add a prefix to the names of these two schemas if it's set.
2025-08-22T11:35:03.963Z INFO main Bootstrap bigquery.project-id ---- ---- The Google Cloud Project ID where the data reside
2025-08-22T11:35:03.963Z INFO main Bootstrap duckdb.storage.access-key [REDACTED] [REDACTED] The storage access key
2025-08-22T11:35:03.964Z INFO main Bootstrap duckdb.storage.endpoint storage.googleapis.com storage.googleapis.com The storage endpoint; default is storage.googleapis.com
2025-08-22T11:35:03.964Z INFO main Bootstrap duckdb.storage.region ---- ---- The storage region
2025-08-22T11:35:03.964Z INFO main Bootstrap duckdb.storage.secret-key [REDACTED] [REDACTED] The storage secret key
2025-08-22T11:35:03.964Z INFO main Bootstrap duckdb.storage.url-style path path The storage url style; default is path
2025-08-22T11:35:03.964Z INFO main Bootstrap postgres.jdbc.url ---- ----
2025-08-22T11:35:03.964Z INFO main Bootstrap postgres.password ---- ----
2025-08-22T11:35:03.964Z INFO main Bootstrap postgres.user ---- ----
2025-08-22T11:35:03.964Z INFO main Bootstrap pg-wire-protocol.auth.file etc/accounts etc/accounts
2025-08-22T11:35:03.964Z INFO main Bootstrap pg-wire-protocol.netty.thread.count 0 0
2025-08-22T11:35:03.964Z INFO main Bootstrap pg-wire-protocol.enabled false false
2025-08-22T11:35:03.964Z INFO main Bootstrap pg-wire-protocol.port 7432 7432
2025-08-22T11:35:03.964Z INFO main Bootstrap pg-wire-protocol.ssl.enabled false false
2025-08-22T11:35:03.964Z INFO main Bootstrap sqlglot.port 8000 8000
2025-08-22T11:35:03.965Z INFO main Bootstrap snowflake.database ---- ----
2025-08-22T11:35:03.965Z INFO main Bootstrap snowflake.jdbc.url ---- ----
2025-08-22T11:35:03.965Z INFO main Bootstrap snowflake.password ---- ----
2025-08-22T11:35:03.965Z INFO main Bootstrap snowflake.role ---- ----
2025-08-22T11:35:03.965Z INFO main Bootstrap snowflake.schema ---- ----
2025-08-22T11:35:03.965Z INFO main Bootstrap snowflake.user ---- ----
2025-08-22T11:35:03.965Z INFO main Bootstrap snowflake.warehouse ---- ----
2025-08-22T11:35:03.965Z INFO main Bootstrap wren.datasource.type DUCKDB DUCKDB
2025-08-22T11:35:03.965Z INFO main Bootstrap wren.experimental-enable-dynamic-fields true true
2025-08-22T11:35:03.965Z INFO main Bootstrap wren.directory etc/mdl etc/mdl
2025-08-22T11:35:05.068Z INFO main io.wren.base.client.duckdb.DuckdbClient Append session SQL to connection init SQL
2025-08-22T11:35:05.079Z INFO main com.zaxxer.hikari.HikariDataSource DUCKDB_POOL - Starting...
2025-08-22T11:35:05.094Z INFO main com.zaxxer.hikari.pool.PoolBase DUCKDB_POOL - Driver does not support get/set network timeout for connections. (getNetworkTimeout)
2025-08-22T11:35:05.106Z INFO main com.zaxxer.hikari.pool.HikariPool DUCKDB_POOL - Added connection org.duckdb.DuckDBConnection@773c0293
2025-08-22T11:35:05.107Z INFO main com.zaxxer.hikari.HikariDataSource DUCKDB_POOL - Start completed.
2025-08-22T11:35:05.108Z INFO main io.wren.base.client.duckdb.DuckdbClient Initialize by init SQL
2025-08-22T11:36:24.356Z INFO main com.zaxxer.hikari.HikariDataSource DUCKDB_POOL - Starting...
2025-08-22T11:36:24.356Z INFO main com.zaxxer.hikari.pool.PoolBase DUCKDB_POOL - Driver does not support get/set network timeout for connections. (getNetworkTimeout)
2025-08-22T11:36:24.359Z INFO main com.zaxxer.hikari.pool.HikariPool DUCKDB_POOL - Added connection org.duckdb.DuckDBConnection@2b8d084
2025-08-22T11:36:24.359Z INFO main com.zaxxer.hikari.HikariDataSource DUCKDB_POOL - Start completed.
2025-08-22T11:36:24.360Z INFO main io.wren.base.client.duckdb.DuckdbClient Set memory limit to 268435456B
2025-08-22T11:36:24.361Z INFO main io.wren.base.client.duckdb.DuckdbClient Set temp directory to /tmp/duck
2025-08-22T11:36:24.587Z INFO main org.eclipse.jetty.server.Server jetty-12.0.23; built: 2025-07-02T14:02:02.445Z; git: 01a4119797e9cee53c974ae126cc316d0c8a533a; jvm 21.0.8+9-LTS
2025-08-22T11:36:24.620Z INFO main org.eclipse.jetty.server.handler.ContextHandler Started oeje10s.ServletContextHandler@10cd6753{ROOT,/,b=null,a=AVAILABLE,vh=[@https,@http],h=GzipHandler@71ad3d8a{STARTED,min=32,inflate=-1}}
2025-08-22T11:36:25.011Z WARN main org.glassfish.jersey.internal.Errors The following warnings have been detected: WARNING: A HTTP GET method, public void io.wren.main.web.AnalysisResourceV2.getSqlAnalysisBatch(io.wren.main.web.dto.SqlAnalysisInputBatchDto,jakarta.ws.rs.container.AsyncResponse), should not consume any entity.
WARNING: A HTTP GET method, public void io.wren.main.web.AnalysisResourceV2.getSqlAnalysis(io.wren.main.web.dto.SqlAnalysisInputDtoV2,jakarta.ws.rs.container.AsyncResponse), should not consume any entity.
WARNING: A HTTP GET method, public void io.wren.main.web.AnalysisResource.getSqlAnalysis(io.wren.main.web.dto.SqlAnalysisInputDto,jakarta.ws.rs.container.AsyncResponse), should not consume any entity.
WARNING: A HTTP GET method, public void io.wren.main.web.MDLResourceV2.dryPlan(io.wren.main.web.dto.DryPlanDtoV2,jakarta.ws.rs.container.AsyncResponse), should not consume any entity.
WARNING: A HTTP GET method, public void io.wren.main.web.MDLResource.dryRun(io.wren.main.web.dto.PreviewDto,jakarta.ws.rs.container.AsyncResponse), should not consume any entity.
WARNING: A HTTP GET method, public void io.wren.main.web.MDLResource.preview(io.wren.main.web.dto.PreviewDto,jakarta.ws.rs.container.AsyncResponse), should not consume any entity.
WARNING: A HTTP GET method, public void io.wren.main.web.MDLResource.dryPlan(io.wren.main.web.dto.DryPlanDto,jakarta.ws.rs.container.AsyncResponse), should not consume any entity.
2025-08-22T11:36:25.013Z INFO main org.eclipse.jetty.ee10.servlet.ServletContextHandler Started oeje10s.ServletContextHandler@10cd6753{ROOT,/,b=null,a=AVAILABLE,vh=[@https,@http],h=GzipHandler@71ad3d8a{STARTED,min=32,inflate=-1}}
2025-08-22T11:36:25.021Z INFO main org.eclipse.jetty.server.AbstractConnector Started http@c2df90e{HTTP/1.1, (http/1.1, h2c)}{0.0.0.0:8080}
2025-08-22T11:36:25.038Z INFO main org.eclipse.jetty.server.Server Started oejs.Server@3c35c345{STARTING}[12.0.23,sto=0] @83927ms
2025-08-22T11:36:25.713Z INFO pool-1-thread-1 io.wren.main.PreviewService Planned SQL: WITH
"Customer" AS (
SELECT "Customer"."custkey" "custkey"
FROM
(
SELECT "Customer"."custkey" "custkey"
FROM
(
SELECT "custkey" "custkey"
FROM
"tpch"."tiny"."customer" "Customer"
) "Customer"
) "Customer"
)
, "Orders" AS (
SELECT
"Orders"."orderkey" "orderkey"
, "Orders"."custkey" "custkey"
, "Orders"."double_key" "double_key"
, "Orders_relationsub"."customer_key" "customer_key"
FROM
(
SELECT
"Orders"."orderkey" "orderkey"
, "Orders"."custkey" "custkey"
, (orderkey * 2) "double_key"
FROM
(
SELECT
"orderkey" "orderkey"
, "custkey" "custkey"
FROM
(
SELECT *
FROM
tpch.tiny.orders
) "Orders"
) "Orders"
) "Orders"
LEFT JOIN (
SELECT
"Orders"."null"
, "Customer"."custkey" "customer_key"
FROM
(
SELECT
"orderkey" "orderkey"
, "custkey" "custkey"
FROM
(
SELECT *
FROM
tpch.tiny.orders
) "Orders"
) "Orders"
LEFT JOIN "Customer" ON ("Orders"."custkey" = "Customer"."custkey")
) "Orders_relationsub" ON ("Orders"."null" = "Orders_relationsub"."null")
)
SELECT
orderkey
, double_key
, customer_key
FROM
Orders
2025-08-22T11:36:25.713Z INFO pool-1-thread-1 io.wren.main.PreviewService Warm up done
2025-08-22T11:36:25.713Z INFO main io.wren.main.server.Server ======== SERVER STARTED ========
UI LOG
Using SQLite
Already up to date
▲ Next.js 14.2.30
- Local: http://localhost:3000
- Network: http://0.0.0.0:3000
✓ Starting...
✓ Ready in 281ms
[2025-08-22T06:02:16.459] [INFO] TELEMETRY - Telemetry initialized: a5fa4cd8-8599-4d7d-81ff-03540868ea0f
using sqlite
[2025-08-22T06:02:16.510] [INFO] PRQ Background Tracker - Recommend question background tracker started
[2025-08-22T06:02:16.510] [INFO] AskingService - Background tracker started
[2025-08-22T06:02:16.510] [INFO] ChartBackgroundTracker - Chart background tracker started
[2025-08-22T06:02:16.511] [INFO] ChartBackgroundTracker - Chart adjustment background tracker started
[2025-08-22T06:02:16.511] [INFO] TRQ Background Tracker - Recommend question background tracker started
[2025-08-22T06:02:16.511] [INFO] PRQ Background Tracker - Recommend question background tracker started
[2025-08-22T06:02:16.511] [INFO] TRQ Background Tracker - Recommend question background tracker started
[2025-08-22T06:02:16.512] [INFO] DashboardCacheBackgroundTracker - Dashboard cache background tracker started
[2025-08-22T06:02:16.517] [INFO] AskingService - Initialization: adding unfininshed breakdown thread responses (total: 0) to background tracker
Persisted queries are enabled and are using an unbounded cache. Your server is vulnerable to denial of service attacks via memory exhaustion. Set cache: "bounded"
or persistedQueries: false
in your ApolloServer constructor, or see https://go.apollo.dev/s/cache-backends for other alternatives.
sendEvent graphql_error_failed {
originalErrorStack: undefined,
originalErrorMessage: undefined,
errorMessage: "Cannot read properties of null (reading 'hash')"
} UNKNOWN false
Using SQLite
Already up to date
▲ Next.js 14.2.30
- Local: http://localhost:3000
- Network: http://0.0.0.0:3000
✓ Starting...
✓ Ready in 684ms
[2025-08-22T11:50:42.618] [INFO] TELEMETRY - Telemetry initialized: 302f8db3-ac0d-4f5d-8ed2-2b9d15f63290
using sqlite
[2025-08-22T11:50:42.676] [INFO] PRQ Background Tracker - Recommend question background tracker started
[2025-08-22T11:50:42.676] [INFO] AskingService - Background tracker started
[2025-08-22T11:50:42.676] [INFO] ChartBackgroundTracker - Chart background tracker started
[2025-08-22T11:50:42.677] [INFO] ChartBackgroundTracker - Chart adjustment background tracker started
[2025-08-22T11:50:42.677] [INFO] TRQ Background Tracker - Recommend question background tracker started
[2025-08-22T11:50:42.677] [INFO] PRQ Background Tracker - Recommend question background tracker started
[2025-08-22T11:50:42.677] [INFO] TRQ Background Tracker - Recommend question background tracker started
[2025-08-22T11:50:42.677] [INFO] DashboardCacheBackgroundTracker - Dashboard cache background tracker started
[2025-08-22T11:50:42.685] [INFO] AskingService - Initialization: adding unfininshed breakdown thread responses (total: 0) to background tracker
Persisted queries are enabled and are using an unbounded cache. Your server is vulnerable to denial of service attacks via memory exhaustion. Set cache: "bounded"
or persistedQueries: false
in your ApolloServer constructor, or see https://go.apollo.dev/s/cache-backends for other alternatives.
[2025-08-22T11:50:42.795] [DEBUG] DeployService - Deploying model, hash: f91a37d52b86f0e302421d752955d7a41f7509d1
[2025-08-22T11:50:42.845] [DEBUG] WrenAIAdaptor - Wren AI: Deploying wren AI, hash: f91a37d52b86f0e302421d752955d7a41f7509d1, deployId: f91a37d52b86f0e302421d752955d7a41f7509d1
[2025-08-22T11:50:42.858] [DEBUG] WrenAIAdaptor - Wren AI: Deploy status: INDEXING
[2025-08-22T11:50:43.862] [DEBUG] WrenAIAdaptor - Wren AI: Deploy status: INDEXING
[2025-08-22T11:50:45.865] [DEBUG] WrenAIAdaptor - Wren AI: Deploy status: INDEXING
[2025-08-22T11:50:48.869] [DEBUG] WrenAIAdaptor - Wren AI: Deploy status: INDEXING
[2025-08-22T11:50:52.873] [DEBUG] WrenAIAdaptor - Wren AI: Deploy status: INDEXING
[2025-08-22T11:50:57.800] [DEBUG] WrenAIAdaptor - Wren AI: Deploy status: INDEXING
[2025-08-22T11:51:03.804] [DEBUG] WrenAIAdaptor - Wren AI: Deploy status: INDEXING
sendEvent modeling_deploy_mdl_failed {
mdl: {
schema: 'public',
catalog: 'wrenai',
dataSource: 'DUCKDB',
models: [
[Object], [Object],
[Object], [Object],
[Object], [Object],
[Object], [Object],
[Object]
],
relationships: [
[Object], [Object],
[Object], [Object],
[Object], [Object],
[Object], [Object],
[Object]
],
views: [],
enumDefinitions: undefined
},
error: 'Wren AI: Deploy wren AI failed or timeout, hash: f91a37d52b86f0e302421d752955d7a41f7509d1'
} AI false
[2025-08-22T12:12:04.330] [DEBUG] DeployService - Deploying model, hash: f91a37d52b86f0e302421d752955d7a41f7509d1
[2025-08-22T12:12:04.379] [DEBUG] WrenAIAdaptor - Wren AI: Deploying wren AI, hash: f91a37d52b86f0e302421d752955d7a41f7509d1, deployId: f91a37d52b86f0e302421d752955d7a41f7509d1
[2025-08-22T12:12:04.392] [DEBUG] WrenAIAdaptor - Wren AI: Deploy status: INDEXING
[2025-08-22T12:12:06.175] [DEBUG] WrenAIAdaptor - Wren AI: Deploy status: INDEXING
[2025-08-22T12:12:08.185] [DEBUG] WrenAIAdaptor - Got error in API /v1/semantics-preparations/f91a37d52b86f0e302421d752955d7a41f7509d1/status: [object Object]
[2025-08-22T12:12:08.185] [DEBUG] WrenAIAdaptor - Got error when deploying to wren AI, hash: f91a37d52b86f0e302421d752955d7a41f7509d1. Error: [object Object]
sendEvent modeling_deploy_mdl_failed {
mdl: {
schema: 'public',
catalog: 'wrenai',
dataSource: 'DUCKDB',
models: [
[Object], [Object],
[Object], [Object],
[Object], [Object],
[Object], [Object],
[Object]
],
relationships: [
[Object], [Object],
[Object], [Object],
[Object], [Object],
[Object], [Object],
[Object]
],
views: [],
enumDefinitions: undefined
},
error: 'Wren AI Error: deployment hash:f91a37d52b86f0e302421d752955d7a41f7509d1, [object Object]'
} AI false
[2025-08-22T13:57:04.491] [DEBUG] DeployService - Deploying model, hash: f91a37d52b86f0e302421d752955d7a41f7509d1
[2025-08-22T13:57:04.545] [DEBUG] WrenAIAdaptor - Wren AI: Deploying wren AI, hash: f91a37d52b86f0e302421d752955d7a41f7509d1, deployId: f91a37d52b86f0e302421d752955d7a41f7509d1
[2025-08-22T13:57:04.564] [DEBUG] WrenAIAdaptor - Wren AI: Deploy status: INDEXING
[2025-08-22T13:57:05.667] [DEBUG] WrenAIAdaptor - Wren AI: Deploy status: INDEXING
[2025-08-22T13:57:07.673] [DEBUG] WrenAIAdaptor - Got error in API /v1/semantics-preparations/f91a37d52b86f0e302421d752955d7a41f7509d1/status: [object Object]
[2025-08-22T13:57:07.673] [DEBUG] WrenAIAdaptor - Got error when deploying to wren AI, hash: f91a37d52b86f0e302421d752955d7a41f7509d1. Error: [object Object]
sendEvent modeling_deploy_mdl_failed {
mdl: {
schema: 'public',
catalog: 'wrenai',
dataSource: 'DUCKDB',
models: [
[Object], [Object],
[Object], [Object],
[Object], [Object],
[Object], [Object],
[Object]
],
relationships: [
[Object], [Object],
[Object], [Object],
[Object], [Object],
[Object], [Object],
[Object]
],
views: [],
enumDefinitions: undefined
},
error: 'Wren AI Error: deployment hash:f91a37d52b86f0e302421d752955d7a41f7509d1, [object Object]'
} AI false
[2025-08-22T13:59:37.610] [DEBUG] DeployService - Deploying model, hash: f91a37d52b86f0e302421d752955d7a41f7509d1
[2025-08-22T13:59:37.630] [DEBUG] WrenAIAdaptor - Wren AI: Deploying wren AI, hash: f91a37d52b86f0e302421d752955d7a41f7509d1, deployId: f91a37d52b86f0e302421d752955d7a41f7509d1
[2025-08-22T13:59:37.645] [DEBUG] WrenAIAdaptor - Wren AI: Deploy status: INDEXING
[2025-08-22T13:59:38.970] [DEBUG] WrenAIAdaptor - Wren AI: Deploy status: INDEXING
[2025-08-22T13:59:40.979] [DEBUG] WrenAIAdaptor - Got error in API /v1/semantics-preparations/f91a37d52b86f0e302421d752955d7a41f7509d1/status: [object Object]
[2025-08-22T13:59:40.979] [DEBUG] WrenAIAdaptor - Got error when deploying to wren AI, hash: f91a37d52b86f0e302421d752955d7a41f7509d1. Error: [object Object]
sendEvent modeling_deploy_mdl_failed {
mdl: {
schema: 'public',
catalog: 'wrenai',
dataSource: 'DUCKDB',
models: [
[Object], [Object],
[Object], [Object],
[Object], [Object],
[Object], [Object],
[Object]
],
relationships: [
[Object], [Object],
[Object], [Object],
[Object], [Object],
[Object], [Object],
[Object]
],
views: [],
enumDefinitions: undefined
},
error: 'Wren AI Error: deployment hash:f91a37d52b86f0e302421d752955d7a41f7509d1, [object Object]'
} AI false