
Issue with ollama branch #2

Open
josekampos opened this issue Jan 14, 2025 · 2 comments

@josekampos

I'm trying to use the Ollama client, but the loaded web interface always asks me for the "OpenAI API Key". Here is my API startup log:

WARN[0000] /opt/netbox_react_agent/docker-compose.yml: the attribute version is obsolete, it will be ignored, please remove it to avoid potential confusion
[+] Running 2/0
✔ Container netbox_react_agent-ollama-1 Created 0.0s
✔ Container netbox_react_agent Created 0.0s
Attaching to netbox_react_agent, ollama-1
ollama-1 | 2025/01/14 17:28:02 routes.go:1187: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
ollama-1 | time=2025-01-14T17:28:02.403Z level=INFO source=images.go:432 msg="total blobs: 0"
ollama-1 | time=2025-01-14T17:28:02.403Z level=INFO source=images.go:439 msg="total unused blobs removed: 0"
ollama-1 | time=2025-01-14T17:28:02.404Z level=INFO source=routes.go:1238 msg="Listening on [::]:11434 (version 0.5.5-0-g32bd37a-dirty)"
ollama-1 | time=2025-01-14T17:28:02.405Z level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners="[cpu_avx cpu_avx2 cuda_v11_avx cuda_v12_avx cpu]"
ollama-1 | time=2025-01-14T17:28:02.405Z level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
ollama-1 | [GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.
ollama-1 |
ollama-1 | [GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
ollama-1 | - using env: export GIN_MODE=release
ollama-1 | - using code: gin.SetMode(gin.ReleaseMode)
ollama-1 |
ollama-1 | [GIN-debug] POST /api/pull --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)
ollama-1 | [GIN-debug] POST /api/generate --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
ollama-1 | [GIN-debug] POST /api/chat --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
ollama-1 | [GIN-debug] POST /api/embed --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
ollama-1 | [GIN-debug] POST /api/embeddings --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
ollama-1 | [GIN-debug] POST /api/create --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)
ollama-1 | [GIN-debug] POST /api/push --> github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers)
ollama-1 | [GIN-debug] POST /api/copy --> github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers)
ollama-1 | [GIN-debug] DELETE /api/delete --> github.com/ollama/ollama/server.(*Server).DeleteHandler-fm (5 handlers)
ollama-1 | [GIN-debug] POST /api/show --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers)
ollama-1 | [GIN-debug] POST /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
ollama-1 | [GIN-debug] HEAD /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
ollama-1 | [GIN-debug] GET /api/ps --> github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers)
ollama-1 | [GIN-debug] POST /v1/chat/completions --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
ollama-1 | [GIN-debug] POST /v1/completions --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
ollama-1 | [GIN-debug] POST /v1/embeddings --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
ollama-1 | [GIN-debug] GET /v1/models --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
ollama-1 | [GIN-debug] GET /v1/models/:model --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)
ollama-1 | [GIN-debug] GET / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
ollama-1 | [GIN-debug] GET /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
ollama-1 | [GIN-debug] GET /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
ollama-1 | [GIN-debug] HEAD / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
ollama-1 | [GIN-debug] HEAD /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
ollama-1 | [GIN-debug] HEAD /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
ollama-1 | time=2025-01-14T17:28:02.430Z level=INFO source=gpu.go:392 msg="no compatible GPUs were discovered"
ollama-1 | time=2025-01-14T17:28:02.430Z level=INFO source=types.go:131 msg="inference compute" id=0 library=cpu variant=avx2 compute="" driver=0.0 name="" total="15.6 GiB" available="14.1 GiB"
netbox_react_agent |
netbox_react_agent | Collecting usage statistics. To deactivate, set browser.gatherUsageStats to false.
netbox_react_agent |
netbox_react_agent |
netbox_react_agent | You can now view your Streamlit app in your browser.
netbox_react_agent |
netbox_react_agent | Local URL: http://localhost:8501
netbox_react_agent | Network URL: http://192.168.32.3:8501
netbox_react_agent | External URL: http://185.165.107.254:8501
netbox_react_agent |
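For reference, the log above shows Ollama registering its OpenAI-compatible routes (POST /v1/chat/completions and friends), so the server side can be sanity-checked directly. A minimal sketch, assuming the openai Python package is installed and a model has already been pulled (the llama3 name below is just an example; Ollama accepts any non-empty API key string):

from openai import OpenAI

# Point the standard OpenAI client at Ollama's OpenAI-compatible API
# (the /v1 routes visible in the startup log above).
client = OpenAI(
    base_url="http://localhost:11434/v1",
    api_key="ollama",  # placeholder; Ollama does not validate the key
)

response = client.chat.completions.create(
    model="llama3",  # example name; use a model you have actually pulled
    messages=[{"role": "user", "content": "Say hello"}],
)
print(response.choices[0].message.content)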

Thanks in advance

Jose

@Timewarrener

By any chance, are you running this on a Mac?

@josekampos
Author

> By any chance, are you running this on a Mac?

Hi. No, I'm trying it on a Linux box without GPU support, so I edited the compose file:

"docker-compose.yml"

version: '3.6'

networks:
ollama:

services:
ollama:
image: ollama/ollama:latest
build:
context: ./
dockerfile: ./ollama/Dockerfile
networks:
- ollama
volumes:
- ./data/ollama:/root/.ollama
ports:
- 11434:1143

environment:
  - OLLAMA_LLM_LIBRARY=cpu
  - CUDA_VISIBLE_DEVICES=
  - HIP_VISIBLE_DEVICES=
  - ROCR_VISIBLE_DEVICES=
  - GPU_DEVICE_ORDINAL=

netbox_react_agent:
image: johncapobianco/netbox_react_agent:netbox_react_agent
container_name: netbox_react_agent
restart: always
build:
context: ./
dockerfile: ./docker/Dockerfile
ports:
- "8501:8501"
depends_on:
- ollama
networks:
- ollama
environment:
- OLLAMA_URL=http://ollama:11434

No luck; the web interface still requires the OpenAI API key.
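To rule out container-to-container networking, here is a minimal sketch of a reachability check run from inside the netbox_react_agent container, assuming the requests package is available there (/api/tags is one of the routes Ollama registers in the startup log):

import os

import requests

# OLLAMA_URL comes from the compose environment; fall back to the service DNS name.
ollama_url = os.environ.get("OLLAMA_URL", "http://ollama:11434")

# /api/tags lists the models Ollama has pulled locally.
response = requests.get(f"{ollama_url}/api/tags", timeout=5)
response.raise_for_status()
print(response.json())

If this succeeds but returns an empty model list, that would match the "total blobs: 0" line in the Ollama log, i.e. no model has been pulled into the container yet.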
