9 changes: 9 additions & 0 deletions openai/.env.example
@@ -0,0 +1,9 @@
# AceDataCloud API Token
# Get yours at https://platform.acedata.cloud
ACEDATACLOUD_API_TOKEN=

# Optional: Custom API base URL (default: https://api.acedata.cloud)
# ACEDATACLOUD_API_BASE_URL=https://api.acedata.cloud

# Optional: Request timeout in seconds (default: 60)
# OPENAI_REQUEST_TIMEOUT=60
23 changes: 23 additions & 0 deletions openai/CHANGELOG.md
@@ -0,0 +1,23 @@
# Changelog

All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [0.1.0] - 2026-04-25

### Added

- Initial release of OpenAI CLI
- `chat` command for chat completions
- `complete` command for multi-turn chat from JSON messages
- `embed` command for creating text embeddings
- `imagine` command for image generation
- `edit-image` command for image editing
- `respond` command for the Responses API
- `models` command to list available models
- `config` command to show current configuration
- Support for GPT-5.x, GPT-4.x, o1, o3, o4-mini models
- Support for dall-e-3, gpt-image-1, gpt-image-1.5, gpt-image-2 image models
- Support for text-embedding-3-small, text-embedding-3-large, text-embedding-ada-002
10 changes: 10 additions & 0 deletions openai/Dockerfile
@@ -0,0 +1,10 @@
FROM python:3.12-slim

WORKDIR /app

COPY pyproject.toml README.md LICENSE ./
COPY openai_cli/ openai_cli/

RUN pip install --no-cache-dir .

ENTRYPOINT ["openai-cli"]
21 changes: 21 additions & 0 deletions openai/LICENSE
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2024 AceDataCloud

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
127 changes: 127 additions & 0 deletions openai/README.md
@@ -0,0 +1,127 @@
# OpenAI CLI

A command-line tool for accessing OpenAI models through the [AceDataCloud](https://platform.acedata.cloud) API.

## Features

- **Chat completions** — Interact with GPT-5.x, GPT-4.x, o1, o3, and o4 models
- **Responses API** — Use the OpenAI Responses API with extended model support
- **Embeddings** — Generate text embedding vectors
- **Image generation** — Create images with DALL-E 3 and GPT-Image models
- **Image editing** — Edit images using AI

## Installation

```bash
pip install openai-cli
```

## Quick Start

```bash
# Set your API token
export ACEDATACLOUD_API_TOKEN=your_token

# Chat with GPT
openai chat "What is the capital of France?"

# Use a specific model
openai chat "Explain quantum computing" -m gpt-4o

# Generate an image
openai imagine "A serene mountain landscape at sunset"

# Create embeddings
openai embed "The quick brown fox jumps over the lazy dog"

# Use the Responses API
openai respond "Summarize recent AI developments" -m o3
```

## Commands

### `openai chat`

Send a single user message and get a completion.

```bash
openai chat "Hello, how are you?" -m gpt-4o-mini
openai chat "Write a haiku" --temperature 1.5
openai chat "Be concise" -s "You are a helpful assistant"
```

### `openai complete`

Create a completion from a full JSON messages array.

```bash
openai complete '[{"role":"user","content":"Hello"}]'
openai complete '[{"role":"system","content":"Be brief"},{"role":"user","content":"Hi"}]' -m gpt-4o
```

### `openai embed`

Generate embedding vectors for text.

```bash
openai embed "The quick brown fox"
openai embed "Hello world" -m text-embedding-3-large --dimensions 256
```
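
Embedding vectors are usually consumed by comparing them, most commonly with cosine similarity. A minimal, self-contained sketch over two toy vectors standing in for real embedding output (the helper name and values are illustrative, not produced by the CLI):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors; real embeddings from text-embedding-3-small have
# hundreds to thousands of dimensions.
v1 = [0.1, 0.3, 0.5]
v2 = [0.2, 0.1, 0.4]
print(round(cosine_similarity(v1, v2), 4))
```

Values close to 1.0 indicate semantically similar texts; this is the basis of embedding-powered search and clustering.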

### `openai imagine`

Generate images from text prompts.

```bash
openai imagine "A cat on a rooftop at night"
openai imagine "Product photo of a watch" -m gpt-image-1 --quality high
openai imagine "Abstract painting" --size 1536x1024
```

### `openai edit-image`

Edit an existing image using a text prompt.

```bash
openai edit-image "Make the background white" -i https://example.com/photo.jpg
openai edit-image "Add sunglasses to the person" -i photo.jpg -m gpt-image-1
```

### `openai respond`

Use the OpenAI Responses API.

```bash
openai respond "What is 2+2?"
openai respond "Explain AI" -m o3
```

### `openai models`

List all available models.

```bash
openai models
```

### `openai config`

Show current configuration.

```bash
openai config
```

## Configuration

| Environment Variable | Description | Default |
|---|---|---|
| `ACEDATACLOUD_API_TOKEN` | Your AceDataCloud API token | (required) |
| `ACEDATACLOUD_API_BASE_URL` | API base URL | `https://api.acedata.cloud` |
| `OPENAI_REQUEST_TIMEOUT` | Request timeout in seconds | `60` |

Get your API token at [https://platform.acedata.cloud](https://platform.acedata.cloud).
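
The resolution order in the table above can be sketched in Python. This is a hypothetical helper mirroring the documented defaults, not the CLI's actual implementation:

```python
import os

def load_config() -> dict[str, object]:
    """Resolve settings from the environment, mirroring the table above."""
    token = os.environ.get("ACEDATACLOUD_API_TOKEN", "")
    if not token:
        # The token is the only required setting.
        raise RuntimeError("ACEDATACLOUD_API_TOKEN is required")
    return {
        "token": token,
        "base_url": os.environ.get(
            "ACEDATACLOUD_API_BASE_URL", "https://api.acedata.cloud"
        ),
        "timeout": float(os.environ.get("OPENAI_REQUEST_TIMEOUT", "60")),
    }
```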

## License

MIT License. See [LICENSE](LICENSE) for details.
6 changes: 6 additions & 0 deletions openai/docker-compose.yaml
@@ -0,0 +1,6 @@
services:
  openai-cli:
    build: .
    env_file:
      - .env
    entrypoint: ["openai-cli"]
1 change: 1 addition & 0 deletions openai/openai_cli/__init__.py
@@ -0,0 +1 @@
"""OpenAI CLI - access OpenAI models via the AceDataCloud API."""
5 changes: 5 additions & 0 deletions openai/openai_cli/__main__.py
@@ -0,0 +1,5 @@
"""Allow running as python -m openai_cli."""

from openai_cli.main import cli

cli()
1 change: 1 addition & 0 deletions openai/openai_cli/commands/__init__.py
@@ -0,0 +1 @@
"""OpenAI CLI commands package."""
171 changes: 171 additions & 0 deletions openai/openai_cli/commands/chat.py
@@ -0,0 +1,171 @@
"""Chat completion commands."""

import json

import click

from openai_cli.core.client import get_client
from openai_cli.core.exceptions import OpenAIError
from openai_cli.core.output import (
    CHAT_MODELS,
    DEFAULT_CHAT_MODEL,
    print_chat_result,
    print_error,
    print_json,
)


@click.command()
@click.argument("message")
@click.option(
    "-m",
    "--model",
    type=click.Choice(CHAT_MODELS),
    default=DEFAULT_CHAT_MODEL,
    help="Model to use for chat completion.",
)
@click.option(
    "-s",
    "--system",
    default=None,
    help="System prompt to set assistant behavior.",
)
@click.option(
    "-t",
    "--temperature",
    type=float,
    default=None,
    help="Sampling temperature between 0 and 2 (default: 1).",
)
@click.option(
    "--max-tokens",
    type=int,
    default=None,
    help="Maximum number of tokens to generate.",
)
@click.option(
    "-n",
    "--number",
    type=int,
    default=None,
    help="How many completion choices to generate (default: 1).",
)
@click.option("--json", "output_json", is_flag=True, help="Output raw JSON.")
@click.pass_context
def chat(
    ctx: click.Context,
    message: str,
    model: str,
    system: str | None,
    temperature: float | None,
    max_tokens: int | None,
    number: int | None,
    output_json: bool,
) -> None:
    """Send a chat message and get a completion.

    MESSAGE is the user message to send.

    \b
    Examples:
        openai chat "What is the capital of France?"
        openai chat "Explain quantum computing" -m gpt-4o
        openai chat "Write a haiku" -m gpt-5 --temperature 1.5
        openai chat "Summarize this" -s "You are a concise summarizer"
    """
    client = get_client(ctx.obj.get("token"))
    try:
        messages: list[dict[str, str]] = []
        if system:
            messages.append({"role": "system", "content": system})
        messages.append({"role": "user", "content": message})

        payload: dict[str, object] = {
            "model": model,
            "messages": messages,
            "temperature": temperature,
            "max_tokens": max_tokens,
            "n": number,
        }
        # Drop options the user did not set so server-side defaults apply.
        payload = {k: v for k, v in payload.items() if v is not None}

        result = client.chat(**payload)  # type: ignore[arg-type]
        if output_json:
            print_json(result)
        else:
            print_chat_result(result)
    except OpenAIError as e:
        print_error(e.message)
        raise SystemExit(1) from e


@click.command()
@click.argument("messages_json")
@click.option(
    "-m",
    "--model",
    type=click.Choice(CHAT_MODELS),
    default=DEFAULT_CHAT_MODEL,
    help="Model to use for chat completion.",
)
@click.option(
    "-t",
    "--temperature",
    type=float,
    default=None,
    help="Sampling temperature between 0 and 2 (default: 1).",
)
@click.option(
    "--max-tokens",
    type=int,
    default=None,
    help="Maximum number of tokens to generate.",
)
@click.option(
    "-n",
    "--number",
    type=int,
    default=None,
    help="How many completion choices to generate (default: 1).",
)
@click.option("--json", "output_json", is_flag=True, help="Output raw JSON.")
@click.pass_context
def complete(
    ctx: click.Context,
    messages_json: str,
    model: str,
    temperature: float | None,
    max_tokens: int | None,
    number: int | None,
    output_json: bool,
) -> None:
    """Create a chat completion from a JSON messages array.

    MESSAGES_JSON is a JSON array of message objects with 'role' and 'content'.

    \b
    Examples:
        openai complete '[{"role":"user","content":"Hello"}]'
        openai complete '[{"role":"system","content":"Be concise"},{"role":"user","content":"Hi"}]' -m gpt-4o
    """
    client = get_client(ctx.obj.get("token"))
    try:
        messages = json.loads(messages_json)
        if not isinstance(messages, list):
            raise ValueError("expected a JSON array of message objects")
        payload: dict[str, object] = {
            "model": model,
            "messages": messages,
            "temperature": temperature,
            "max_tokens": max_tokens,
            "n": number,
        }
        # Drop options the user did not set so server-side defaults apply.
        payload = {k: v for k, v in payload.items() if v is not None}

        result = client.chat(**payload)  # type: ignore[arg-type]
        if output_json:
            print_json(result)
        else:
            print_chat_result(result)
    except (json.JSONDecodeError, ValueError) as e:
        print_error(f"Invalid messages JSON: {e}")
        raise SystemExit(1) from e
    except OpenAIError as e:
        print_error(e.message)
        raise SystemExit(1) from e