Atoma's Python SDK


SDK Installation

Tip

To finish publishing your SDK to PyPI you must run your first generation action.

The SDK can be installed with either the pip or poetry package manager.

PIP

PIP is the default package installer for Python, enabling easy installation and management of packages from PyPI via the command line.

pip install git+https://github.com/atoma-network/atoma-sdk-python.git

Poetry

Poetry is a modern tool that simplifies dependency management and package publishing by using a single pyproject.toml file to handle project metadata and dependencies.

poetry add git+https://github.com/atoma-network/atoma-sdk-python.git

IDE Support

PyCharm

Generally, the SDK will work well with most IDEs out of the box. However, when using PyCharm, you can enjoy much better integration with Pydantic by installing an additional plugin.

SDK Example Usage

Example

# Synchronous Example
from atoma_sdk import AtomaSDK
import os

with AtomaSDK(
    bearer_auth=os.getenv("ATOMASDK_BEARER_AUTH", ""),
) as atoma_sdk:

    res = atoma_sdk.chat.create(
        messages=[
            {
                "content": "Hello! How can you help me today?",
                "role": "user",
            },
        ],
        model="meta-llama/Llama-3.3-70B-Instruct",
        frequency_penalty=0, max_tokens=2048, n=1, presence_penalty=0,
        seed=123, stop=["stop", "halt"], temperature=0.7, top_p=1,
        user="user-1234",
    )

    # Handle response
    print(res)

The same SDK client can also be used to make asynchronous requests by importing asyncio.

# Asynchronous Example
import asyncio
from atoma_sdk import AtomaSDK
import os

async def main():
    async with AtomaSDK(
        bearer_auth=os.getenv("ATOMASDK_BEARER_AUTH", ""),
    ) as atoma_sdk:

        res = await atoma_sdk.chat.create_async(
            messages=[
                {
                    "content": "Hello! How can you help me today?",
                    "role": "user",
                },
            ],
            model="meta-llama/Llama-3.3-70B-Instruct",
            frequency_penalty=0, max_tokens=2048, n=1, presence_penalty=0,
            seed=123, stop=["stop", "halt"], temperature=0.7, top_p=1,
            user="user-1234",
        )

        # Handle response
        print(res)

asyncio.run(main())

Authentication

Per-Client Security Schemes

This SDK supports the following security scheme globally:

Name         Type  Scheme       Environment Variable
-----------  ----  -----------  --------------------
bearer_auth  http  HTTP Bearer  ATOMASDK_BEARER_AUTH
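
To supply the token, export this environment variable in your shell before running the examples (the token value below is a placeholder):

export ATOMASDK_BEARER_AUTH="<your-api-token>"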

To authenticate with the API, the bearer_auth parameter must be set when initializing the SDK client instance. For example:

from atoma_sdk import AtomaSDK
import os

with AtomaSDK(
    bearer_auth=os.getenv("ATOMASDK_BEARER_AUTH", ""),
) as atoma_sdk:

    res = atoma_sdk.chat.create(
        messages=[
            {
                "content": "Hello! How can you help me today?",
                "role": "user",
            },
        ],
        model="meta-llama/Llama-3.3-70B-Instruct",
        frequency_penalty=0, max_tokens=2048, n=1, presence_penalty=0,
        seed=123, stop=["stop", "halt"], temperature=0.7, top_p=1,
        user="user-1234",
    )

    # Handle response
    print(res)

Available Resources and Operations

Available methods
  • create - Create confidential embeddings

Server-sent event streaming

Server-sent events are used to stream content from certain operations. These operations expose the stream as a Generator that can be consumed with a simple for loop. The loop terminates when the server has no more events to send and closes the underlying connection.

The stream is also a Context Manager: it can be used with the with statement, and the underlying connection will be closed when the context is exited.

from atoma_sdk import AtomaSDK
import os

with AtomaSDK(
    bearer_auth=os.getenv("ATOMASDK_BEARER_AUTH", ""),
) as atoma_sdk:

    res = atoma_sdk.chat.create_stream(
        messages=[
            {
                "content": "Hello! How can you help me today?",
                "role": "user",
                "name": "john_doe",
            },
        ],
        model="meta-llama/Llama-3.3-70B-Instruct",
        frequency_penalty=0, max_tokens=2048, n=1, presence_penalty=0,
        seed=123, stop=["stop", "halt"], temperature=0.7, top_p=1,
        user="user-1234",
    )

    with res as event_stream:
        for event in event_stream:
            # handle event
            print(event, flush=True)
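
If you want to assemble the streamed chunks into a single reply, accumulate the deltas as you iterate. The sketch below assumes an OpenAI-style chunk shape (event.data.choices[0].delta.content); verify the exact event type against the SDK's generated models before relying on it.

    parts = []
    with res as event_stream:
        for event in event_stream:
            # Each event carries an incremental content delta; skip empty chunks.
            delta = event.data.choices[0].delta.content  # assumed event shape
            if delta:
                parts.append(delta)

    print("".join(parts))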

Retries

Some of the endpoints in this SDK support retries. If you use the SDK without any configuration, it will fall back to the default retry strategy provided by the API. However, the default retry strategy can be overridden on a per-operation basis, or across the entire SDK.

To change the default retry strategy for a single API call, simply provide a RetryConfig object to the call:

from atoma_sdk import AtomaSDK
from atoma_sdk.utils import BackoffStrategy, RetryConfig
import os

with AtomaSDK(
    bearer_auth=os.getenv("ATOMASDK_BEARER_AUTH", ""),
) as atoma_sdk:

    res = atoma_sdk.chat.create(
        messages=[
            {
                "content": "Hello! How can you help me today?",
                "role": "user",
            },
        ],
        model="meta-llama/Llama-3.3-70B-Instruct",
        frequency_penalty=0, max_tokens=2048, n=1, presence_penalty=0,
        seed=123, stop=["stop", "halt"], temperature=0.7, top_p=1,
        user="user-1234",
        retries=RetryConfig("backoff", BackoffStrategy(1, 50, 1.1, 100), False),
    )

    # Handle response
    print(res)

If you'd like to override the default retry strategy for all operations that support retries, you can use the retry_config optional parameter when initializing the SDK:

from atoma_sdk import AtomaSDK
from atoma_sdk.utils import BackoffStrategy, RetryConfig
import os

with AtomaSDK(
    retry_config=RetryConfig("backoff", BackoffStrategy(1, 50, 1.1, 100), False),
    bearer_auth=os.getenv("ATOMASDK_BEARER_AUTH", ""),
) as atoma_sdk:

    res = atoma_sdk.chat.create(
        messages=[
            {
                "content": "Hello! How can you help me today?",
                "role": "user",
            },
        ],
        model="meta-llama/Llama-3.3-70B-Instruct",
        frequency_penalty=0, max_tokens=2048, n=1, presence_penalty=0,
        seed=123, stop=["stop", "halt"], temperature=0.7, top_p=1,
        user="user-1234",
    )

    # Handle response
    print(res)

Error Handling

Handling errors in this SDK should largely match your expectations. All operations return a response object or raise an exception.

By default, an API error will raise a models.APIError exception, which has the following properties:

Property       Type            Description
-------------  --------------  ---------------------
.status_code   int             The HTTP status code
.message       str             The error message
.raw_response  httpx.Response  The raw HTTP response
.body          str             The response content

When custom error responses are specified for an operation, the SDK may also raise their associated exceptions. You can refer to the respective Errors tables in the SDK docs for more details on the possible exception types for each operation. For example, the create_async method may raise the following exceptions:

Error Type       Status Code  Content Type
---------------  -----------  ------------
models.APIError  4XX, 5XX     */*

Example

from atoma_sdk import AtomaSDK, models
import os

with AtomaSDK(
    bearer_auth=os.getenv("ATOMASDK_BEARER_AUTH", ""),
) as atoma_sdk:
    res = None
    try:
        res = atoma_sdk.chat.create(
            messages=[
                {
                    "content": "Hello! How can you help me today?",
                    "role": "user",
                },
            ],
            model="meta-llama/Llama-3.3-70B-Instruct",
            frequency_penalty=0, max_tokens=2048, n=1, presence_penalty=0,
            seed=123, stop=["stop", "halt"], temperature=0.7, top_p=1,
            user="user-1234",
        )

        # Handle response
        print(res)

    except models.APIError as e:
        # Handle the exception
        raise e
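
Since models.APIError exposes the properties listed above, the handler can log or branch on them before re-raising; for example, the except clause could read:

    except models.APIError as e:
        # Inspect the documented error details (see the properties table above).
        print(e.status_code, e.message)
        raise e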

Server Selection

Override Server URL Per-Client

The default server can also be overridden globally by passing a URL to the server_url: str optional parameter when initializing the SDK client instance. For example:

from atoma_sdk import AtomaSDK
import os

with AtomaSDK(
    server_url="https://api.atoma.network",
    bearer_auth=os.getenv("ATOMASDK_BEARER_AUTH", ""),
) as atoma_sdk:

    res = atoma_sdk.chat.create(
        messages=[
            {
                "content": "Hello! How can you help me today?",
                "role": "user",
            },
        ],
        model="meta-llama/Llama-3.3-70B-Instruct",
        frequency_penalty=0, max_tokens=2048, n=1, presence_penalty=0,
        seed=123, stop=["stop", "halt"], temperature=0.7, top_p=1,
        user="user-1234",
    )

    # Handle response
    print(res)

Custom HTTP Client

The Python SDK makes API calls using the httpx HTTP library. To provide a convenient way to configure timeouts, cookies, proxies, custom headers, and other low-level settings, you can initialize the SDK client with your own HTTP client instance. Depending on whether you are using the sync or async version of the SDK, you can pass an instance of HttpClient or AsyncHttpClient respectively; these are Protocols that ensure the client has the methods needed to make API calls. This allows you to wrap the client with your own custom logic, such as adding custom headers, logging, or error handling, or you can simply pass an instance of httpx.Client or httpx.AsyncClient directly.

For example, you could specify a header for every request that this SDK makes as follows:

from atoma_sdk import AtomaSDK
import httpx

http_client = httpx.Client(headers={"x-custom-header": "someValue"})
s = AtomaSDK(client=http_client)

or you could wrap the client with your own custom logic:

from atoma_sdk import AtomaSDK
from atoma_sdk.httpclient import AsyncHttpClient
from typing import Any, Optional, Union
import httpx

class CustomClient(AsyncHttpClient):
    client: AsyncHttpClient

    def __init__(self, client: AsyncHttpClient):
        self.client = client

    async def send(
        self,
        request: httpx.Request,
        *,
        stream: bool = False,
        auth: Union[
            httpx._types.AuthTypes, httpx._client.UseClientDefault, None
        ] = httpx.USE_CLIENT_DEFAULT,
        follow_redirects: Union[
            bool, httpx._client.UseClientDefault
        ] = httpx.USE_CLIENT_DEFAULT,
    ) -> httpx.Response:
        request.headers["Client-Level-Header"] = "added by client"

        return await self.client.send(
            request, stream=stream, auth=auth, follow_redirects=follow_redirects
        )

    def build_request(
        self,
        method: str,
        url: httpx._types.URLTypes,
        *,
        content: Optional[httpx._types.RequestContent] = None,
        data: Optional[httpx._types.RequestData] = None,
        files: Optional[httpx._types.RequestFiles] = None,
        json: Optional[Any] = None,
        params: Optional[httpx._types.QueryParamTypes] = None,
        headers: Optional[httpx._types.HeaderTypes] = None,
        cookies: Optional[httpx._types.CookieTypes] = None,
        timeout: Union[
            httpx._types.TimeoutTypes, httpx._client.UseClientDefault
        ] = httpx.USE_CLIENT_DEFAULT,
        extensions: Optional[httpx._types.RequestExtensions] = None,
    ) -> httpx.Request:
        return self.client.build_request(
            method,
            url,
            content=content,
            data=data,
            files=files,
            json=json,
            params=params,
            headers=headers,
            cookies=cookies,
            timeout=timeout,
            extensions=extensions,
        )

s = AtomaSDK(async_client=CustomClient(httpx.AsyncClient()))
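
Low-level httpx settings such as timeouts can also be configured directly on the client before handing it to the SDK. A minimal sketch (the header name is illustrative, not required by the API):

import httpx
from atoma_sdk import AtomaSDK

# Set a 60s overall timeout with a 10s connect timeout, plus a default header.
http_client = httpx.Client(
    timeout=httpx.Timeout(60.0, connect=10.0),
    headers={"x-request-source": "my-app"},  # illustrative header name
)
s = AtomaSDK(client=http_client)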

Debugging

You can set up the SDK to emit debug logs for SDK requests and responses.

You can pass your own logger directly into the SDK:

from atoma_sdk import AtomaSDK
import logging

logging.basicConfig(level=logging.DEBUG)
s = AtomaSDK(debug_logger=logging.getLogger("atoma_sdk"))

You can also enable a default debug logger by setting the ATOMASDK_DEBUG environment variable to true.
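
For example:

export ATOMASDK_DEBUG=true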

Development

Maturity

This SDK is in beta, and there may be breaking changes between versions without a major version update. Therefore, we recommend pinning usage to a specific package version. This way, you can install the same version each time without breaking changes unless you are intentionally looking for the latest version.
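
For example, pip can install directly from a specific tag or commit (the ref below is a placeholder; substitute a real release):

pip install git+https://github.com/atoma-network/atoma-sdk-python.git@<tag-or-commit>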

Contributions

While we value open-source contributions to this SDK, this library is generated programmatically. Any manual changes added to internal files will be overwritten on the next generation. We look forward to hearing your feedback. Feel free to open a PR or an issue with a proof of concept and we'll do our best to include it in a future release.

SDK Created by Speakeasy