(confidential_chat)
Atoma's API confidential chat completions v1 endpoint
- create - Create a confidential chat completion
- create_stream - Create a streaming confidential chat completion
This handler processes chat completion requests confidentially, applying additional encryption and security measures for sensitive data. It supports both streaming and non-streaming responses while maintaining data confidentiality through AEAD encryption and TEE (Trusted Execution Environment) hardware, enabling fully private AI compute.
Returns a Result containing either:
- An HTTP response with the chat completion result
- A streaming SSE connection for real-time completions
- An AtomaProxyError if request processing fails
Returns AtomaProxyError::InvalidBody if:
- The `stream` field is missing or invalid in the payload

Returns AtomaProxyError::InternalError if:
- The inference service request fails
- Response processing encounters errors
- State manager updates fail
- Utilizes AEAD encryption for request/response data
- Supports TEE (Trusted Execution Environment) processing
- Implements secure key exchange using X25519
- Maintains confidentiality throughout the request lifecycle
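The request fields documented in the tables below (ciphertext, client_dh_public_key, nonce, salt, plaintext_body_hash) correspond to this client-side encryption step. The following is a minimal sketch of one plausible way a client could derive a shared secret via X25519 and AEAD-encrypt the request body; the specific KDF (HKDF-SHA256), cipher (AES-256-GCM), and hash (SHA-256) are illustrative assumptions, not details confirmed by this documentation.

```python
# Illustrative sketch only: one plausible client-side flow for producing the
# encrypted request fields. The exact KDF, AEAD cipher, and hash that Atoma
# nodes expect are assumptions here, not confirmed by this documentation.
import base64
import hashlib
import os

from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey,
    X25519PublicKey,
)
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF


def encrypt_request(plaintext: bytes, node_dh_public_key_b64: str) -> dict:
    # X25519 key exchange: ephemeral client key against the node's public key.
    client_private_key = X25519PrivateKey.generate()
    node_public_key = X25519PublicKey.from_public_bytes(
        base64.b64decode(node_dh_public_key_b64)
    )
    shared_secret = client_private_key.exchange(node_public_key)

    # Derive a symmetric key from the shared secret (HKDF-SHA256 assumed).
    salt = os.urandom(16)
    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=salt, info=b"").derive(
        shared_secret
    )

    # AEAD-encrypt the plaintext body (AES-256-GCM chosen for illustration).
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)

    client_public_bytes = client_private_key.public_key().public_bytes(
        encoding=serialization.Encoding.Raw,
        format=serialization.PublicFormat.Raw,
    )

    # Base64-encode everything, matching the parameter descriptions below.
    return {
        "ciphertext": base64.b64encode(ciphertext).decode(),
        "client_dh_public_key": base64.b64encode(client_public_bytes).decode(),
        "nonce": base64.b64encode(nonce).decode(),
        "salt": base64.b64encode(salt).decode(),
        "plaintext_body_hash": base64.b64encode(
            hashlib.sha256(plaintext).digest()
        ).decode(),
    }
```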
```python
from atoma_sdk import AtomaSDK
import os

with AtomaSDK(
    bearer_auth=os.getenv("ATOMASDK_BEARER_AUTH", ""),
) as atoma_sdk:
    res = atoma_sdk.confidential_chat.create(
        ciphertext="<value>",
        client_dh_public_key="<value>",
        model_name="<value>",
        node_dh_public_key="<value>",
        nonce="<value>",
        plaintext_body_hash="<value>",
        salt="<value>",
        stack_small_id=486589,
    )

    # Handle response
    print(res)
```
Parameter | Type | Required | Description |
---|---|---|---|
ciphertext | str | ✔️ | The encrypted payload that needs to be processed (base64 encoded) |
client_dh_public_key | str | ✔️ | Client's public key for Diffie-Hellman key exchange (base64 encoded) |
model_name | str | ✔️ | Model name |
node_dh_public_key | str | ✔️ | Node's public key for Diffie-Hellman key exchange (base64 encoded) |
nonce | str | ✔️ | Cryptographic nonce used for encryption (base64 encoded) |
plaintext_body_hash | str | ✔️ | Hash of the original plaintext body for integrity verification (base64 encoded) |
salt | str | ✔️ | Salt value used in key derivation (base64 encoded) |
stack_small_id | int | ✔️ | Unique identifier for the small stack being used |
num_compute_units | OptionalNullable[int] | ➖ | Number of compute units to be used for the request; for image generations this value is known in advance (the number of pixels to generate) |
stream | OptionalNullable[bool] | ➖ | Indicates whether this is a streaming request |
retries | Optional[utils.RetryConfig] | ➖ | Configuration to override the default retry behavior of the client. |
Returns a models.ConfidentialComputeResponse on success.
Error Type | Status Code | Content Type |
---|---|---|
models.APIError | 4XX, 5XX | */* |
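Per the error table above, 4XX and 5XX responses surface as models.APIError. Below is a minimal error-handling sketch around a create call, assuming models is importable from the package root; the exact attributes exposed on the error object are not documented here.

```python
import os

from atoma_sdk import AtomaSDK, models  # assumes models is exported at the package root

with AtomaSDK(bearer_auth=os.getenv("ATOMASDK_BEARER_AUTH", "")) as atoma_sdk:
    try:
        res = atoma_sdk.confidential_chat.create(
            ciphertext="<value>",
            client_dh_public_key="<value>",
            model_name="<value>",
            node_dh_public_key="<value>",
            nonce="<value>",
            plaintext_body_hash="<value>",
            salt="<value>",
            stack_small_id=486589,
        )
        print(res)
    except models.APIError as err:
        # 4XX/5XX responses from the proxy (e.g. the InvalidBody and
        # InternalError conditions described above) are raised as models.APIError.
        print(f"Request failed: {err}")
```

The streaming variant, create_stream, is shown next; it returns an event stream that is consumed as a context manager.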
```python
from atoma_sdk import AtomaSDK
import os

with AtomaSDK(
    bearer_auth=os.getenv("ATOMASDK_BEARER_AUTH", ""),
) as atoma_sdk:
    res = atoma_sdk.confidential_chat.create_stream(
        ciphertext="<value>",
        client_dh_public_key="<value>",
        model_name="<value>",
        node_dh_public_key="<value>",
        nonce="<value>",
        plaintext_body_hash="<value>",
        salt="<value>",
        stack_small_id=180107,
    )

    with res as event_stream:
        for event in event_stream:
            # Handle each server-sent event as it arrives
            print(event, flush=True)
```
Parameter | Type | Required | Description |
---|---|---|---|
ciphertext | str | ✔️ | The encrypted payload that needs to be processed (base64 encoded) |
client_dh_public_key | str | ✔️ | Client's public key for Diffie-Hellman key exchange (base64 encoded) |
model_name | str | ✔️ | Model name |
node_dh_public_key | str | ✔️ | Node's public key for Diffie-Hellman key exchange (base64 encoded) |
nonce | str | ✔️ | Cryptographic nonce used for encryption (base64 encoded) |
plaintext_body_hash | str | ✔️ | Hash of the original plaintext body for integrity verification (base64 encoded) |
salt | str | ✔️ | Salt value used in key derivation (base64 encoded) |
stack_small_id | int | ✔️ | Unique identifier for the small stack being used |
num_compute_units | OptionalNullable[int] | ➖ | Number of compute units to be used for the request; for image generations this value is known in advance (the number of pixels to generate) |
stream | OptionalNullable[bool] | ➖ | Indicates whether this is a streaming request |
retries | Optional[utils.RetryConfig] | ➖ | Configuration to override the default retry behavior of the client. |
Error Type | Status Code | Content Type |
---|---|---|
models.APIError | 4XX, 5XX | */* |
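Both methods accept a retries argument of type Optional[utils.RetryConfig] to override the client's default retry behavior. The sketch below assumes the SDK exposes RetryConfig and BackoffStrategy under atoma_sdk.utils; the import path and constructor signatures are assumptions based on similar generated SDKs, not confirmed by this documentation.

```python
import os

from atoma_sdk import AtomaSDK
from atoma_sdk.utils import BackoffStrategy, RetryConfig  # assumed import path

with AtomaSDK(bearer_auth=os.getenv("ATOMASDK_BEARER_AUTH", "")) as atoma_sdk:
    res = atoma_sdk.confidential_chat.create(
        ciphertext="<value>",
        client_dh_public_key="<value>",
        model_name="<value>",
        node_dh_public_key="<value>",
        nonce="<value>",
        plaintext_body_hash="<value>",
        salt="<value>",
        stack_small_id=486589,
        # Assumed signature: backoff strategy with initial interval, max interval,
        # exponent, and max elapsed time (ms), plus whether to retry connection errors.
        retries=RetryConfig("backoff", BackoffStrategy(1, 50, 1.1, 100), False),
    )
    print(res)
```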