A featherweight mock data generator API
An extremely simple Python API to generate mock data for your app using OpenAI's ChatGPT models.
- Mock data generation for specific fields and context
- Data caching and saving to file
Use Poetry to create a virtual environment and install the dependencies. Move into the /mockary/mockary folder and run:

```shell
poetry install
```
Mockary is very simple to use: it takes only around five minutes to get up and running, including the time spent reading this guide.
Add your OpenAI API key to `.env.example` and rename the file to `.env`.
Inside the /mockary/mockary directory, run:

```shell
poetry run uvicorn main:app
```
There are some examples in the /mockary/mockary/config.ini file. To add a new mock, use this base template:

```ini
[MockName]
fields=field1,field2,field3
message="Message to give the AI model context about the data"
cache=true
```
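For instance, a filled-in mock might look like this (the section name, fields, and message below are invented for illustration and are not part of the shipped config.ini):

```ini
[User]
fields=name,email,country
message="Realistic user profiles for a small social app"
cache=true
```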
```shell
curl -X 'GET' \
  'http://localhost:8000/[MockName]?samples=[number of samples]'
```
The result is a JSON object with the generated data:

```json
{
  "data": [
    {
      ... generated data ...
    }
  ]
}
```
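The same request can be made from Python. Below is a minimal sketch using only the standard library; the `User` mock name, the `name` field, and the helper names are hypothetical, and the canned response body stands in for a running server:

```python
import json
from urllib.parse import urlencode


def mock_url(base: str, mock_name: str, samples: int) -> str:
    """Build the GET URL for a configured mock (hypothetical helper)."""
    return f"{base}/{mock_name}?{urlencode({'samples': samples})}"


def extract_samples(raw_json: str) -> list:
    """Pull the generated records out of the {"data": [...]} envelope."""
    return json.loads(raw_json)["data"]


# Canned response body; with a live server you would fetch mock_url(...) instead.
body = '{"data": [{"name": "Ada"}, {"name": "Linus"}]}'
print(mock_url("http://localhost:8000", "User", 2))
print(extract_samples(body))
```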
Global Settings

| Option | Description |
|---|---|
| `cache` | Can be used in the global `CONFIG` or in an individual mock. If `true`, the generated data is cached in memory. |
| `max_tokens` | Maximum number of tokens to generate. Defaults to the maximum, 4096. |
| `model` | Specific model to use. Defaults to `gpt-3.5-turbo`. To use another model, pass its name as described in the OpenAI documentation. |
| `temperature` | Sampling temperature for the model. Defaults to 1. |
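Putting the global options together, a hypothetical `CONFIG` section could read as follows (the values are chosen arbitrarily for illustration):

```ini
[CONFIG]
cache=true
max_tokens=2048
model=gpt-3.5-turbo
temperature=0.7
```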
Mock Settings

| Option | Description |
|---|---|
| `save` | Save the generated data to a file. |
| `save_path` | Path where the generated data is saved. Defaults to `./mock_name`. |
| `fields` | Fields to generate data for. |
| `message` | Message to give the AI model context about the data. |
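Combining the per-mock options, a mock that persists its output might look like this sketch (the section name, fields, message, and path are made up for illustration):

```ini
[Invoice]
fields=invoice_id,customer,total
message="Invoices for a small accounting app"
save=true
save_path=./invoices
```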