Delivr.ai Audience API
Build and export custom audience segments from your visitor data. Use the Audiences API to extract high-intent visitors for activation in ad platforms, CRMs, or data warehouses.
What You Can Do
| Use Case | Description |
|---|---|
| Build Segments | Filter visitors by intent topics, scores, and time ranges |
| Count Before Export | Preview segment sizes before committing to export |
| Export to Cloud Storage | Output data to S3 or GCS for activation |
When to Use This API
Use the Audiences API when you need to:
- Export visitor segments to ad platforms (e.g., for retargeting)
- Build custom audiences based on intent signals
- Extract data for analysis in your data warehouse
- Create segments filtered by topic, score, or date
Getting Started
Before using the Audiences API, you need authentication credentials.
Step 1. Register
Create an account at app-v2.delivr.ai/sign-up.
```shell
curl --request POST \
--url https://apiv2.delivr.ai/auth/v1/register \
--header 'accept: application/json' \
--header 'content-type: application/json' \
--data '{
"email": "[email protected]",
"password": "your-password",
"first_name": "Your",
"last_name": "Name",
"organization_name": "Your Company"
}'
```

Step 2. Login
Get your JWT token:
```shell
curl --request POST \
--url https://apiv2.delivr.ai/auth/v1/login \
--header 'accept: application/json' \
--header 'content-type: application/json' \
--data '{
"email": "[email protected]",
"password": "your-password"
}'
```

Response:
```json
{
"user_id": "221708ad-7dad-49e0-...",
"organization_id": "66fede73-6811-...",
"token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9..."
}
```

Step 3. Create API Client
Create an API key for the Audiences API:
```shell
curl --request POST \
--url https://apiv2.delivr.ai/client/v1 \
--header 'accept: application/json' \
--header 'authorization: Bearer YOUR_JWT_TOKEN' \
--header 'content-type: application/json' \
--data '{
"name": "Audiences API Client"
}'
```

Response:
```json
{
"client_id": "69ee5598-b550-...",
"api_key": "dlvr_f0997864bf3db3bc99d68864e72...",
"api_secret": "b0b1b44a54c8fe77bda7634e9e86da00...",
"created_at": "2025-11-06 14:49:50"
}
```

Save your api_key and api_secret securely. These cannot be retrieved again.
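The request examples in the rest of this guide authenticate with the api_key via the X-API-Key header; how the api_secret is used is not covered here. A minimal sketch of loading the credentials from environment variables and attaching the key to a reusable session (the environment variable names are illustrative):

```python
import os

import requests

# Illustrative environment variable names; store the values however your
# secret-management setup dictates.
API_KEY = os.environ["DELIVR_API_KEY"]
API_SECRET = os.environ.get("DELIVR_API_SECRET")  # kept alongside the key; not sent below

# Reusable session that sends the X-API-Key header on every request,
# matching the curl examples later in this guide.
session = requests.Session()
session.headers.update({"X-API-Key": API_KEY})

# Example: list existing tasks.
resp = session.get("https://apiv2.delivr.ai/api/v1/status")
resp.raise_for_status()
print(resp.json())
```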
Key Concepts
Task
A Task is a high-level unit of work that represents your audience export request.
| Field | Description |
|---|---|
| task_id | Unique identifier for tracking |
| status | Current state: in_progress, counted, unloading, completed |
| total_jobs / completed_jobs | Progress tracking |
| total_count | Total rows in your segment |
| counts | Per-partition row counts |
| outputs / output_links | Links to exported data files |
Job
A Job is the internal unit of work. Jobs are created automatically when you create a Task. You typically don't interact with Jobs directly - you see aggregated progress in the Task.
Job types:
- Count: calculates segment size
- Unload: exports data to storage
API Endpoints
| Endpoint | Method | Purpose |
|---|---|---|
| /api/v1/intent_hive_tasks | POST | Create task for intent data |
| /api/v1/generic_parquet_tasks | POST | Create task for Parquet files |
| /api/v1/status | GET | List all tasks |
| /api/v1/status/{task_id} | GET | Get task details |
| /api/v1/tasks/{task_id}/unload | POST | Trigger export for counted task |
Workflow Options
Option A: Count and Export Automatically
Use when you want to immediately export all matching data.
```
POST /api/v1/intent_hive_tasks
{
"days": ["2024-05-01", "2024-05-02"],
"scores": ["high", "medium"],
"topic_id_prefixes": ["4eyes_103"],
"perform_unload": true
}
```

Flow:
- Task created with status `in_progress`
- Count jobs run to calculate segment size
- Unload jobs automatically export data
- Task reaches `completed` with `output_links`
Option B: Count First, Then Decide
Use when you want to preview segment size before exporting.
```
POST /api/v1/intent_hive_tasks
{
"days": ["2024-05-01"],
"scores": ["high"],
"topic_id_prefixes": ["4eyes_103"],
"perform_unload": false
}
```

Flow:
- Task created with status `in_progress`
- Count jobs complete, status becomes `counted`
- Check `total_count` to confirm the segment size is acceptable
- If yes, trigger export: `POST /api/v1/tasks/{task_id}/unload`
- Task reaches `completed` with `output_links`
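A minimal Python sketch of this count-first flow, assuming the same X-API-Key authentication and field names shown in the examples below (the polling interval, filter values, and row-count threshold are illustrative):

```python
import time

import requests

API_KEY = "YOUR_API_KEY"
BASE = "https://apiv2.delivr.ai/api/v1"
headers = {"X-API-Key": API_KEY}

# 1. Create a count-only task (perform_unload: false).
task = requests.post(
    f"{BASE}/intent_hive_tasks",
    headers=headers,
    json={
        "days": ["2024-05-01"],
        "scores": ["high"],
        "topic_id_prefixes": ["4eyes_103"],
        "perform_unload": False,
    },
).json()
task_id = task["task_id"]

# 2. Wait for counting to finish (status becomes "counted").
while True:
    status = requests.get(f"{BASE}/status/{task_id}", headers=headers).json()
    if status["status"] != "in_progress":
        break
    time.sleep(30)

# 3. Decide whether the segment is worth exporting (threshold is illustrative).
if status["status"] == "counted" and status["total_count"] >= 1000:
    requests.post(f"{BASE}/tasks/{task_id}/unload", headers=headers)
```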
Creating an Intent Hive Task
Export visitor segments filtered by intent topics, scores, and dates.
Endpoint: POST /api/v1/intent_hive_tasks
Required Parameters
| Parameter | Type | Description |
|---|---|---|
| days | array | Dates to include: ["2024-05-01", "2024-05-02"] |
| topic_id_prefixes | array | Intent topics: ["4eyes_103", "4eyes_107"] |
| scores | array | Score buckets: ["high", "medium"] |
Optional Parameters
| Parameter | Type | Description |
|---|---|---|
| input_path | string | Base path to data (default: Delivr storage) |
| limit | integer | Max rows per job (skips counting if set) |
| perform_unload | boolean | Auto-export after counting (default: true) |
| select | string | Columns to include: "person_id, score" |
| where_expr | string | SQL filter: "score >= 'medium'" |
Example Request
```shell
curl --request POST \
--url https://apiv2.delivr.ai/api/v1/intent_hive_tasks \
--header 'X-API-Key: YOUR_API_KEY' \
--header 'Content-Type: application/json' \
--data '{
"days": ["2024-05-01"],
"scores": ["high"],
"topic_id_prefixes": ["4eyes_103"]
}'
```

Response:
```json
{
"task_id": "abc123...",
"status": "in_progress",
"total_jobs": 3,
"completed_jobs": 0
}
```
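The optional parameters compose with the required filters. A sketch of a request that narrows the exported columns, adds a SQL filter, and defers the export so counts can be reviewed first (the column names and filter values are illustrative):

```python
import requests

# Illustrative request combining the optional select, where_expr, and
# perform_unload parameters with the required filters.
resp = requests.post(
    "https://apiv2.delivr.ai/api/v1/intent_hive_tasks",
    headers={"X-API-Key": "YOUR_API_KEY"},
    json={
        "days": ["2024-05-01", "2024-05-02"],
        "scores": ["high", "medium"],
        "topic_id_prefixes": ["4eyes_103"],
        "select": "person_id, score",       # columns to include
        "where_expr": "score >= 'medium'",  # SQL filter
        "perform_unload": False,            # count first, export later
    },
)
print(resp.json())
```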
Creating a Generic Parquet Task
Export from arbitrary Parquet file trees (S3, GCS, local storage).
Endpoint: POST /api/v1/generic_parquet_tasks
Required Parameters
| Parameter | Type | Description |
|---|---|---|
| input_path | string | Root path: s3://bucket/dataset |
Optional Parameters
| Parameter | Type | Description |
|---|---|---|
| limit | integer | Max rows per job (skips counting if set) |
| perform_unload | boolean | Auto-export after counting (default: true) |
| select | string | Columns to include: "company_id, job_title" |
| where_expr | string | SQL filter: "score = 'high'" |
Example Request
```shell
curl --request POST \
--url https://apiv2.delivr.ai/api/v1/generic_parquet_tasks \
--header 'X-API-Key: YOUR_API_KEY' \
--header 'Content-Type: application/json' \
--data '{
"input_path": "s3://your-bucket/your-dataset"
}'
```
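The optional parameters work the same way here. A sketch that exports only two columns and caps each job's row count (the bucket path, column names, and limit are placeholders):

```python
import requests

# Illustrative: export two columns and cap each job at 100,000 rows.
resp = requests.post(
    "https://apiv2.delivr.ai/api/v1/generic_parquet_tasks",
    headers={"X-API-Key": "YOUR_API_KEY"},
    json={
        "input_path": "s3://your-bucket/your-dataset",
        "select": "company_id, job_title",
        "limit": 100000,
    },
)
print(resp.json())
```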
Checking Task Status
List All Tasks
```shell
curl --request GET \
--url https://apiv2.delivr.ai/api/v1/status \
--header 'X-API-Key: YOUR_API_KEY'
```

Get Task Details
```shell
curl --request GET \
--url https://apiv2.delivr.ai/api/v1/status/YOUR_TASK_ID \
--header 'X-API-Key: YOUR_API_KEY'
```

Response:
```json
{
"task_id": "abc123...",
"status": "completed",
"total_jobs": 3,
"completed_jobs": 3,
"total_count": 15420,
"counts": {
"day=2024-05-01/prefix=4eyes_103/score=high": 15420
},
"outputs": {
"day=2024-05-01/prefix=4eyes_103/score=high": "gs://output-bucket/..."
},
"created_at": "2024-05-01T10:00:00Z",
"completed_at": "2024-05-01T10:05:00Z"
}
```

Triggering Export Manually
If you created a task with perform_unload: false, trigger export after reviewing counts:
```shell
curl --request POST \
--url https://apiv2.delivr.ai/api/v1/tasks/YOUR_TASK_ID/unload \
--header 'X-API-Key: YOUR_API_KEY'
```

This only works when the task status is "counted". If the task is still "in_progress", wait for counting to complete.
Integration Patterns
Polling Pattern (Recommended)
```python
import time
import requests

API_KEY = "YOUR_API_KEY"  # api_key from the API client you created above

# 1. Create task
response = requests.post(
    "https://apiv2.delivr.ai/api/v1/intent_hive_tasks",
    headers={"X-API-Key": API_KEY},
    json={"days": ["2024-05-01"], "scores": ["high"], "topic_id_prefixes": ["4eyes_103"]}
)
task_id = response.json()["task_id"]

# 2. Poll for completion
while True:
    status = requests.get(
        f"https://apiv2.delivr.ai/api/v1/status/{task_id}",
        headers={"X-API-Key": API_KEY}
    ).json()

    if status["status"] == "completed":
        print(f"Done! {status['total_count']} rows exported")
        print(f"Output: {status['output_links']}")
        break
    elif status["status"] in ["in_progress", "unloading"]:
        time.sleep(30)  # Check every 30 seconds
    else:
        print(f"Unexpected status: {status['status']}")
        break
```

Error Reference
| Error | Cause | Solution |
|---|---|---|
| 404 Task not found | Invalid task_id or task expired | Check task_id, create new task |
| 409 Cannot unload | Task not in "counted" status | Wait for counting to complete |
| 500 Internal error | Redis/backend issue | Retry after a few seconds |
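For transient 500 errors, a simple retry with a short delay is usually enough. A minimal sketch (the retry count and delay are illustrative):

```python
import time

import requests

def get_status_with_retry(task_id, api_key, attempts=3, delay=5):
    """Fetch task status, retrying a few times on transient 5xx errors."""
    url = f"https://apiv2.delivr.ai/api/v1/status/{task_id}"
    for _ in range(attempts):
        resp = requests.get(url, headers={"X-API-Key": api_key})
        if resp.status_code < 500:
            resp.raise_for_status()  # surface 4xx errors immediately
            return resp.json()
        time.sleep(delay)  # backend issue; wait and retry
    resp.raise_for_status()  # still failing after all attempts
```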
Next Steps
- Query Events API - Get raw event data
- Resolution as a Service - Set up visitor tracking