
PromptQL Natural Language API

The Natural Language Query API allows you to interact with PromptQL directly, sending messages and receiving responses with support for streaming. This API is particularly useful for building interactive applications that need to communicate with PromptQL in real time.

API Reference

Query Endpoint

Send messages to PromptQL and receive responses, optionally in a streaming format.

Request

POST https://api.promptql.pro.hasura.io/query

Headers

Content-Type: application/json

Request Body

{
  "version": "v1",
  "promptql_api_key": "<promptql api key created from project settings>",
  "llm": {
    "provider": "hasura"
  },
  "ddn": {
    "url": "<project sql endpoint url>",
    "headers": {}
  },
  "artifacts": [],
  "system_instructions": "Optional system instructions for the LLM",
  "timezone": "America/Los_Angeles",
  "interactions": [
    {
      "user_message": {
        "text": "Your message here"
      },
      "assistant_actions": [
        {
          "message": "Previous assistant message",
          "plan": "Previous execution plan",
          "code": "Previously executed code",
          "code_output": "Previous code output",
          "code_error": "Previous error message if any"
        }
      ]
    }
  ],
  "stream": false
}

Request Body Fields

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| version | string | Yes | Must be set to "v1" |
| promptql_api_key | string | Yes | PromptQL API key created from project settings |
| llm | object | No | Configuration for the main LLM provider |
| ai_primitives_llm | object | No | Configuration for the AI primitives LLM provider; if omitted, the main LLM provider is used |
| ddn | object | Yes | DDN configuration, including URL and headers |
| artifacts | array | No | List of existing artifacts to provide context |
| system_instructions | string | No | Custom system instructions for the LLM |
| timezone | string | Yes | IANA timezone for interpreting time-based queries |
| interactions | array | Yes | List of message interactions. Each interaction contains a user_message object with the user's text and an optional assistant_actions array of previous assistant responses, including their messages, plans, code executions, and outputs |
| stream | boolean | Yes | Whether to return a streaming response |
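Putting the fields above together, a minimal non-streaming request can be sketched as follows. This is an illustrative sketch using only the Python standard library; the placeholder API key, DDN URL, and example question are assumptions you would replace with your own values.

```python
import json
import urllib.request

QUERY_URL = "https://api.promptql.pro.hasura.io/query"

def build_query_request(api_key, ddn_url, message, timezone="America/Los_Angeles"):
    """Assemble a minimal non-streaming request body for the /query endpoint."""
    return {
        "version": "v1",
        "promptql_api_key": api_key,
        "llm": {"provider": "hasura"},
        "ddn": {"url": ddn_url, "headers": {}},
        "artifacts": [],
        "timezone": timezone,
        "interactions": [
            {"user_message": {"text": message}, "assistant_actions": []}
        ],
        "stream": False,
    }

body = build_query_request(
    "<promptql api key>", "<project sql endpoint url>", "What were last month's top products?"
)

# Uncomment to send the request (requires a valid API key and DDN URL):
# req = urllib.request.Request(
#     QUERY_URL,
#     data=json.dumps(body).encode(),
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     result = json.loads(resp.read())
#     print(result["assistant_actions"][0]["message"])
```

The same helper works for multi-turn conversations: append each prior turn (its user_message and the returned assistant_actions) to the interactions list before sending the next message.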

LLM Provider Options

The llm and ai_primitives_llm fields support the following providers:

  1. Hasura:
{
  "provider": "hasura"
}

Note: You get $10 worth of free credits with Hasura's built-in provider when you get started.

  2. Anthropic:
{
  "provider": "anthropic",
  "api_key": "<your anthropic api key>"
}
  3. OpenAI:
{
  "provider": "openai",
  "api_key": "<your openai api key>"
}

Response

The response format depends on the stream parameter:

Non-streaming Response (application/json)

{
  "assistant_actions": [
    {
      "message": "Response message",
      "plan": "Execution plan",
      "code": "Executed code",
      "code_output": "Code output",
      "code_error": "Error message if any"
    }
  ],
  "modified_artifacts": [
    {
      "identifier": "artifact_id",
      "title": "Artifact Title",
      "artifact_type": "text|table",
      "data": "artifact_data"
    }
  ]
}

Streaming Response (text/event-stream)

The streaming response sends chunks of data in Server-Sent Events (SSE) format:

data: { "message": "Chunk of response message", "plan": null, "code": null, "code_output": null, "code_error": null, "type": "assistant_action_chunk", "index": 0}

data: { "type": "artifact_update_chunk", "artifact": { "identifier": "artifact_id", "title": "Artifact Title", "artifact_type": "text|table", "data": "artifact_data" }}
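A streaming client reads the response line by line, decodes each `data:` event, and dispatches on its `type` field. The sketch below shows that parsing logic against hypothetical sample chunks (the chunk shapes follow the examples above; the sample text itself is made up), including handling of the error_chunk type described later.

```python
import json

def parse_sse_line(line):
    """Parse one SSE line; return the decoded JSON chunk, or None for non-data lines."""
    if not line.startswith("data:"):
        return None
    return json.loads(line[len("data:"):].strip())

# Hypothetical sample events, shaped like the chunks documented above.
sample_stream = [
    'data: {"message": "Hello, ", "plan": null, "code": null, "code_output": null, "code_error": null, "type": "assistant_action_chunk", "index": 0}',
    '',
    'data: {"message": "world.", "plan": null, "code": null, "code_output": null, "code_error": null, "type": "assistant_action_chunk", "index": 0}',
]

message = ""
for line in sample_stream:
    chunk = parse_sse_line(line)
    if chunk is None:
        continue  # skip blank separators and comment lines
    if chunk["type"] == "error_chunk":
        raise RuntimeError(chunk["error"])
    if chunk["type"] == "assistant_action_chunk":
        message += chunk["message"]

print(message)  # Hello, world.
```

In a real client you would iterate over the HTTP response body (e.g. line-by-line over a chunked response with `stream: true` in the request) instead of a list of sample strings.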

Response Fields

| Field | Type | Description |
|-------|------|-------------|
| assistant_actions | array | List of actions taken by the assistant |
| modified_artifacts | array | List of artifacts created or modified during the interaction |

Assistant Action Fields

| Field | Type | Description |
|-------|------|-------------|
| message | string | Text response from the assistant |
| plan | string | Execution plan, if any |
| code | string | Code that was executed |
| code_output | string | Output from code execution |
| code_error | string | Error message if code execution failed |

Error Response

{
  "type": "error_chunk",
  "error": "Error message describing what went wrong"
}

Notes for Using the Query API

  1. Streaming vs Non-streaming

    • Use streaming (stream: true) for real-time interactive applications
    • Use non-streaming (stream: false) for batch processing or when you need the complete response at once
  2. Artifacts

    • You can pass existing artifacts to provide context for the conversation
    • Both text and table artifacts are supported
    • Artifacts can be modified or created during the interaction
  3. Timezone Handling

    • Always provide a timezone to ensure consistent handling of time-based queries
    • Use standard IANA timezone formats (e.g., "America/Los_Angeles", "Europe/London")
  4. Error Handling

    • Always check for error responses, especially in streaming mode
    • Implement appropriate retry logic for transient failures
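Retry logic for transient failures can be as simple as wrapping the request call in exponential backoff. This is a minimal sketch, not part of the PromptQL API itself: `send` stands in for whatever function performs your HTTP request, and the backoff schedule (1s, 2s, 4s, ...) is an illustrative default.

```python
import time

def with_retries(send, max_attempts=3, base_delay=1.0):
    """Call `send` and retry on failure with exponential backoff.

    `send` is any zero-argument callable that performs the API request.
    The final failure is re-raised so callers can handle it.
    """
    for attempt in range(max_attempts):
        try:
            return send()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Demonstration with a fake request that fails once, then succeeds:
calls = {"n": 0}

def fake_send():
    calls["n"] += 1
    if calls["n"] < 2:
        raise ConnectionError("transient failure")
    return {"assistant_actions": []}

result = with_retries(fake_send, base_delay=0.0)
```

In production you would catch only errors worth retrying (timeouts, connection resets, 5xx responses) rather than every exception, and leave client errors such as an invalid API key to fail fast.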