Utilize powerful developer resources, endpoints, and documentation.
Welcome to the Staticaliza API!
The APIs below are powered by Developer's GPU (DevGPU), a top-secret, proprietary technique that serves easily accessible AI GPUs for heavy computing.
The API runs on seriously large and powerful models that require significant GPU power (80+ tokens per second on our 106B LLMs, and fast, high-quality image-generation models that require over 48GB of VRAM). That is why we use DevGPU, our proprietary compute framework. So, please don't abuse it!
Regarding the AI controversy: I'd rather not be called out for "AI this, AI that"; this is for experimental showcase purposes, not to offend or replace anybody.
Below is the playground section, which will allow you to try out the text-generation API and image-generation API with no API key required.
Human Verification Required: Complete the verification below to access most API features.
This playground is coming soon.
Here is a list of API tasks; more information about each task can be found in the endpoints section below:
["text-generation", "image-generation", "image-generation-large"]
To call the API, make an HTTPS POST request. Here is an example using the requests library in Python:
import requests
url = "https://api.staticaliza.com/v1/(TASK_NAME)" # Replace (TASK_NAME) with the task name; this also supports adding ?authorization=(API_KEY) to pass the API key instead of using the headers
headers = {"Authorization": "Bearer (API_KEY)", "Content-Type": "application/json"} # Authorization is required either in the headers or in the URL; Content-Type is optional on most systems
json_body = {(JSON_BODY)} # The task-specific JSON body to send to the API
response = requests.post(url, headers=headers, json=json_body)
print(response.json()["data"]["output"])
Media outputs such as images, audio, or video are returned as plain base64 strings, which must be decoded into a file.
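For example, a base64 media output can be decoded and written to disk like this (a minimal sketch; the `["data"]["output"]` field name follows the response shape shown above):

```python
import base64

def save_base64_media(b64_string: str, path: str) -> None:
    """Decode a plain base64 string and write the raw bytes to a file."""
    with open(path, "wb") as f:
        f.write(base64.b64decode(b64_string))

# e.g. save_base64_media(response.json()["data"]["output"], "output.png")
```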
Get started with the API by creating an account or logging into your existing account in the API dashboard below.
Access your API key and manage your API usage.
Text Generation (106B) (Over 80 Tokens Per Second)
https://api.staticaliza.com/v1/text-generation
body = { # These configurations use vLLM's generator
"input": [
{"role": "system", "content": "You are an AI assistant."},
{"role": "user", "content": "Hello."},
{"role": "assistant", "content": "Hi, how can I help you?"},
{"role": "user", "content": "What does 69 mean?"}
], # This is the input text or chat object to predict from (supports both chat and completion inputs by providing either an OpenAI-styled list or a string input)
"input_prefix": "", # Optional prefix to prepend to input text (useful for chat formatting and can prepend the assistant's turn if using non-completion mode)
"max_new_tokens": 256, # The maximum number of tokens to generate (keep under 256 to prevent abuse)
"temperature": 1, # This number controls randomness (higher means more diverse, lower means more deterministic) (providing this parameter will enable sampling and disable greedy decoding) (0 to ∞)
"top_p": 1, # Controls nucleus sampling by filtering tokens whose cumulative probability exceeds this threshold (providing this parameter will enable sampling and disable greedy decoding) (0 to 1)
"top_k": 50, # This integer limits sampling to top-K most likely tokens (providing this parameter will enable sampling and disable greedy decoding) (0 to ∞, or -1 to disable it)
"min_p": 0, # This number controls the minimum probability cutoff, and filters out tokens below threshold (providing this parameter will enable sampling and disable greedy decoding) (0 to 1)
"presence_penalty": 0, # This number discourages repeating concepts already mentioned (-2 to 2)
"frequency_penalty": 0, # This number penalizes frequent token repetition, lowering their likelihood (-2 to 2)
"repetition_penalty": 1, # This number adjusts the probability of repeated tokens (discourages when over 1, encourages when under 1) (0 to ∞)
"logit_bias": {}, # This object maps token IDs to bias values that adjust their probability (use "use_tokenizer" to view input tokens) (-100 to 100)
"stop_sequences": [], # This list of strings will stop generation when any of these patterns are encountered (adds special tokens in non-completion mode)
"seed": 42, # This will create different variations of the generated text (do not provide it to use random seed)
"use_reasoning": False, # When enabled, will use reasoning tokens for a higher quality output, but can generate extra tokens
"use_tokenizer": False, # When enabled, disables generation and converts inputs into token IDs for the modelmodel
"use_json": False, # Whether to return text as JSON with auto-correction (requires instructing the model to generate JSON)
"json_prefix": None, # Whether to prepend a given string before the JSON check (no prepending before JSON check by default, if True, then use "input_prefix")
"stream": False # When enabled, enables streaming output (requires setting up event streamer)
}
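Putting the pieces together, a minimal text-generation call might look like this (a sketch, not a definitive client; the API key is a placeholder, only a few of the parameters above are set, and the rest fall back to their defaults):

```python
import requests

def generate_text(api_key: str, messages: list, max_new_tokens: int = 128) -> str:
    """Call the text-generation endpoint and return the generated string."""
    response = requests.post(
        "https://api.staticaliza.com/v1/text-generation",
        headers={"Authorization": f"Bearer {api_key}"},
        json={
            "input": messages,              # OpenAI-styled chat list (or a plain string)
            "max_new_tokens": max_new_tokens,
            "temperature": 0.7,             # providing this enables sampling
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["data"]["output"]

# e.g. generate_text("(API_KEY)", [{"role": "user", "content": "Hello."}])
```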
Image Generation (48GB VRAM) (~1-2 Steps Per Second)
https://api.staticaliza.com/v1/image-generation
body = {
"input": "city, lush, green, solar-punk, futuristic, realistic, cinematic", # This is the input string which gives the prompt to generate
"negative_input": "bad quality, worst quality, drawing, old, ugly", # Optional prompt to ignore certain features (omit to disable negative prompting)
"reference_image": "BASE64_IMAGE_STRING", # Enables image editing mode (supports single base64 image string or array of base64 image strings for more control, adding more images will slow down the generation)
"target_image": "BASE64_IMAGE_STRING", # When provided, uses this image instead of generating new one (can be used to determine the prompt or image safety or use the background removal feature)
"model": "Default", # Style to use for generation: "Default", "Realistic", "Anime", "Pixel" (omit to use "Default" with recommended parameters)
"resolution": [1024, 1024], # Canvas width and height for generation (omit to use optimal configuration)
"steps": 25, # Number of generation iterations (omit to use optimal configuration; image-generation-large must use 8)
"guidance": 7, # How much generator adheres to prompt (omit to use optimal configuration; image-generation-large must use 1)
"post_resolution": [1024, 1024], # Final output resolution (omit to use "resolution" parameter)
"crop": False, # Whether to crop image to match "post_resolution" from center (otherwise stretches image)
"remove_background": False, # Whether to attempt background removal (disables lossy compression and may add generation delay)
"safety_check": False, # Whether to check prompt and image for safety (may add generation delay)
"lossy": True, # Whether to return image as JPG base64 string (omit to return PNG base64 string)
"progressive": False, # Whether to return image as progressive JPG base64 string ("lossy" must be enabled)
"use_json": False, # Whether to return image as 1D array with [r, g, b, a] values from 0-255
"use_2d_json": False, # Whether to return image as 2D array with [r, g, b, a] values from 0-255
"seed": 42 # This will create different variations of the generated image (do not provide it to use random seed)
}
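As a usage sketch, here is one way to request an image and save it straight to disk (the API key is a placeholder; "lossy" is left at its default, so the response is a JPG base64 string):

```python
import base64
import requests

def generate_image(api_key: str, prompt: str, path: str = "output.jpg") -> str:
    """Call the image-generation endpoint and write the decoded image to `path`."""
    response = requests.post(
        "https://api.staticaliza.com/v1/image-generation",
        headers={"Authorization": f"Bearer {api_key}"},
        json={
            "input": prompt,
            "resolution": [1024, 1024],   # omit to use the optimal configuration
        },
        timeout=120,
    )
    response.raise_for_status()
    b64_image = response.json()["data"]["output"]  # plain base64 JPG string
    with open(path, "wb") as f:
        f.write(base64.b64decode(b64_image))
    return path

# e.g. generate_image("(API_KEY)", "city, lush, green, solar-punk, futuristic")
```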