How we generate images via the Gemini API, ported from the getcirclesorg repo.
## Setup

Install the SDK:

```sh
pnpm add -D @google/genai
```

Set the environment variable:

```sh
GOOGLE_GENAI_API_KEY=<your-key>
```
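Since a missing key only surfaces as an opaque auth error on the first API call, it can help to check for it at startup. A minimal sketch; the helper is ours, not part of the SDK:

```typescript
// Fail fast if the key is missing, instead of letting the first
// generateContent call fail with a vague authentication error.
// (Our helper, not part of @google/genai.)
export function requireApiKey(
  env: Record<string, string | undefined> = process.env,
): string {
  const key = env.GOOGLE_GENAI_API_KEY;
  if (!key) {
    throw new Error("GOOGLE_GENAI_API_KEY is not set");
  }
  return key;
}
```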
## Basic usage

```ts
import { GoogleGenAI } from "@google/genai";

const ai = new GoogleGenAI({ apiKey: process.env.GOOGLE_GENAI_API_KEY });

const response = await ai.models.generateContent({
  model: "gemini-3-pro-image-preview",
  contents: "your prompt here",
  config: {
    responseModalities: ["Image"],
    imageConfig: {
      aspectRatio: "16:9",
    },
  },
});

// Extract the image from the response
const part = response.candidates[0].content.parts.find((p) => p.inlineData);
const imageBuffer = Buffer.from(part.inlineData.data, "base64");
```
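The extraction above assumes the response always contains an image part; if the model returns nothing usable, it throws a `TypeError` deep in the chain. A defensive variant (the helper name and types are ours, but the response shape matches the snippet above):

```typescript
// Minimal response shape for image extraction, mirroring the snippet above.
type InlinePart = { inlineData?: { data: string; mimeType?: string } };
type GenerateResponse = { candidates?: { content?: { parts?: InlinePart[] } }[] };

// Find the first inline-image part and decode it; return null instead of
// crashing when the model sends no image back. (Our helper, not the SDK's.)
export function extractImage(response: GenerateResponse): Buffer | null {
  const parts = response.candidates?.[0]?.content?.parts ?? [];
  const part = parts.find((p) => p.inlineData?.data);
  return part?.inlineData ? Buffer.from(part.inlineData.data, "base64") : null;
}
```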
## Available models

| Model | Notes |
|---|---|
| `gemini-3-pro-image-preview` | Best quality; used in getcirclesorg |
| `gemini-3.1-flash-image-preview` | Faster; supports extra aspect ratios and a 512px size |
| `gemini-2.5-flash-image` | Alternative |
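If you switch between models depending on whether speed or quality matters, a small lookup keeps the IDs in one place. A sketch; the constant and helper are ours, and the model IDs are copied from the table above:

```typescript
// Central lookup for the model IDs in the table above. (Our helper.)
export const IMAGE_MODELS = {
  quality: "gemini-3-pro-image-preview", // best quality, used in getcirclesorg
  fast: "gemini-3.1-flash-image-preview", // faster; extra aspect ratios, 512px
  fallback: "gemini-2.5-flash-image", // alternative
} as const;

export function pickImageModel(
  prefer: keyof typeof IMAGE_MODELS = "quality",
): string {
  return IMAGE_MODELS[prefer];
}
```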
## Image config options

### aspectRatio

`"1:1"`, `"1:4"`, `"1:8"`, `"2:3"`, `"3:2"`, `"3:4"`, `"4:1"`, `"4:3"`, `"4:5"`, `"5:4"`, `"8:1"`, `"9:16"`, `"16:9"`, `"21:9"`

Aspect ratio is not part of the prompt; it's a config parameter.

### imageSize

`"512"` (Flash only), `"1K"` (default), `"2K"`, `"4K"`. The `K` must be uppercase.
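Since an invalid value only fails once the request reaches the API, a local pre-flight check can catch typos earlier. A sketch; the allowed values are copied from this doc, but the function is ours, not part of the SDK:

```typescript
// Allowed values, copied from the lists above.
const ASPECT_RATIOS = new Set([
  "1:1", "1:4", "1:8", "2:3", "3:2", "3:4", "4:1",
  "4:3", "4:5", "5:4", "8:1", "9:16", "16:9", "21:9",
]);
const IMAGE_SIZES = new Set(["512", "1K", "2K", "4K"]);

// Throw before making a network call if either option is invalid.
export function validateImageConfig(cfg: {
  aspectRatio?: string;
  imageSize?: string;
}): void {
  if (cfg.aspectRatio !== undefined && !ASPECT_RATIOS.has(cfg.aspectRatio)) {
    throw new Error(`Unsupported aspectRatio: ${cfg.aspectRatio}`);
  }
  if (cfg.imageSize !== undefined && !IMAGE_SIZES.has(cfg.imageSize)) {
    // Catches the common lowercase mistake, e.g. "2k" instead of "2K".
    throw new Error(`Unsupported imageSize: ${cfg.imageSize} (K must be uppercase)`);
  }
}
```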
## Response modalities

- `["Image"]`: image-only response
- `["TEXT", "IMAGE"]`: the model can return text alongside the image
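With `["TEXT", "IMAGE"]` the parts array can interleave text and images, so a single `find` is not enough. A sketch of walking every part (the types and helper are ours, mirroring the response shape used earlier):

```typescript
// Minimal part shape for a mixed text + image response.
type MixedPart = { text?: string; inlineData?: { data: string } };

// Join all text parts and decode every inline image. (Our helper.)
export function splitParts(
  parts: MixedPart[],
): { text: string; images: Buffer[] } {
  const text = parts
    .filter((p) => typeof p.text === "string")
    .map((p) => p.text)
    .join("\n");
  const images = parts
    .filter((p) => p.inlineData?.data)
    .map((p) => Buffer.from(p.inlineData!.data, "base64"));
  return { text, images };
}
```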
## Post-processing with sharp

The getcirclesorg scripts use sharp to resize and compress images after generation:

```ts
import sharp from "sharp";

await sharp(imageBuffer)
  .resize(600, 400, { fit: "cover" })
  .jpeg({ quality: 85 })
  .toFile("output.jpg");
```
## Our prompts

See image-prompts.md for the full set of prompts and design tokens for DeepSpace/TracePlot assets.