
How to take website screenshots with cURL (one-line examples)

Capture website screenshots straight from your terminal. One-line cURL examples for full-page captures, mobile viewports, dark mode, PDF export, thumbnails, and cookie banner removal — no SDKs, no libraries, just cURL.


Sometimes you don't want to spin up a Node.js project or write a Python script just to grab a single screenshot. You need a quick capture from the terminal — test an API parameter, check an edge case, take a before/after shot. Or maybe you're SSHed into a server and cURL is all you've got.

I use cURL all the time when debugging screenshotrun — testing parameters, poking at edge cases, grabbing quick screenshots. So I put together the examples that actually live in my shell history.

What this article covers

Every example here is a single cURL command. Copy it, swap in your API key, run it. I'll go through the most common scenarios: basic capture, full-page, mobile viewport, dark mode, PDF, thumbnails, CSS injection, hiding cookie banners, and rendering raw HTML. Each command can be saved as a shell alias or dropped into a bash script.

Get your API key

If you don't have a screenshotrun account yet, sign up here. The free plan gives you 300 screenshots per month, no credit card needed.

Once you're in, head to Dashboard → API Keys and create a new key:

The API Keys page in the screenshotrun dashboard — the Create New Key button and a masked key

It will look something like this:

sk_live_aBcDeFgHiJkLmNoPqRsTuVwXyZ0123456789ab

I usually export it as an environment variable so I don't have to paste it every time:

bash

export SCREENSHOTRUN_KEY="sk_live_your_key_here"

Now every example below uses $SCREENSHOTRUN_KEY instead of the raw token. Cleaner, and the raw key shows up in your shell history once at most instead of in every command.
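Since every call repeats the same two headers, I sometimes wrap them in a small shell function. A function beats an alias here because it also works inside scripts. The name srun is just my choice — call it whatever you like:

```bash
# Helper that pre-fills the auth and content-type headers.
# The name "srun" is arbitrary.
srun() {
  curl -s -H "Authorization: Bearer $SCREENSHOTRUN_KEY" \
       -H "Content-Type: application/json" "$@"
}
```

With that in your ~/.bashrc, a capture shrinks to srun -X POST https://screenshotrun.com/api/v1/screenshots -d '{"url": "https://example.com"}'. I'll keep writing the headers out in full below so every command stands alone.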

Set up a working directory

Before we start — let's create a dedicated folder for our experiments. All screenshots will land here, so you won't have to hunt for them across your disk later:

bash

mkdir -p ~/screenshots-curl && cd ~/screenshots-curl

One command — folder created and we're already inside. Every example below saves files to the current directory, so everything stays in one place.

Important: the API returns JSON, not a file

One thing that might trip you up before we get going. When you send a request to create a screenshot, the API doesn't return an image directly. You get back JSON with the screenshot ID and its status. The actual file has to be downloaded with a separate command using that ID.

So the workflow is always three steps: create the screenshot → check the status → download the image. Let me show you how that looks.

Basic screenshot

The simplest capture — just a URL with default settings (1280×800, PNG, desktop viewport):

bash

curl -X POST https://screenshotrun.com/api/v1/screenshots \
  -H "Authorization: Bearer $SCREENSHOTRUN_KEY" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://github.com"}'

You'll get back JSON with a pending status — the screenshot is queued. The API works asynchronously, and processing takes a few seconds.
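The response body looks roughly like {"id":"scr_abc123","status":"pending"} (the exact id format here is made up). Rather than copying the id by eye, you can extract it in the shell with the same grep/cut trick the script at the end of this post uses:

```bash
# Pull the "id" field out of a compact JSON response with plain grep/cut.
# Sample response shown — the real id format may differ.
RESPONSE='{"id":"scr_abc123","status":"pending"}'
ID=$(echo "$RESPONSE" | grep -o '"id":"[^"]*"' | head -1 | cut -d'"' -f4)
echo "$ID"   # scr_abc123
```

If you have jq installed, piping into jq -r '.id' is cleaner, but grep/cut works on any box.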

Now grab the id from the response and check whether it's ready:

bash

curl -H "Authorization: Bearer $SCREENSHOTRUN_KEY" \
  https://screenshotrun.com/api/v1/screenshots/SCREENSHOT_ID

Once status changes to completed, download the image with a separate command — notice the /image at the end of the URL:

bash

curl -H "Authorization: Bearer $SCREENSHOTRUN_KEY" \
  https://screenshotrun.com/api/v1/screenshots/SCREENSHOT_ID/image \
  -o screenshot.png

Only this third command saves a file to disk. The first two just deal with JSON data.

Terminal showing three commands: creating a screenshot, checking status, and downloading screenshot.png

Three steps: create, check, download. If you want it shorter — I'll show a bash script at the end that chains all three into a single run.

Full-page screenshot

Most websites extend well beyond the viewport. Add full_page to capture the entire scrollable page:

bash

curl -X POST https://screenshotrun.com/api/v1/screenshots \
  -H "Authorization: Bearer $SCREENSHOTRUN_KEY" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://github.com/trending", "full_page": true}'

The API stitches the page into one tall image. I use this a lot when I need to archive a landing page or check how a long article looks end to end. If the page has lazy-loaded content, throw in a delay of 2-3 seconds so everything loads before the capture fires.
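Putting those two together — a sketch of a full-page capture with the delay parameter (value in seconds, per the reference table further down):

```bash
# Full-page capture that waits 3 seconds after load
# so lazy-loaded content settles before the shot fires
curl -X POST https://screenshotrun.com/api/v1/screenshots \
  -H "Authorization: Bearer $SCREENSHOTRUN_KEY" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://github.com/trending", "full_page": true, "delay": 3}'
```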

I covered the nuances of full-page captures in more detail in my post about taking screenshots with Node.js.

Mobile screenshot

Switch the viewport to phone size with the device parameter:

bash

curl -X POST https://screenshotrun.com/api/v1/screenshots \
  -H "Authorization: Bearer $SCREENSHOTRUN_KEY" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://stripe.com", "device": "mobile"}'

Three options: desktop (default), tablet, and mobile. Each one sets the right viewport size and user-agent, so the site actually renders its responsive layout — not just a squished desktop version.

You can also set a custom viewport if the presets don't fit:

bash

curl -X POST https://screenshotrun.com/api/v1/screenshots \
  -H "Authorization: Bearer $SCREENSHOTRUN_KEY" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://stripe.com", "width": 375, "height": 812}'

That's an iPhone X viewport. Comes in handy when your designer asks "how does it look on this exact device?"

Mobile screenshot of stripe.com

Dark mode

Some sites support prefers-color-scheme: dark. You can trigger it with dark_mode:

bash

curl -X POST https://screenshotrun.com/api/v1/screenshots \
  -H "Authorization: Bearer $SCREENSHOTRUN_KEY" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://github.com", "dark_mode": true}'

Not every site will respond to this — the site needs to support dark mode via CSS media queries. But for those that do, you get the real dark theme with no browser extensions or workarounds.

I wrote about generating OG images with the screenshot API — dark OG images tend to stand out more in social feeds.

WebP and JPEG (smaller files, faster delivery)

PNG is the default, but if you don't need lossless quality, switch to WebP or JPEG for much smaller files:

bash

curl -X POST https://screenshotrun.com/api/v1/screenshots \
  -H "Authorization: Bearer $SCREENSHOTRUN_KEY" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com", "format": "webp", "quality": 70}'

The quality parameter (1-100) controls compression. I usually go with 70-80 for WebP — file size drops by 60-70% compared to PNG with barely any visible difference. JPEG works the same way:

bash

curl -X POST https://screenshotrun.com/api/v1/screenshots \
  -H "Authorization: Bearer $SCREENSHOTRUN_KEY" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com", "format": "jpeg", "quality": 75}'

If you're building website preview thumbnails for a link directory, WebP is the obvious pick. Smaller payloads mean faster page loads for your users.

Thumbnail

Capture at full resolution and resize down to a target width. The API handles resizing server-side, so you're not downloading a huge image just to shrink it yourself:

bash

curl -X POST https://screenshotrun.com/api/v1/screenshots \
  -H "Authorization: Bearer $SCREENSHOTRUN_KEY" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com", "format": "webp", "resize_width": 320}'

Aspect ratio is preserved automatically. You can set resize_height too, or both — the image fits within those dimensions without stretching.
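For instance, a sketch that fits the thumbnail inside a 320×240 box using both parameters together:

```bash
# Fit the output inside 320x240; aspect ratio is preserved, no stretching
curl -X POST https://screenshotrun.com/api/v1/screenshots \
  -H "Authorization: Bearer $SCREENSHOTRUN_KEY" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com", "format": "webp", "resize_width": 320, "resize_height": 240}'
```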

That's the same approach I described in my article on adding preview thumbnails to link directories, just stripped down to a single cURL command.

Save as PDF

Swap the format to pdf and you get a paginated document instead of an image:

bash

curl -X POST https://screenshotrun.com/api/v1/screenshots \
  -H "Authorization: Bearer $SCREENSHOTRUN_KEY" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com/report", "format": "pdf", "pdf_landscape": true, "pdf_page_format": "A4"}'

You can control margins too:

bash

curl -X POST https://screenshotrun.com/api/v1/screenshots \
  -H "Authorization: Bearer $SCREENSHOTRUN_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://example.com/invoice",
    "format": "pdf",
    "pdf_page_format": "Letter",
    "pdf_margin_top": "20mm",
    "pdf_margin_bottom": "20mm",
    "pdf_margin_left": "15mm",
    "pdf_margin_right": "15mm"
  }'

PDF captures work well for archiving reports, generating invoices from HTML templates, or saving articles for offline reading. Page breaks follow Chrome's print layout engine, so what you see in Chrome's print preview is what you get.

Hide cookie banners and popups

Cookie consent banners ruin screenshots. There are two ways to deal with them.

First, the simple flag — block_cookies is enabled by default, so the API already tries to block common consent dialogs:

bash

curl -X POST https://screenshotrun.com/api/v1/screenshots \
  -H "Authorization: Bearer $SCREENSHOTRUN_KEY" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://bbc.com", "block_cookies": true}'

But some sites have custom banners that the auto-blocker doesn't catch. For those, use hide_selectors to target specific elements by CSS selector:

bash

curl -X POST https://screenshotrun.com/api/v1/screenshots \
  -H "Authorization: Bearer $SCREENSHOTRUN_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://example.com",
    "hide_selectors": [".cookie-banner", "#newsletter-popup", ".ads-container"]
  }'

Or if the banner has an "Accept" button, click it before the capture:

bash

curl -X POST https://screenshotrun.com/api/v1/screenshots \
  -H "Authorization: Bearer $SCREENSHOTRUN_KEY" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com", "click_selector": "#accept-cookies"}'

I spent a lot of time getting cookie blocking right in screenshotrun. European sites are the worst — many have multi-step consent flows hidden behind iframes. The auto-blocker handles most of them, but hide_selectors is your escape hatch for the stubborn ones.

CSS injection

Need to tweak the page before capturing? Inject CSS directly:

bash

curl -X POST https://screenshotrun.com/api/v1/screenshots \
  -H "Authorization: Bearer $SCREENSHOTRUN_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://example.com",
    "css": "header { display: none !important; } .sidebar { display: none !important; } body { background: #ffffff; }"
  }'

I mostly use this for documentation screenshots — strip out the nav, sidebars, footers, anything that distracts from the main content. The CSS gets injected right before the capture.

Render raw HTML (no URL needed)

You don't always need a live URL. Pass raw HTML and the API renders it directly:

bash

curl -X POST https://screenshotrun.com/api/v1/screenshots \
  -H "Authorization: Bearer $SCREENSHOTRUN_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "html": "<html><body style=\"padding:40px;font-family:sans-serif;background:#1a1a2e;color:#eee\"><h1>Monthly Report</h1><p>Generated on April 1, 2026</p></body></html>",
    "width": 800,
    "height": 400
  }'

That's how I build dynamic OG images — create an HTML template, inject data, render it to an image. I covered the details in my post about generating OG images with a screenshot API. The html parameter accepts up to 500,000 characters, so you can pass fairly complex templates.

Retina (2x resolution)

For a high-DPI screenshot, throw in the retina flag:

bash

curl -X POST https://screenshotrun.com/api/v1/screenshots \
  -H "Authorization: Bearer $SCREENSHOTRUN_KEY" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com", "retina": true, "format": "webp"}'

The output image will be 2x the viewport dimensions — a 1280×800 viewport produces a 2560×1600 image. Text and vector graphics come out noticeably sharper. I'd pair it with WebP though, because retina PNGs get large fast.

Caching to avoid paying for duplicates

If you're capturing the same URL repeatedly and the content doesn't change much, set a cache TTL:

bash

curl -X POST https://screenshotrun.com/api/v1/screenshots \
  -H "Authorization: Bearer $SCREENSHOTRUN_KEY" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com", "cache_ttl": 3600}'

The cache_ttl value is in seconds. This tells the API: "if you've already captured this exact URL with these exact options in the last hour, return the cached result." No new render, no credit spent. I wrote a whole article on screenshot caching strategies if you want to go deeper.

Quick capture via GET

Every example above uses POST with a JSON body. But there's also a GET endpoint that takes parameters as query strings:

bash

curl -H "Authorization: Bearer $SCREENSHOTRUN_KEY" \
  "https://screenshotrun.com/api/v1/screenshots/capture?url=https://example.com&format=webp&full_page=true"

Same result, shorter syntax. The GET endpoint is handy for testing in the browser address bar or for simple integrations where building a JSON body feels like overkill.

Bonus: all-in-one bash script

Here's a small script that creates a screenshot, waits for it to finish, and downloads the image — all in one go:

bash

#!/bin/bash
URL="${1:-https://example.com}"
KEY="$SCREENSHOTRUN_KEY"
API="https://screenshotrun.com/api/v1"

# Create the screenshot and extract the ID
ID=$(curl -s -X POST "$API/screenshots" \
  -H "Authorization: Bearer $KEY" \
  -H "Content-Type: application/json" \
  -d "{\"url\": \"$URL\", \"format\": \"webp\", \"full_page\": true}" \
  | grep -o '"id":"[^"]*"' | head -1 | cut -d'"' -f4)

echo "Screenshot ID: $ID"

# Poll until ready (give up after 30 tries, about a minute)
TRIES=0
while [ "$TRIES" -lt 30 ]; do
  STATUS=$(curl -s -H "Authorization: Bearer $KEY" "$API/screenshots/$ID" \
    | grep -o '"status":"[^"]*"' | head -1 | cut -d'"' -f4)
  echo "Status: $STATUS"
  [ "$STATUS" = "completed" ] && break
  [ "$STATUS" = "failed" ] && echo "Failed!" && exit 1
  TRIES=$((TRIES + 1))
  sleep 2
done
[ "$STATUS" = "completed" ] || { echo "Timed out waiting for the screenshot"; exit 1; }

# Download
curl -s -H "Authorization: Bearer $KEY" \
  "$API/screenshots/$ID/image" -o "screenshot.webp"

echo "Saved: screenshot.webp"

Save it as screenshot.sh, make it executable (chmod +x screenshot.sh), and run:

bash

./screenshot.sh https://github.com

Not the prettiest script, but it gets the job done. For anything more complex I'd reach for Python or Node.js.

Quick reference table

| Parameter | What it does | Example value |
| --- | --- | --- |
| url | URL to capture | "https://example.com" |
| html | Render raw HTML (instead of URL) | "<html>...</html>" |
| full_page | Capture the full scrollable page | true |
| device | Viewport preset | "mobile", "tablet" |
| width / height | Custom viewport size | 375 / 812 |
| format | Output format | "png", "webp", "jpeg", "pdf" |
| quality | Compression (JPEG/WebP) | 70 |
| dark_mode | Trigger prefers-color-scheme: dark | true |
| retina | 2x resolution | true |
| resize_width | Resize the output image | 320 |
| delay | Wait before capture (seconds) | 3 |
| cache_ttl | Return cached result if available | 3600 |
| block_cookies | Auto-block cookie banners | true (default) |
| hide_selectors | Hide elements by CSS selector | [".banner", "#popup"] |
| click_selector | Click an element before capture | "#accept-cookies" |
| css | Inject custom CSS | "header { display: none; }" |
| pdf_landscape | Landscape PDF | true |
| pdf_page_format | PDF page size | "A4", "Letter" |

Full API docs here if you want the complete list.

That's it

cURL is often the fastest way to test a screenshot API — no dependencies, no boilerplate, just a command and an image. I keep most of these as shell aliases and reach for them daily.

If you want to try these examples, grab a free API key — 300 screenshots a month, no credit card. And if you're building something more involved, check out the PHP, Python, and Node.js guides for more structured examples.

Thanks for reading — hope this saves you some time.
