
How to add website preview thumbnails to your link directory with a screenshot API

Learn how to automatically generate website preview thumbnails for your link directory using a screenshot API. Step-by-step PHP and Node.js code with caching, real output screenshots, and tips for handling cookie banners and large directories.


If you run a link directory, a resource list, or any kind of bookmark-style app, you've probably run into this problem: you open the page and it's just a wall of text links that nobody cares about. People scroll past URLs they can't evaluate at a glance, there's nothing for their eyes to grab onto, so they don't click because they can't see what's behind the link.

The obvious fix is to show a thumbnail of each website next to its URL. Product Hunt does this, and Notion bookmarks work the same way. Link previews in Slack and Discord follow the same idea: there's a thumbnail, and the information is easier to take in. The only problem is that when you try to do this yourself, it turns out to be harder than it looks. Below I'll explain why and show the simplest working approach.

In this tutorial I'll show how to do it with a screenshot API: one HTTP request, one ready-made thumbnail, no need to set up anything on your server. Step-by-step code for PHP and Node.js, with real output at each step.

Why link directories need visual previews

People process images much faster than text. (The widely quoted "60,000 times faster" figure is a marketing claim with no published study behind it, so I won't lean on the exact number, but the underlying point is well established in visual perception research.) When someone lands on your directory page, they scan with their eyes looking for something relevant. A URL like https://example.com/tools/analytics-dashboard tells them almost nothing, but a 300x200 thumbnail of the actual page tells them everything in about half a second.

There's also a trust factor. I've noticed that links with a visible preview just feel safer to click. You can see the page is real, has actual content, and isn't some parked domain full of ads. If your directory accepts user submissions, thumbnails alone are enough to cut down on "is this link even real?" complaints.

But honestly, the main reason is even simpler: thumbnails make your directory look like a product someone actually cares about, not a text file uploaded to shared hosting in 2009. If you're monetizing through ads or premium listings, this matters more than you'd think.

Why not self-hosted Puppeteer

The first thing that comes to mind is running a headless browser on your own server. Install Node.js, install Puppeteer, write a script that visits each URL and takes a screenshot. This actually works, and we've already covered it: there's a step-by-step guide for Node.js, for Python, and for PHP.

But the problems start with scaling and maintenance. Each Puppeteer instance eats 200-400 MB of RAM. If you're generating thumbnails for 50 links at the same time, that's 10-20 GB just for screenshots. Headless Chrome crashes silently when it runs out of memory. Pages with cookie consent banners (basically any EU-facing website) render with a giant popup covering the content. Sites behind Cloudflare anti-bot protection return challenge pages instead of the actual content. You end up spending more time maintaining the screenshot infrastructure than building your actual product.

If you have a DevOps team and predictable high volumes, self-hosting makes sense. For everyone else, it's overkill just to generate link thumbnails. So in this article, we'll take a different path and use a screenshot API.

Self-hosted vs API: quick comparison

|  | Self-hosted Puppeteer | Screenshot API |
| --- | --- | --- |
| Setup time | 2-4 hours | 10 minutes |
| Server requirements | 2+ GB RAM, Node.js, Chrome | None |
| Cookie banner handling | Manual (CSS injection) | Built-in parameter |
| Anti-bot protection (Cloudflare etc.) | Fails without proxies | Handled by the API |
| Cost at 1K thumbnails/month | $10-20/month (VPS) | $0 (free tier) to $9/month |
| Reliability | Medium (crashes, memory leaks) | High |

Step-by-step guide: adding thumbnails with ScreenshotRun API

Let's build this for real. The main tutorial is in PHP since most link directories run on PHP-based stacks (WordPress, Laravel, custom PHP). A Node.js version follows below.

Step 1: sign up and get your API key

Head to screenshotrun.com/register and create a free account. The free tier gives you 300 API requests, which is enough for testing and small directories.

After signing up, go to the API Keys section in your dashboard and copy your key. You'll need to pass it as a Bearer token with every request.

ScreenshotRun dashboard showing the API Keys section with a key displayed and a Create New Key button

Step 2: capture your first screenshot

Let's start with a simple cURL command to make sure everything works before writing any application code. Open your terminal and run:

curl -G "https://screenshotrun.com/api/v1/screenshots/capture" \
  --data-urlencode "url=https://github.com" \
  -d "width=1280" \
  -d "height=800" \
  -d "format=png" \
  -d "response_type=image" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  --output github-thumbnail.png

Here we're sending a GET request to the ScreenshotRun API with a few parameters: the target URL, the viewport size (1280x800 mimics a standard laptop screen), PNG format, and response_type=image which tells the API to return raw binary image data instead of a JSON response. The result gets saved to github-thumbnail.png in your current directory.

Open the file and you should see a clean screenshot of GitHub's homepage.

Terminal showing the executed cURL command and the resulting github-thumbnail.png file with a screenshot of GitHub's homepage

If you want a smaller image that loads faster on your directory page, adjust the width and height parameters. For thumbnails, 640x400 or even 480x300 works well:

curl -G "https://screenshotrun.com/api/v1/screenshots/capture" \
  --data-urlencode "url=https://github.com" \
  -d "width=640" \
  -d "height=400" \
  -d "format=png" \
  -d "response_type=image" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  --output github-thumb-small.png

Smaller 640x400 GitHub screenshot in VS Code with the cURL command visible in the terminal

Step 3: capture a screenshot from PHP

Now let's do the same thing from PHP code. Create a file called capture.php:

<?php

$apiKey = 'YOUR_API_KEY';
$targetUrl = 'https://laravel.com';

$params = http_build_query([
    'url'           => $targetUrl,
    'width'         => 1280,
    'height'        => 800,
    'format'        => 'png',
    'response_type' => 'image',
]);

$endpoint = 'https://screenshotrun.com/api/v1/screenshots/capture?' . $params;

$ch = curl_init($endpoint);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_TIMEOUT, 60);
curl_setopt($ch, CURLOPT_HTTPHEADER, [
    'Authorization: Bearer ' . $apiKey,
]);

$imageData = curl_exec($ch);
$httpCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);

if ($httpCode === 200 && $imageData) {
    file_put_contents('laravel-thumbnail.png', $imageData);
    echo "Saved: laravel-thumbnail.png (" . strlen($imageData) . " bytes)\n";
} else {
    echo "Error: HTTP $httpCode\n";
}

Run it from the terminal:

php capture.php

You'll see something like Saved: laravel-thumbnail.png (235425 bytes) in the terminal, and a file with a screenshot of laravel.com will appear in your project folder.

Result of running php capture.php: terminal showing the Saved output, VS Code file panel with laravel-thumbnail.png, and the Laravel homepage screenshot open on the right

Step 4: cache thumbnails to save API requests

Every API call uses one request from your quota. If your directory page re-fetches the same thumbnails on every page load, you'll burn through your free tier in a day. The fix is simple: save each thumbnail locally and check if it already exists before calling the API.

Here's a function that wraps the capture logic with file-based caching:

<?php

function getThumbnail(string $url, string $apiKey, string $cacheDir = './thumbnails'): ?string
{
    if (!is_dir($cacheDir)) {
        mkdir($cacheDir, 0755, true);
    }

    // Use a hash of the URL as the filename
    $filename = md5($url) . '.png';
    $filepath = $cacheDir . '/' . $filename;

    // Return cached version if it exists
    if (file_exists($filepath)) {
        return $filepath;
    }

    // Capture a fresh screenshot
    $params = http_build_query([
        'url'           => $url,
        'width'         => 1280,
        'height'        => 800,
        'format'        => 'png',
        'response_type' => 'image',
    ]);

    $endpoint = 'https://screenshotrun.com/api/v1/screenshots/capture?' . $params;

    $ch = curl_init($endpoint);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, 60);
    curl_setopt($ch, CURLOPT_HTTPHEADER, [
        'Authorization: Bearer ' . $apiKey,
    ]);

    $imageData = curl_exec($ch);
    $httpCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);

    if ($httpCode !== 200 || !$imageData) {
        return null;
    }

    file_put_contents($filepath, $imageData);
    return $filepath;
}

// Usage
$apiKey = 'YOUR_API_KEY';

$links = [
    'https://github.com',
    'https://laravel.com',
    'https://tailwindcss.com',
    'https://stackoverflow.com',
];

foreach ($links as $url) {
    $path = getThumbnail($url, $apiKey);
    if ($path) {
        echo "OK: $url -> $path\n";
    } else {
        echo "FAIL: $url\n";
    }
}

Run it once and the thumbnails will be captured and saved to the thumbnails/ folder with MD5 hashes as filenames; any site that can't be captured simply prints FAIL and will be retried on the next run, since failures aren't cached. Run it again and the cached thumbnails load instantly, zero API calls used.

The thumbnails folder with hash-named files, terminal showing OK for github.com, laravel.com, tailwindcss.com and FAIL for stackoverflow.com, with a cached GitHub screenshot open on the right

Step 5: putting it all together in one page

Now let's build a complete working example. Create a file called directory-demo.php that combines the getThumbnail function, a list of links, and the HTML output all in one page:

<?php

function getThumbnail(string $url, string $apiKey, string $cacheDir = './thumbnails'): ?string
{
    if (!is_dir($cacheDir)) {
        mkdir($cacheDir, 0755, true);
    }

    $filename = md5($url) . '.png';
    $filepath = $cacheDir . '/' . $filename;

    if (file_exists($filepath)) {
        return $filepath;
    }

    $params = http_build_query([
        'url'           => $url,
        'width'         => 1280,
        'height'        => 800,
        'format'        => 'png',
        'response_type' => 'image',
    ]);

    $endpoint = 'https://screenshotrun.com/api/v1/screenshots/capture?' . $params;

    $ch = curl_init($endpoint);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, 60);
    curl_setopt($ch, CURLOPT_HTTPHEADER, [
        'Authorization: Bearer ' . $apiKey,
    ]);

    $imageData = curl_exec($ch);
    $httpCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);

    if ($httpCode !== 200 || !$imageData) {
        return null;
    }

    file_put_contents($filepath, $imageData);
    return $filepath;
}

$apiKey = 'YOUR_API_KEY';

$links = [
    'https://github.com',
    'https://laravel.com',
    'https://tailwindcss.com',
];

?>
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Link Directory with Thumbnails</title>
    <style>
        * { margin: 0; padding: 0; box-sizing: border-box; }

        body {
            font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, sans-serif;
            background: #f8fafc;
            color: #1e293b;
            padding: 2rem;
        }

        h1 { font-size: 1.75rem; margin-bottom: 0.5rem; }
        .subtitle { color: #64748b; margin-bottom: 2rem; }

        .directory-grid {
            display: grid;
            grid-template-columns: repeat(auto-fill, minmax(280px, 1fr));
            gap: 1.5rem;
            padding: 2rem 0;
        }

        .directory-card {
            border: 1px solid #e2e8f0;
            border-radius: 8px;
            overflow: hidden;
            text-decoration: none;
            background: #fff;
            transition: box-shadow 0.2s;
        }

        .directory-card:hover {
            box-shadow: 0 4px 12px rgba(0, 0, 0, 0.1);
        }

        .directory-card img {
            width: 100%;
            height: auto;
            display: block;
        }

        .directory-card .placeholder {
            width: 100%;
            height: 200px;
            display: flex;
            align-items: center;
            justify-content: center;
            background: #f1f5f9;
            color: #94a3b8;
            font-size: 0.875rem;
        }

        .directory-url {
            display: block;
            padding: 0.75rem 1rem;
            color: #475569;
            font-size: 0.875rem;
        }
    </style>
</head>
<body>

<h1>Link Directory</h1>
<p class="subtitle">Website previews generated automatically via ScreenshotRun API</p>

<div class="directory-grid">
    <?php foreach ($links as $url): ?>
        <?php $thumb = getThumbnail($url, $apiKey); ?>
        <a href="<?= htmlspecialchars($url) ?>" class="directory-card" target="_blank">
            <?php if ($thumb): ?>
                <img
                    src="<?= htmlspecialchars($thumb) ?>"
                    alt="Preview of <?= htmlspecialchars($url) ?>"
                    loading="lazy"
                >
            <?php else: ?>
                <div class="placeholder">Preview unavailable</div>
            <?php endif; ?>
            <span class="directory-url"><?= htmlspecialchars(parse_url($url, PHP_URL_HOST)) ?></span>
        </a>
    <?php endforeach; ?>
</div>

</body>
</html>

Plug in your API key, drop the file into your project root, and open it in the browser. Here's what I got:

Browser showing the Link Directory page with three cards displaying screenshots of github.com, laravel.com, and tailwindcss.com in a responsive grid

This is a test example so you can see how everything works together. In a real project you wouldn't store a list of links in an array right in the file. You'd integrate the getThumbnail function into your existing architecture: pull URLs from a database, trigger thumbnail generation through a queue or background job, store file paths in your model. The capture and caching logic stays the same, only where the data comes from and where the result goes will be different.
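To make that concrete, here's a hedged sketch of what the database side might look like. It assumes a hypothetical links table with id, url, and thumbnail_path columns, and takes the capture function as a callable so you can plug in getThumbnail (or a mock in tests). Adapt the queries to your actual schema:

```php
<?php
// Sketch only: assumes a hypothetical `links` table with columns
// (id, url, thumbnail_path). $getThumbnail is any callable that takes
// a URL and returns a local path or null (e.g. getThumbnail above).
function updateMissingThumbnails(PDO $db, callable $getThumbnail): int
{
    $updated = 0;

    // Find links that don't have a thumbnail yet
    $rows = $db->query("SELECT id, url FROM links WHERE thumbnail_path IS NULL")
               ->fetchAll(PDO::FETCH_ASSOC);

    $update = $db->prepare("UPDATE links SET thumbnail_path = ? WHERE id = ?");

    foreach ($rows as $row) {
        $path = $getThumbnail($row['url']);
        if ($path !== null) {
            $update->execute([$path, $row['id']]);
            $updated++;
        }
    }

    return $updated;
}
```

Run a function like this from a scheduled command or queue worker rather than on page load, so a slow capture never blocks a visitor.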

Node.js version

If your directory runs on Node.js (Express, Next.js, or anything else), here's the equivalent code using fetch and the filesystem module:

import { writeFileSync, existsSync, mkdirSync } from 'fs';
import { createHash } from 'crypto';

const API_KEY = 'YOUR_API_KEY';
const CACHE_DIR = './thumbnails';

async function getThumbnail(url) {
  if (!existsSync(CACHE_DIR)) {
    mkdirSync(CACHE_DIR, { recursive: true });
  }

  const hash = createHash('md5').update(url).digest('hex');
  const filepath = `${CACHE_DIR}/${hash}.png`;

  if (existsSync(filepath)) {
    return filepath;
  }

  const params = new URLSearchParams({
    url,
    width: '1280',
    height: '800',
    format: 'png',
    response_type: 'image',
  });

  const response = await fetch(
    `https://screenshotrun.com/api/v1/screenshots/capture?${params}`,
    {
      headers: { Authorization: `Bearer ${API_KEY}` },
      signal: AbortSignal.timeout(60000),
    }
  );

  if (!response.ok) {
    console.error(`Failed: ${url} (HTTP ${response.status})`);
    return null;
  }

  const buffer = Buffer.from(await response.arrayBuffer());
  writeFileSync(filepath, buffer);
  return filepath;
}

// Capture thumbnails for a list of URLs
const links = [
  'https://github.com',
  'https://laravel.com',
  'https://tailwindcss.com',
  'https://stackoverflow.com',
];

for (const url of links) {
  const path = await getThumbnail(url);
  console.log(path ? `OK: ${url} -> ${path}` : `FAIL: ${url}`);
}

Save this as capture.mjs (the .mjs extension enables ES module imports) and run it with node capture.mjs. You need Node.js 18+ for the built-in fetch and AbortSignal.timeout.

Common problems and how to handle them

Once you get the basic version working, you'll run into a few edge cases. Here's how to deal with them:

  1. Cookie banners blocking the page. A lot of European websites show a full-screen GDPR popup that covers the actual content. ScreenshotRun supports a hide_selectors parameter that removes specific CSS elements before capturing. For most cookie banners, hide_selectors=.cookie-banner,.consent-popup,#onetrust-banner-sdk covers the common frameworks. You can also use click_selector to click an "Accept" button before the capture.

  2. Slow-loading pages. Some sites take a few seconds to render JavaScript content (SPAs, dashboards, pages with heavy animations). Add a delay=3000 parameter (value in milliseconds) to wait before capturing. By default the screenshot is taken right after the page fires its load event, which is too early for client-side rendered apps.

  3. Broken or parked domains. Not every URL in a user-submitted directory points to a real website. The API will return an error or a screenshot of an error page. Check the HTTP status code in your code and fall back to a placeholder image when the capture fails. A simple gray card with a "Preview unavailable" label works fine.

  4. Thumbnail freshness. Website designs change over time. A thumbnail from six months ago might look nothing like the current site. Set up a simple cron job or scheduled task that deletes cached files older than 30 days (or 7 days for directories where freshness matters). The getThumbnail function above will automatically re-capture them on the next page load.

  5. Rate limiting and large directories. If your directory has 500+ links and you need to capture all of them at once, don't fire 500 concurrent API requests. Process them in batches of 5-10 with a short delay between batches. A queue-based approach (Laravel Horizon, BullMQ, or even a simple sleep(1) between batches) keeps things stable.
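The batching advice in point 5 is easy to wrap in a helper. This is a minimal sketch: array_chunk splits the URL list into batches, and the capture logic is passed in as a callable so it works with getThumbnail from earlier or anything else:

```php
<?php
// Sketch: capture thumbnails in small batches with a pause between
// batches, so you never fire hundreds of concurrent API requests.
// $capture is any callable that takes a URL and returns a path or null.
function captureInBatches(array $urls, callable $capture, int $batchSize = 5, int $pauseSeconds = 1): array
{
    $results = [];

    foreach (array_chunk($urls, $batchSize) as $i => $batch) {
        if ($i > 0) {
            sleep($pauseSeconds); // breathe between batches
        }
        foreach ($batch as $url) {
            $results[$url] = $capture($url);
        }
    }

    return $results;
}
```

For anything beyond a few hundred links, a real queue (Laravel Horizon, BullMQ) with per-job retries is the sturdier version of the same idea.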

When to refresh your thumbnails

There's no single right answer here. It depends on how often the sites in your directory change and how much you care about accuracy.

For most directories, re-capturing thumbnails once a month is enough. Websites don't redesign their homepages every week. A monthly cron job that clears the cache folder and lets thumbnails regenerate on demand keeps things fresh without burning through your API quota.
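That monthly cleanup can be a few lines of PHP run from cron. This sketch deletes cached thumbnails older than a cutoff so they regenerate on demand; the hypothetical script name and the 30-day default are just examples:

```php
<?php
// Sketch: prune-cache.php — delete cached thumbnails older than
// $maxAgeDays so they get re-captured on the next page load.
function pruneThumbnailCache(string $cacheDir = './thumbnails', int $maxAgeDays = 30): int
{
    $deleted = 0;
    $cutoff = time() - $maxAgeDays * 86400;

    foreach (glob($cacheDir . '/*.png') as $file) {
        if (filemtime($file) < $cutoff) {
            unlink($file);
            $deleted++;
        }
    }

    return $deleted; // how many stale thumbnails were removed
}
```

A crontab entry like `0 3 1 * * php /path/to/prune-cache.php` would run it at 3 a.m. on the first of each month.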

If you run a directory where listings change frequently (job boards, real estate, e-commerce aggregators), weekly or even daily recapture might make sense. In that case, a background job that processes a batch of URLs each night is better than regenerating everything on page load.

Another option is to trigger a recapture when a user updates their listing. If someone submits a new URL or edits an existing one, delete the cached thumbnail for that URL. The next visitor to the directory page will trigger a fresh capture automatically.
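Because the cache uses MD5 hashes of the URL as filenames, that invalidation step is tiny. A sketch, assuming the same naming scheme as getThumbnail above:

```php
<?php
// Sketch: drop the cached thumbnail for one URL (same md5-based
// filename as getThumbnail), so the next visitor triggers a fresh capture.
function invalidateThumbnail(string $url, string $cacheDir = './thumbnails'): bool
{
    $filepath = $cacheDir . '/' . md5($url) . '.png';

    if (file_exists($filepath)) {
        return unlink($filepath);
    }

    return false; // nothing was cached for this URL
}
```

Call it from whatever handles listing edits, right after the new URL is saved.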

Wrapping up

That's it. Sign up, add the getThumbnail function, wire up the grid. About 20 minutes of work, and your directory stops looking like a homework assignment from 2004.

The code from this tutorial works as-is. Copy whichever version fits your stack, plug in your API key, deploy. The ScreenshotRun docs cover things I didn't get into here: full-page screenshots, mobile viewports, custom CSS injection, and a few others worth looking at.

Want to try it before writing any code? The playground lets you test different URLs and settings right in your browser. Free tier is 300 requests, no credit card needed.
