How to Configure Image Processing for WordPress on Shared Hosting
Tune StaticQ Media's queue-based image processing for shared hosting — conservative batch sizes, Cloudflare Image Resizing, and Cloud Only mode to stay within your host's limits.
Shared hosting has tight CPU limits, low memory ceilings, and aggressive process killers. Most WordPress image optimization plugins hit those limits hard — they try to resize and compress images during the upload request, your server runs out of execution time, and the process dies mid-conversion. You end up with half-processed images and a media library full of errors.
StaticQ Media was designed for exactly this scenario. It never processes images during the upload request itself. Instead, it queues them and processes in small batches via WordPress cron — separate, short-lived requests that fit comfortably within shared hosting limits. But you’ll want to tune the settings for your specific host.
Why Shared Hosting Is a Challenge for Image Processing
Image resizing and WebP conversion are CPU-intensive operations. Resizing a single 4000x3000 JPEG to five thumbnail sizes and generating WebP variants for each can take 2-5 seconds of CPU time and 128-256 MB of memory. Multiply that by a batch of images and you quickly hit the limits that shared hosting imposes:
- max_execution_time: typically 30-60 seconds on shared hosts. Some enforce 15 seconds.
- memory_limit: often 128 MB or 256 MB. Image processing with GD or Imagick can spike well above this for large originals.
- CPU throttling: many shared hosts throttle or kill processes that consume too much CPU for too long. Your cron job might get terminated mid-batch with no error message.
- Concurrent process limits: some hosts limit the number of PHP processes you can run simultaneously. A long-running image processing task blocks one of those slots.
Plugins that process images synchronously during the upload request are fighting all four of these limits at once. The upload request needs to resize, compress, convert, and possibly upload to a CDN — all before the server kills it.
How StaticQ’s Queue Avoids the Problem
StaticQ separates the upload from the processing entirely:
1. When you upload an image, StaticQ creates a database record and adds the image to the processing queue. The upload request completes immediately — no resizing, no WebP conversion, no cloud upload during the HTTP request.
2. WordPress cron fires on the next page load (or via a real cron job if configured). StaticQ picks up a small batch of queued images and processes them one at a time.
3. Each cron tick has its own execution time and memory allocation. If you set the batch size to 3 images and the max time to 20 seconds, StaticQ processes up to 3 images (or stops after 20 seconds, whichever comes first) and then exits cleanly. The remaining images wait for the next tick.
This means the upload request is fast, the processing runs in controlled bursts, and no single request exceeds your host’s limits. Even if a cron tick gets killed, the unfinished images stay in the queue and get picked up next time.
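The batch-and-budget pattern described above can be sketched as follows. This is an illustrative sketch only — the hook name and the `sq_get_queued_ids()` / `sq_process_one()` helpers are hypothetical stand-ins, not StaticQ's actual internals:

```php
<?php
// Illustrative sketch of the queue pattern. The hook and helper
// names here are hypothetical, not StaticQ's real internals.
add_action('sq_process_queue', function () {
    $batch_size = defined('SQ_CRON_BATCH_SIZE') ? SQ_CRON_BATCH_SIZE : 3;
    $max_secs   = defined('SQ_CRON_MAX_SECONDS') ? SQ_CRON_MAX_SECONDS : 20;
    $started    = time();

    // Pull at most $batch_size queued attachment IDs (hypothetical helper).
    $queued = sq_get_queued_ids($batch_size);

    foreach ($queued as $attachment_id) {
        // Stop early once the time budget is spent; unfinished
        // images simply stay in the queue for the next tick.
        if (time() - $started >= $max_secs) {
            break;
        }
        sq_process_one($attachment_id); // resize, convert, upload to R2
    }
});
```

The key property is that the handler exits cleanly on whichever limit it hits first, so a killed or timed-out tick never corrupts the queue.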
Step 1: Install StaticQ Media and Configure R2
If you haven’t already set up StaticQ with your Cloudflare R2 bucket, follow the full R2 setup guide. The short version: install the plugin, create an R2 bucket in your Cloudflare dashboard, enter the API credentials in StaticQ > Settings, and test the connection.
Screenshot: StaticQ settings with R2 connection test passing
Step 2: Set Conservative Batch Settings
Open your wp-config.php file (via your host’s file manager or FTP) and add these constants before the “That’s all, stop editing!” line:
```php
// Process 3 images per cron tick (conservative for shared hosting)
define('SQ_CRON_BATCH_SIZE', 3);

// Stop processing after 20 seconds (leaves headroom for the host's time limit)
define('SQ_CRON_MAX_SECONDS', 20);

// Scan 50 attachments per batch during Media Library Scanner runs
define('SQ_SCAN_BATCH_SIZE', 50);
```
Why these numbers:
- SQ_CRON_BATCH_SIZE = 3: processing 3 images at a time keeps CPU usage brief. Each image generates multiple thumbnails and WebP variants, so 3 images might mean 15-30 file operations. That’s enough to make steady progress without triggering CPU throttling.
- SQ_CRON_MAX_SECONDS = 20: most shared hosts allow 30 seconds of execution time. Setting the limit to 20 gives StaticQ 20 seconds to process and leaves 10 seconds of headroom for WordPress overhead. If your host allows 60 seconds, you can raise this to 40.
- SQ_SCAN_BATCH_SIZE = 50: the Media Library Scanner checks attachments in batches via AJAX. 50 per batch is gentle on memory while still completing scans at a reasonable pace.
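If your host provides WP-CLI (many shared hosts do — this is an assumption about your environment), you can confirm the constants actually loaded after editing wp-config.php:

```shell
# Print the values WordPress sees; requires WP-CLI on the host.
wp eval 'var_dump(SQ_CRON_BATCH_SIZE, SQ_CRON_MAX_SECONDS, SQ_SCAN_BATCH_SIZE);'
```

If this errors with an undefined-constant notice, the defines were likely added after the “That’s all, stop editing!” line, where WordPress never reads them.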
Screenshot: wp-config.php with StaticQ batch constants added
Step 3: Consider Cloudflare Image Resizing
Here’s where shared hosting gets a significant advantage. If your domain is proxied through Cloudflare and you have access to Image Resizing (available on Cloudflare Pro plans and above), you can offload all resize and WebP conversion work to Cloudflare’s edge network.
With Cloudflare Image Resizing enabled in StaticQ’s settings, your server does none of the heavy lifting:
- No local resize operations — Cloudflare handles all thumbnail generation at the edge
- No local WebP conversion — Cloudflare converts formats on the fly
- Minimal memory usage — your server only handles the original file upload and the API call to R2
This effectively eliminates the CPU and memory problem entirely. Your shared hosting account handles the lightweight tasks (database records, R2 uploads), and Cloudflare’s infrastructure handles the compute-intensive tasks (resize, format conversion).
If your host’s PHP installation doesn’t have GD with WebP support — which is common on older shared hosting setups — Cloudflare Image Resizing also solves that problem.
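For context, Cloudflare's edge resizing works through its documented `/cdn-cgi/image/` transform-URL prefix. How StaticQ builds its URLs internally is not documented here, but a minimal sketch of the URL shape looks like this:

```php
<?php
// Build a Cloudflare transform URL from an original image URL.
// The /cdn-cgi/image/ prefix and its options are Cloudflare's
// documented transform-URL format; this helper itself is only an
// illustration, not StaticQ code.
function cf_resize_url(string $original, int $width, int $quality = 80): string {
    $parts = parse_url($original);
    $base  = $parts['scheme'] . '://' . $parts['host'];
    $opts  = "width={$width},quality={$quality},format=auto";
    return "{$base}/cdn-cgi/image/{$opts}" . $parts['path'];
}

// cf_resize_url('https://example.com/wp-content/uploads/photo.jpg', 300)
// → https://example.com/cdn-cgi/image/width=300,quality=80,format=auto/wp-content/uploads/photo.jpg
```

`format=auto` lets Cloudflare pick WebP (or AVIF, where supported) based on the visitor's browser, which is what makes local WebP conversion unnecessary.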
Screenshot: Cloudflare Image Resizing toggle in StaticQ settings
Step 4: Choose Cloud Only Storage Mode
In StaticQ > Settings, set the storage mode to Cloud Only. This tells StaticQ to delete local copies of processed images after they’ve been successfully uploaded to R2.
On shared hosting, disk space is often as limited as CPU. Cloud Only mode frees up local storage — your wp-content/uploads/ directory stays lean because processed files live in R2 only. This also makes your backups smaller and faster.
The original uploaded file is kept locally until R2 upload confirms. Only then is the local copy removed. If the R2 upload fails for any reason, the local file stays put.
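The delete-after-confirmation behavior can be sketched as a simple guard. The helper names below are hypothetical, but the pattern — never remove the local file until the remote copy is verified — is the safety property the text describes:

```php
<?php
// Sketch of the Cloud Only safety pattern (hypothetical helpers).
$ok = sq_upload_to_r2($local_path, $r2_key); // true on a successful upload
if ($ok && sq_r2_object_exists($r2_key)) {
    unlink($local_path);            // safe: the object is confirmed in R2
} else {
    sq_requeue($attachment_id);     // keep the local file, retry later
}
```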
Screenshot: Storage mode set to Cloud Only in settings
Step 5: Monitor Processing Progress
Go to StaticQ > Media Manager to monitor your queue. You’ll see:
- How many images are queued for processing
- How many have been processed
- The current processing status
On shared hosting with a batch size of 3, a library of 500 images takes about 167 cron ticks to process. WordPress cron has no fixed schedule by default — it fires when someone visits your site — but if ticks run roughly once a minute, that’s just under 3 hours of background processing. You don’t need to keep the browser open; cron runs on its own whenever anyone visits your site.
If you want processing to happen faster, consider setting up a real system cron to fire WordPress cron more frequently. Most shared hosts let you add a cron job in cPanel:
```shell
*/1 * * * * wget -q -O /dev/null https://yourdomain.com/wp-cron.php
```
This fires WordPress cron every minute instead of waiting for a visitor to trigger it.
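Once a real system cron is calling wp-cron.php on a schedule, you can also stop WordPress from triggering cron on every page load. This uses the standard WordPress constant for that purpose, added to wp-config.php alongside the StaticQ constants:

```php
// In wp-config.php: disable page-load cron triggering once a real
// system cron is hitting wp-cron.php on a schedule.
define('DISABLE_WP_CRON', true);
```

This avoids duplicate cron spawns on busy pages and makes processing cadence predictable on low-traffic sites.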
Screenshot: Media Manager showing queue progress on shared hosting
Troubleshooting Common Issues
Cron seems to stall — images aren’t processing: WordPress cron only fires when someone visits your site. If your site has low traffic, cron might fire infrequently. Set up a real system cron (above) or install a plugin like WP Crontrol to monitor and manually trigger cron events.
Images fail to process — timeout errors:
Your batch size is too high for your host’s execution time limit. Reduce SQ_CRON_BATCH_SIZE to 2 or even 1. Also check that SQ_CRON_MAX_SECONDS is well below your host’s max_execution_time value.
Out of memory errors during processing:
Large original images (4000+ pixels wide) consume significant memory during resize operations. Two options: reduce SQ_CRON_BATCH_SIZE to 1 so only one image is in memory at a time, or enable Cloudflare Image Resizing to offload the resize work entirely.
WebP conversion fails:
Your server’s GD library may not support WebP. Check by creating a temporary file containing `<?php phpinfo(); ?>`, loading it in your browser, and deleting it afterwards. If “WebP Support” isn’t listed under the GD section, enable Cloudflare Image Resizing instead — it handles WebP at the edge regardless of your server’s capabilities.
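A quicker check than reading a full phpinfo() page is to query the image libraries directly. `gd_info()` and `Imagick::queryFormats()` are standard PHP APIs; run this via WP-CLI (`wp eval-file check-webp.php`) or a temporary file you delete afterwards:

```php
<?php
// Report WebP capability for both image backends WordPress can use.
$gd = function_exists('gd_info') ? gd_info() : [];
echo !empty($gd['WebP Support'])
    ? "GD: WebP supported\n"
    : "GD: no WebP support\n";

echo (class_exists('Imagick') && in_array('WEBP', Imagick::queryFormats('WEBP'), true))
    ? "Imagick: WebP supported\n"
    : "Imagick: no WebP support\n";
```

If both report no support, Cloudflare Image Resizing is the practical path on shared hosting, since you can’t usually recompile PHP extensions there.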
Batch Size Guidelines by Hosting Tier
These are starting points — your specific host may differ:
| Hosting Type | SQ_CRON_BATCH_SIZE | SQ_CRON_MAX_SECONDS | Notes |
|---|---|---|---|
| Budget shared (Bluehost, HostGator basic) | 1-2 | 15 | Very tight limits; consider CF Image Resizing |
| Mid-tier shared (SiteGround, A2 Hosting) | 3-5 | 20-25 | Good balance of speed and safety |
| Managed WordPress (Cloudways, Kinsta) | 5-8 | 30-40 | More headroom; test and increase gradually |
| VPS / Dedicated | 8-15 | 45-60 | Limited mainly by how fast you want it done |
Start with the conservative end for your tier. If processing completes without errors, increase the batch size by 1-2 and test again. There’s no penalty for processing slowly — the queue is persistent and picks up where it left off.
The Shared Hosting Advantage
Here’s the counterintuitive part: StaticQ on shared hosting with Cloudflare Image Resizing can actually deliver a better end result than a self-hosted resize pipeline on a VPS. Your server does almost nothing — it queues the image and uploads the original to R2. Cloudflare’s edge network handles resize, format conversion, and delivery. The visitor gets images served from the nearest Cloudflare data center, not from your shared hosting server in a single location.
The processing takes longer — maybe a few hours instead of 30 minutes for a large library. But the end state is identical: every image resized, converted to WebP, stored in R2, and served from the edge. The queue just gets there at its own pace.