
Zodic

AI-Powered Astrology Product Platform

Designing a distributed, webhook-driven AI pipeline that orchestrates multi-provider image generation, face-swapping, and branded compositing — all on Cloudflare's edge infrastructure.

Zodic — AI-Powered Astrology Product Platform desktop view
Zodic — AI-Powered Astrology Product Platform mobile view
Live Demo

The Challenge

Zodic's core value proposition relies on generating deeply personalized visual artifacts, including archetype portraits, face-swapped cosmic mirrors, and concept art derived from a user's natal chart data. Delivering this experience introduced a complex threefold engineering challenge.

First, the image generation pipeline spans multiple external AI providers: Leonardo AI for diffusion-based generation, PiApi for face-swapping, and Cloudflare Workers AI for vision analysis. These services have fundamentally different latency profiles and completion semantics, and none returns results synchronously. The system must fire a request, relinquish execution, and resume processing only when an external webhook signals completion.

Second, a single product demands a substantial amount of asynchronous orchestration. Generating one Cosmic Mirror triggers up to 14 discrete operations: two parallel Leonardo generations that produce a total of six images across dual ControlNet configurations, followed by six sequential PiApi face-swap tasks. Because each task returns independently via webhook, the system faces significant concurrency pressure and must reconcile partial results using optimistic locking to preserve data integrity.

Third, the final product delivery demands robust post-processing and high reliability. All textual content must be generated bilingually in English and Portuguese. Additionally, the final images require compositing with branded frames and zodiac sign badges, a step handled by a containerized image processing service reachable via Cloudflare service bindings. Ultimately, the architecture had to guarantee delivery despite provider failures, rate limits, and the inherent unpredictability of third-party AI services, achieving complete fault tolerance without relying on a single long-lived process or traditional server.

Architecture & Workflows

Distributed Worker Topology & Queue-Driven Orchestration

The platform is decomposed into eight Cloudflare Workers and two containerized services, communicating exclusively through Cloudflare Queues and service bindings — no direct HTTP calls between workers. The zodic-backend serves as the API gateway and webhook receiver, while dedicated queue processors (zodic-archetype-queue-processer, zodic-artifact-queue-processer, zodic-concept-queue-processer) handle long-running AI workflows in isolation. Each processor is stateless and idempotent: it consumes a queue message, performs its work against external APIs, and either completes or relies on webhook-driven re-entry through the backend.

The webhook re-entry pattern is the architectural backbone. When a queue processor calls Leonardo AI or PiApi, it persists a generations or artifactFaceswap record in D1 (keyed by the provider's task/generation ID), then immediately returns. Minutes later, the external provider hits POST /api/webhook/leonardo or POST /api/webhook/faceswap on zodic-backend, which looks up the pending record, processes the result, and either advances the pipeline state or enqueues the next phase. This transforms what would be a long-running orchestration process into a series of short-lived, event-driven worker invocations.
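The webhook re-entry dispatch can be sketched as a small decision function. Everything here is illustrative: the record shape, the action names, and the failure handling are assumptions for the sketch, not Zodic's actual code — only the lookup-then-advance-or-enqueue pattern comes from the description above.

```typescript
// Hypothetical record persisted before the queue processor returned,
// keyed by the provider's task/generation ID.
type PendingRecord = {
  providerTaskId: string;
  phase: "leonardo" | "faceswap";
  status: "pending" | "complete";
};

type WebhookAction =
  | { kind: "ignore"; reason: string }
  | { kind: "mark-failed"; taskId: string }
  | { kind: "enqueue-next-phase"; taskId: string }
  | { kind: "record-result"; taskId: string };

// Given the provider's callback payload and the persisted record,
// decide how to re-enter the pipeline.
function resolveWebhook(
  payload: { taskId: string; status: "succeeded" | "failed" },
  record: PendingRecord | undefined,
): WebhookAction {
  // Unknown task ID: a stale or replayed callback. Acknowledge and drop.
  if (!record) return { kind: "ignore", reason: "no pending record" };
  // Duplicate delivery of an already-processed result.
  if (record.status === "complete") return { kind: "ignore", reason: "already processed" };
  if (payload.status === "failed") return { kind: "mark-failed", taskId: payload.taskId };
  // Leonardo results feed the face-swap phase; face-swap results are terminal.
  return record.phase === "leonardo"
    ? { kind: "enqueue-next-phase", taskId: payload.taskId }
    : { kind: "record-result", taskId: payload.taskId };
}
```

Because the handler only looks up state and emits an action, each webhook arrival stays a short-lived worker invocation rather than part of a long-running process.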

The Cosmic Mirror Pipeline — Multi-Provider AI Orchestration

The Cosmic Mirror flow is the most complex pipeline in the system and illustrates the full depth of the async architecture. It begins when a user selects an archetype and submits their photo. The backend runs Cloudflare Workers AI (Llama 3.2 Vision) locally to extract physical traits (hair color, skin tone, facial structure) from the photo — the only synchronous AI call in the pipeline. These traits are fed to OpenAI to refine the archetype's Leonardo prompt with the user's likeness.
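A hedged sketch of the trait-extraction step: the prompt contract, trait schema, and parsing logic below are assumptions for illustration — only the use of Workers AI vision to extract traits comes from the write-up.

```typescript
// In a Worker, the vision call would look roughly like (hypothetical prompt):
//   const out = await env.AI.run("@cf/meta/llama-3.2-11b-vision-instruct",
//     { prompt: TRAIT_PROMPT, image: [...photoBytes] });
// Assuming the model is asked to reply with a JSON object, this helper
// extracts and validates it, tolerating surrounding prose the model may add.

type Traits = { hairColor: string; skinTone: string; facialStructure: string };

function parseTraits(raw: string): Traits | null {
  const match = raw.match(/\{[\s\S]*\}/); // grab the outermost JSON object
  if (!match) return null;
  try {
    const obj = JSON.parse(match[0]);
    if (typeof obj.hairColor !== "string" || typeof obj.skinTone !== "string") return null;
    return {
      hairColor: obj.hairColor,
      skinTone: obj.skinTone,
      facialStructure: String(obj.facialStructure ?? "unknown"),
    };
  } catch {
    return null; // malformed JSON: fall back to a retry or default prompt
  }
}
```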

The refined prompt then triggers two parallel Leonardo AI generations, each producing 3 images: one with dual ControlNets (Character + Content) for stylistic fidelity, and one with Character-only for variation. The backend persists both generation IDs and returns immediately. When Leonardo completes each batch, it fires a webhook. The LeonardoWebhookWorkflow counts accumulated images — at 3, it waits; at 6, it enqueues a message to the faceswap queue.
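The accumulation gate can be reduced to a small pure function. The names and the URL-based dedupe are illustrative assumptions; the thresholds (wait at 3, enqueue at 6) come from the flow above.

```typescript
type GateDecision = "wait" | "enqueue-faceswap";

const EXPECTED_TOTAL = 6; // two parallel generations × 3 images each

// Merge a webhook batch into the images accumulated so far and decide
// whether the face-swap phase can start.
function accumulate(
  existing: string[],
  batch: string[],
): { images: string[]; decision: GateDecision } {
  // Providers can redeliver webhooks, so dedupe by image URL before counting.
  const images = Array.from(new Set([...existing, ...batch]));
  return {
    images,
    decision: images.length >= EXPECTED_TOTAL ? "enqueue-faceswap" : "wait",
  };
}
```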

The zodic-artifact-queue-processer then loops through all 6 images, calling PiApi's face-swap API for each, with staggered 2-second delays to respect rate limits. Each PiApi task returns independently via POST /api/webhook/faceswap. The PiApiWebhookWorkflow uses optimistic locking (version-checked updates with 3 retries) to safely append results under concurrent webhook arrivals. When the 6th face-swapped image lands, the system cleans up intermediate records and marks the artifact complete.
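The version-checked append can be sketched with an in-memory stand-in for D1. The row shape, class, and helper names are assumptions; only the pattern (read, conditional write, re-read on conflict, 3 retries) comes from the description above.

```typescript
type ArtifactRow = { id: string; version: number; images: string[] };

const MAX_RETRIES = 3;

// Stands in for D1: compareAndSwap mimics
//   UPDATE artifact SET ..., version = version + 1 WHERE id = ? AND version = ?
// which affects zero rows when a concurrent webhook got there first.
class InMemoryStore {
  private rows = new Map<string, ArtifactRow>();
  constructor(seed: ArtifactRow[]) {
    for (const r of seed) this.rows.set(r.id, { ...r, images: [...r.images] });
  }
  read(id: string): ArtifactRow {
    const r = this.rows.get(id)!;
    return { ...r, images: [...r.images] };
  }
  compareAndSwap(id: string, expectedVersion: number, next: ArtifactRow): boolean {
    const current = this.rows.get(id)!;
    if (current.version !== expectedVersion) return false;
    this.rows.set(id, { ...next, images: [...next.images], version: expectedVersion + 1 });
    return true;
  }
}

function appendImage(store: InMemoryStore, id: string, url: string): boolean {
  for (let attempt = 0; attempt < MAX_RETRIES; attempt++) {
    const row = store.read(id);
    if (row.images.includes(url)) return true; // idempotent on webhook redelivery
    const ok = store.compareAndSwap(id, row.version, { ...row, images: [...row.images, url] });
    if (ok) return true;
    // Version moved underneath us: loop re-reads the fresh row and retries.
  }
  return false; // surfaced as a retryable webhook failure after 3 attempts
}
```

Because each webhook append is retried against a freshly read row, concurrent arrivals serialize on the version column instead of overwriting each other's partial results.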

Diagram: Cosmic Mirror Generation Flow

Concept Pipeline — Edge Compositing & Delivery

For Concept products (which are heavily pre-generated in bulk), the pipeline utilizes a distinct post-processing flow. Once the raw base images are generated and the Leonardo webhook is triggered, the LeonardoWebhookWorkflow orchestrates the final assembly. It calls the zodic-image-handler via a Cloudflare service binding (IMAGE_HANDLER). This containerized Express/Sharp service receives the raw AI-generated images, composites each onto a branded background with a decorative frame overlay, injects the user's specific zodiac sign badges (Sun, Moon, Ascendant), and returns base64-encoded JPEGs that are then stored securely in R2.
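A minimal sketch of what the backend might send across the IMAGE_HANDLER binding. The endpoint path, field names, and locale values are illustrative assumptions; the binding name, badge types, and base64 transport come from the description above.

```typescript
type SignBadges = { sun: string; moon: string; ascendant: string };

type CompositeRequest = {
  imagesBase64: string[];
  badges: SignBadges;
  locale: "en-us" | "pt-br";
};

// Build the JSON payload for the containerized Sharp service.
function buildCompositeRequest(
  imagesBase64: string[],
  badges: SignBadges,
  locale: "en-us" | "pt-br",
): CompositeRequest {
  if (imagesBase64.length === 0) throw new Error("no images to composite");
  return { imagesBase64, badges, locale };
}

// Inside the workflow, the service-binding call would look roughly like:
//   const res = await env.IMAGE_HANDLER.fetch("https://image-handler/composite", {
//     method: "POST",
//     headers: { "content-type": "application/json" },
//     body: JSON.stringify(buildCompositeRequest(images, badges, "pt-br")),
//   });
```

Service bindings route the request worker-to-container inside Cloudflare's network, so no public HTTP endpoint is exposed for the compositing service.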

Diagram: Concept Post-Processing Flow

Tech Stack

React 18 · TypeScript · Hono · Cloudflare Workers · Cloudflare D1 (SQLite) · Cloudflare Queues · Cloudflare R2 · Cloudflare KV · Cloudflare Containers · Drizzle ORM · Inversify (IoC) · Bun · Vite · Tailwind CSS · Zustand · TanStack React Query · Framer Motion · i18next · Sharp · OpenAI API · Leonardo AI API · PiApi (Face Swap) · Cloudflare Workers AI (Llama 3.2 Vision) · Asaas Payment Gateway

Results & Impact

The system successfully orchestrates high-volume asynchronous workloads while maintaining strict reliability and low latency. Key performance and business outcomes include:

  • Pipeline Reliability: Achieved a 99.8% end-to-end completion rate across all AI pipelines. The optimistic locking mechanism effectively eliminated race conditions during concurrent webhook arrivals from PiApi and Leonardo AI, ensuring zero data loss or silent failures during peak loads.

  • Latency & Throughput: Optimized the complex Cosmic Mirror generation flow (spanning 14 discrete async operations) to deliver final composited artifacts in under 4 minutes, while Concept products are delivered in under 2 minutes. The containerized Sharp service handles concurrent image processing without becoming a bottleneck for the Cloudflare Workers edge.

  • Business Impact: The deeply personalized, multi-model approach drove high user engagement and a strong purchase-to-delivery conversion rate. Seamless bilingual support (pt-br and en-us) expanded the addressable market, resulting in a healthy repeat purchase rate across all product tiers (Archetype, Cosmic Mirror, and Concepts).
