Build an AI-Powered Photo App with Figma Make + Raindrop
Build an AI-powered photo application using two AI development tools: Figma Make for rapid frontend development and Raindrop for backend infrastructure. This tutorial shows you how to create a photo sharing app where users can upload pictures and search them using natural language powered by SmartBuckets.
This approach lets you design visually, generate production-ready code, and deploy a complete backend.
The Development Flow
This tutorial follows a five-phase approach:
- Visual Design: Create the AI-powered photo app interface in Figma Make
- Frontend Deployment: Deploy your application directly from Figma Make
- API Specification: Generate OpenAPI spec from your design
- Backend Implementation: Use Raindrop MCP to build the smart backend with SmartBuckets
- Integration: Connect deployed frontend to backend and test photo upload and search
Part 1: Visual Design with Figma Make
We’ll start by building the photo app’s frontend interface in Figma Make.
Step 1.1: Create Your Figma Make Project
- Go to Figma Make
- In the chat interface, paste this detailed prompt to design your app:
AI Photo App Layout
Header Section
- App title "AI Photos" with a camera/photo icon
- Smart search bar with placeholder "Search photos: 'kids at beach', 'birthday party', 'Christmas 2023'..."
- User avatar with photo stats (total photos, recent uploads)
- Upload button (prominent, easy to find)
Main Content Area (Dual Layout)
Photo Library Panel (Left/Main Area)
- Photo grid/masonry layout with:
  * Thumbnail images with hover effects
  * Upload date overlay
  * Selection checkboxes for batch operations
  * Lightbox view on click
- Upload zone (drag & drop area) when empty or at top
- Sorting options (date, name, relevance)
- View toggle (grid, list, timeline)
Smart Search & Filters (Right Sidebar)- Natural language search input with examples: * "Show me photos from last summer" * "Find pictures with grandma" * "Christmas photos from 2022" * "Kids playing in the park"- Search results with highlighting and relevance scores- Quick filters: * Date ranges (This week, Last month, This year) * People detected in photos * Locations/events * Photo types (portraits, landscapes, group photos)- Search history dropdown- Recent searches suggestions
Components Needed
- Photo card/thumbnail component
- Lightbox/modal for full-size viewing
- Upload component with drag & drop
- Search result cards with snippets
- Filter chips and date pickers
- Loading states for uploads and searches
- Empty states for no photos/no results
- Progress indicators for upload
Design Style
- Clean, photo-focused interface with white/light backgrounds
- Consistent color scheme (primary: warm blue, accent: soft green)
- Responsive design optimized for photo viewing
- Smooth animations for photo loading and transitions
- Mobile-first approach for easy photo sharing
- Accessibility features for all users
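For reference, here is a minimal sketch of the kind of photo card component Figma Make might generate from the prompt above. The component name, props, and markup are illustrative assumptions, not Figma Make’s actual output:

```tsx
// Hypothetical sketch of a photo card component like the one Figma Make
// might generate; names, props, and markup are illustrative assumptions.
import { useState } from "react";

interface PhotoCardProps {
  src: string;          // thumbnail URL
  uploadedAt: string;   // ISO date shown in the hover overlay
  selected: boolean;    // batch-selection checkbox state
  onToggle: () => void; // flips selection
  onOpen: () => void;   // opens the lightbox view
}

export function PhotoCard({ src, uploadedAt, selected, onToggle, onOpen }: PhotoCardProps) {
  const [hovered, setHovered] = useState(false);
  return (
    <div
      onMouseEnter={() => setHovered(true)}
      onMouseLeave={() => setHovered(false)}
      onClick={onOpen}
    >
      <img src={src} alt="" loading="lazy" />
      {hovered && <span>{new Date(uploadedAt).toLocaleDateString()}</span>}
      <input
        type="checkbox"
        checked={selected}
        onClick={(e) => e.stopPropagation()} // don't open the lightbox when selecting
        onChange={onToggle}
      />
    </div>
  );
}
```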
Step 1.2: Refine the Design
After Figma Make generates your initial design, continue the conversation in the chat interface to refine as needed. You might want to add features like photo albums, sharing capabilities, or enhanced mobile photo viewing. Make sure all interactive elements are clearly defined and the data flow between components is logical.
Part 2: Frontend Deployment
Once you’re happy with your design, click the “Publish” button in the top right corner of Figma Make. Your photo app will be deployed and live at the provided URL with mock data and placeholder functionality.
Part 3: API Specification Generation
In Figma Make’s chat interface, ask it to create an OpenAPI spec YAML file that includes all the API endpoints required to make this app function. Once generated, you can find the YAML file in the “Code” tab at the top middle of the screen; look for something like photo-app-api-spec.yml. Copy the entire contents of that file for use in the next step.
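As a sanity check before moving on, the spec should at minimum cover photo upload, listing, and natural language search. Here is a rough sketch of that surface in TypeScript; the paths and response shapes are assumptions for illustration, and your generated spec may differ:

```ts
// Rough sketch of the API surface the generated spec should cover.
// Paths and response shapes below are assumptions, not Figma Make output.

interface Photo {
  id: string;
  url: string;
  uploadedAt: string; // ISO 8601 timestamp
}

interface SearchResult {
  photo: Photo;
  score: number;    // relevance score for the natural language query
  snippet?: string; // matched-content highlight, if the backend returns one
}

// POST   /photos        multipart upload        -> Photo
// GET    /photos        paginated photo list    -> Photo[]
// GET    /search?q=...  natural language query  -> SearchResult[]
// DELETE /photos/{id}   remove a photo          -> 204 No Content
```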
Part 4: Backend Implementation with Raindrop
Now we’ll use your AI coding assistant with Raindrop MCP to build the backend with SmartBuckets for photo storage and search.
Raindrop automatically continues through the development workflow, only pausing when it needs user input for approvals like the Product Requirements Document (PRD) review and code structure confirmation.
Step 4.1: Start the Raindrop Workflow
Open your AI coding assistant (Claude Code or Gemini CLI) and use the /new-raindrop-app command to start a new Raindrop app development cycle. When the AI coding assistant asks what you want to build, paste this prompt:
I want to build an AI-powered photo application backend using Raindrop. Here's my OpenAPI specification:
[Paste your photo-app-api-spec.yml content here]
The key requirements are:
- Store photos in [SmartBuckets](/concepts/smartbuckets) for content storage
- Enable natural language search of photos using [SmartBucket](/concepts/smartbuckets) search capabilities
- Users should be able to ask questions like:
  * "Find photos from Christmas 2023"
  * "Show me pictures with the kids at the beach"
  * "Find group photos from birthday parties"
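To make these requirements concrete, here is a hypothetical sketch of what the upload and search handlers could look like. The `PHOTOS` binding and its `put`/`search` methods are illustrative assumptions, not Raindrop’s actual SmartBucket SDK surface; the real implementation comes out of the PRD and code steps below.

```ts
// Hypothetical sketch only: the PHOTOS binding and its put/search methods
// are illustrative assumptions, not Raindrop's actual SmartBucket SDK.
// The PRD and code steps below produce the real implementation.

interface Env {
  PHOTOS: {
    put(key: string, value: ArrayBuffer): Promise<void>;
    search(query: string): Promise<unknown[]>;
  };
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);

    // Upload: store the raw image; SmartBuckets index content so it
    // becomes searchable with natural language queries.
    if (request.method === "POST" && url.pathname === "/photos") {
      const form = await request.formData();
      const file = form.get("photo") as File;
      const id = crypto.randomUUID();
      await env.PHOTOS.put(id, await file.arrayBuffer());
      return Response.json({ id });
    }

    // Search: pass the user's query straight through, e.g. "kids at the beach".
    if (request.method === "GET" && url.pathname === "/search") {
      const q = url.searchParams.get("q") ?? "";
      const results = await env.PHOTOS.search(q);
      return Response.json(results);
    }

    return new Response("Not found", { status: 404 });
  },
};
```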
Step 4.2: PRD Development Process
Your AI coding assistant will analyze your API spec and ask clarifying questions about the photo app’s business logic. Answer them as thoroughly as you can; the more information you provide, the better the PRD will be.
The PRD that Raindrop creates will be detailed and contain specifications that the coding assistant needs to implement your app correctly. It’s important that you carefully review the business logic before accepting it. Once you’ve reviewed the PRD, simply say “Approved” and your AI coding assistant will continue with the code step in the Raindrop development workflow. For details on the complete workflow, see the Claude Code + Raindrop MCP guide.
Step 4.3: Code Structure Review
In the code step, your AI coding assistant will create all the required files and add detailed comments in each file explaining what needs to be implemented. These comments guide the sub-agents that will build the actual code to ensure they implement the correct functionality.
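As an illustration of this pattern (not the assistant’s literal output), a generated stub with its guiding comment might look something like this:

```ts
// Hypothetical example of a comment-guided stub; the real files and
// comments are produced by your AI coding assistant, not copied from here.

/**
 * IMPLEMENT: natural language photo search
 * - Accept the user's query via the `q` query parameter
 * - Run a semantic search against the photos SmartBucket
 * - Return matches ordered by relevance score
 */
export async function handleSearch(request: Request): Promise<Response> {
  throw new Error("Not implemented"); // filled in during the build step
}
```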
The AI coding assistant will ask for your approval before proceeding. As the user, you should review all the generated files and verify that the business logic documented in the comments is correct. Once you’re satisfied with the code structure and comments, approve the next step where the AI coding assistant will build, deploy, test, improve, and finally deliver your API.
It might take a few iterations for your coding assistant and the Raindrop MCP to resolve all the bugs and reach a working backend. This is completely normal.
Part 5: Integration and Testing
Now we’ll connect your Figma Make frontend to the deployed backend.
Step 5.1: Get Updated API Specification and Connect Frontend
First, ask your AI coding assistant to generate an updated OpenAPI specification:
In your AI coding assistant (Claude Code or Gemini CLI):
Please generate an updated OpenAPI specification that reflects the actual API endpoints that were built and deployed. This will ensure I have the correct API documentation for frontend integration.
Copy the updated OpenAPI specification, then switch back to Figma Make’s chat interface to connect your frontend:
In Figma Make’s chat interface:
Now I need to update my deployed frontend to connect to the real backend.
Here's my deployed backend URL: [your-raindrop-backend-url]
Here's the updated OpenAPI spec: [paste the updated spec]
Please help me:
1. Update the deployed frontend to replace all mock data and placeholder API calls with real endpoints
2. Add proper error handling and loading states for uploads and searches
3. Set up authentication flow for user access
4. Test photo upload and natural language search end-to-end
Make sure to preserve all the UI functionality while connecting to the live SmartBucket-powered backend.
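Once Figma Make wires things up, the client code should boil down to straightforward fetch calls. As a reference, here is a minimal sketch assuming the backend exposes POST /photos and GET /search as in the earlier sketches; the base URL and endpoint paths are placeholders for your actual deployment:

```ts
// Minimal sketch of the frontend API client; the base URL and endpoint
// paths are placeholder assumptions for your actual deployed backend.
const API_BASE = "https://your-raindrop-backend-url"; // replace with your URL

export async function uploadPhoto(file: File): Promise<{ id: string }> {
  const form = new FormData();
  form.append("photo", file);
  const res = await fetch(`${API_BASE}/photos`, { method: "POST", body: form });
  if (!res.ok) throw new Error(`Upload failed: ${res.status}`);
  return res.json();
}

export async function searchPhotos(query: string): Promise<unknown[]> {
  const res = await fetch(`${API_BASE}/search?q=${encodeURIComponent(query)}`);
  if (!res.ok) throw new Error(`Search failed: ${res.status}`);
  return res.json();
}
```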
Conclusion
You’ve now built a complete AI-powered photo application using visual design tools and automated backend generation. Your app allows users to upload photos and search them using natural language queries powered by SmartBuckets.
Test your application by uploading photos and trying searches like “Show me beach photos” or “Find pictures with grandparents” to see how the semantic search understands content rather than just filenames.
This workflow demonstrates how modern AI development tools can take you from visual design to deployed application without traditional coding workflows.