Bucket Storage
Overview
Object storage provides scalable file and data storage for your Raindrop applications. Buckets store objects with unique keys, supporting everything from small configuration files to large media assets. Each object can include custom metadata and HTTP headers for complete control over how content is served.
Raindrop buckets handle all data types - text files, JSON documents, binary data, images, videos, and streams. The storage interface provides familiar operations like put, get, head, delete, and list with built-in error handling and type safety. Objects are automatically versioned and include checksums for data integrity.
The bucket interface supports range requests for partial downloads, custom metadata for organizing content, and proper HTTP semantics for web integration. Use buckets whenever you need persistent storage for application data, user-generated content, or binary assets.
Prerequisites
- Basic understanding of object storage concepts (keys, values, metadata)
- Raindrop framework installed in your project
- Familiarity with TypeScript and async/await patterns
- Understanding of HTTP headers and content types for web integration
Configuration
Add a bucket to your Raindrop project by defining it in your application manifest:
application "demo-app" {
  bucket "file-storage" {}
}
// After running `raindrop build generate`
export interface Env {
  FILE_STORAGE: Bucket;
}
The bucket name in your manifest (file-storage) transforms to an uppercase environment variable (FILE_STORAGE) that you access in your services and actors.
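For example, adding a second bucket follows the same naming convention; the user-uploads bucket below is hypothetical, assuming hyphens become underscores just as file-storage becomes FILE_STORAGE:

application "demo-app" {
  bucket "file-storage" {}
  bucket "user-uploads" {}   // hypothetical second bucket
}

// After running `raindrop build generate`
export interface Env {
  FILE_STORAGE: Bucket;
  USER_UPLOADS: Bucket;      // hyphens become underscores, name is uppercased
}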
Access
Use bucket storage in your services through the environment interface. All bucket operations are asynchronous and return Promises:
export default class extends Service<Env> {
  async fetch(request: Request): Promise<Response> {
    // Store a simple text file
    await this.env.FILE_STORAGE.put("welcome.txt", "Hello, World!");

    // Retrieve the file
    const file = await this.env.FILE_STORAGE.get("welcome.txt");
    if (!file) {
      return new Response("File not found", { status: 404 });
    }

    const content = await file.text();
    return new Response(content);
  }
}
Core Concepts
Object Keys and Organization
Object keys serve as unique identifiers within a bucket. Keys can include forward slashes to create logical hierarchies, though buckets actually store objects in a flat namespace:
// Logical file organization using key prefixes
await env.FILE_STORAGE.put("users/123/profile.jpg", imageData);
await env.FILE_STORAGE.put("users/123/documents/resume.pdf", pdfData);
await env.FILE_STORAGE.put("users/456/profile.jpg", otherImageData);

// List all objects for user 123
const userFiles = await env.FILE_STORAGE.list({ prefix: "users/123/" });
Object Metadata and Versioning
Every stored object includes automatic metadata like size, upload timestamp, checksums, and a unique version identifier:
const result = await env.FILE_STORAGE.put("data.json", jsonData);
console.log(result.key);      // "data.json"
console.log(result.version);  // Unique version ID
console.log(result.size);     // Object size in bytes
console.log(result.uploaded); // Upload timestamp
console.log(result.etag);     // Entity tag for caching
Content Types and HTTP Integration
Objects support HTTP metadata for proper web integration, including content types, caching headers, and content disposition:
await env.FILE_STORAGE.put("report.pdf", pdfData, {
  httpMetadata: {
    contentType: "application/pdf",
    contentDisposition: "attachment; filename=report.pdf",
    cacheControl: "public, max-age=3600"
  }
});
Core Interfaces
The Raindrop framework provides TypeScript interfaces that define the bucket storage API and data structures:
Bucket Interface
The main interface for bucket operations:
interface Bucket {
  /** Retrieves object metadata without downloading the object */
  head(key: string): Promise<BucketObject | null>;

  /** Retrieves an object */
  get(key: string, options?: BucketGetOptions): Promise<BucketObjectBody | null>;

  /** Stores an object */
  put(
    key: string,
    value: ReadableStream | ArrayBuffer | ArrayBufferView | string | null | Blob,
    options?: BucketPutOptions,
  ): Promise<BucketObject>;

  /** Deletes one or more objects */
  delete(keys: string | string[]): Promise<void>;

  /** Lists objects in the bucket */
  list(options?: BucketListOptions): Promise<BucketObjects>;
}
BucketObject Interface
Metadata for stored objects:
interface BucketObject {
  /** Object key/identifier */
  readonly key: string;
  /** Unique version identifier */
  readonly version: string;
  /** Object size in bytes */
  readonly size: number;
  /** Entity tag for caching */
  readonly etag: string;
  /** HTTP-formatted entity tag */
  readonly httpEtag: string;
  /** Data integrity checksums */
  readonly checksums: BucketChecksums;
  /** Upload timestamp */
  readonly uploaded: Date;
  /** HTTP headers for web serving */
  readonly httpMetadata?: BucketHTTPMetadata;
  /** Custom key-value metadata */
  readonly customMetadata?: Record<string, string>;
  /** Range information for partial requests */
  readonly range?: BucketRange;
  /** Storage class for cost optimization */
  readonly storageClass: string;
  /** Helper to write HTTP metadata to response headers */
  writeHttpMetadata(headers: Headers): void;
}
BucketObjectBody Interface
Object data with content access methods:
interface BucketObjectBody extends BucketObject {
  /** Raw object data stream */
  get body(): ReadableStream;
  /** Whether the body has been consumed */
  get bodyUsed(): boolean;
  /** Convert to ArrayBuffer */
  arrayBuffer(): Promise<ArrayBuffer>;
  /** Convert to string */
  text(): Promise<string>;
  /** Parse as JSON */
  json<T>(): Promise<T>;
  /** Convert to Blob */
  blob(): Promise<Blob>;
}
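To show how these members fit together, the sketch below streams a stored object straight into an HTTP response. The serveObject helper is hypothetical, but it only uses the body, httpEtag, and writeHttpMetadata members documented above:

// Hypothetical helper: stream a stored object as an HTTP response.
async function serveObject(env: Env, key: string): Promise<Response> {
  const object = await env.FILE_STORAGE.get(key);
  if (!object) {
    return new Response("Not found", { status: 404 });
  }

  // Copy stored contentType, cacheControl, etc. onto the response.
  const headers = new Headers();
  object.writeHttpMetadata(headers);
  headers.set("ETag", object.httpEtag);

  // object.body streams the content without buffering the whole file in memory.
  return new Response(object.body, { headers });
}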
Configuration Interfaces
Options for bucket operations:
interface BucketPutOptions {
  /** HTTP headers for web serving */
  httpMetadata?: BucketHTTPMetadata | Headers;
  /** Custom key-value metadata */
  customMetadata?: Record<string, string>;
  /** MD5 checksum for data integrity */
  md5?: ArrayBuffer | string;
  /** SHA-1 checksum */
  sha1?: ArrayBuffer | string;
  /** SHA-256 checksum */
  sha256?: ArrayBuffer | string;
  /** SHA-384 checksum */
  sha384?: ArrayBuffer | string;
  /** SHA-512 checksum */
  sha512?: ArrayBuffer | string;
  /** Storage class for cost optimization */
  storageClass?: string;
}

interface BucketGetOptions {
  /** Range specification for partial downloads */
  range?: BucketRange | Headers;
}

interface BucketListOptions {
  /** Maximum number of items to return */
  limit?: number;
  /** Filter results to keys that begin with this prefix */
  prefix?: string;
  /** Continuation token for paginated results */
  cursor?: string;
  /** Character to group common prefixes by */
  delimiter?: string;
  /** Return objects lexicographically after this key */
  startAfter?: string;
}
BucketObjects Interface
Results from list operations:
type BucketObjects = {
  /** Array of object metadata */
  objects: BucketObject[];
  /** Common prefixes when using delimiter */
  delimitedPrefixes: string[];
} & (
  | {
      /** Whether more results exist */
      truncated: true;
      /** Token for next page */
      cursor: string;
    }
  | {
      /** All results returned */
      truncated: false;
    }
);
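Because truncated is a discriminant, TypeScript only exposes cursor after you check it. A minimal sketch of how the union narrows:

const page = await env.FILE_STORAGE.list({ prefix: "logs/", limit: 100 });

if (page.truncated) {
  // cursor is only available on the truncated branch of the union
  const next = await env.FILE_STORAGE.list({
    prefix: "logs/",
    limit: 100,
    cursor: page.cursor
  });
  console.log(`Fetched ${next.objects.length} more objects`);
} else {
  console.log("All results returned in the first page");
}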
Storage Methods
Put Operation
Store objects in the bucket with support for various data types and custom metadata:
Data Type Support
// Store plain text
await env.FILE_STORAGE.put("notes.txt", "Meeting notes here...");

// Store JSON data
const userData = { id: 123, name: "Alice", active: true };
await env.FILE_STORAGE.put("user.json", JSON.stringify(userData));

// Store with content type
await env.FILE_STORAGE.put("config.json", jsonString, {
  httpMetadata: { contentType: "application/json" }
});

// Store binary data
const imageBuffer = new ArrayBuffer(2048);
await env.FILE_STORAGE.put("image.jpg", imageBuffer, {
  httpMetadata: { contentType: "image/jpeg" }
});

// Store from Uint8Array
const data = new Uint8Array([1, 2, 3, 4, 5]);
await env.FILE_STORAGE.put("binary.dat", data);

// Store Blob objects
const blob = new Blob(["content"], { type: "text/plain" });
await env.FILE_STORAGE.put("blob.txt", blob);

// Store from ReadableStream
const stream = new ReadableStream({
  start(controller) {
    controller.enqueue(new TextEncoder().encode("chunk 1"));
    controller.enqueue(new TextEncoder().encode("chunk 2"));
    controller.close();
  }
});

await env.FILE_STORAGE.put("streamed.txt", stream);
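Because put accepts a ReadableStream, an incoming request body can be forwarded into the bucket without buffering it first. A sketch of an upload handler; the uploads/ key prefix and handleUpload helper are illustrative, and it assumes the request actually carries a body:

// Hypothetical upload handler: stream a PUT request body into the bucket.
async function handleUpload(env: Env, request: Request): Promise<Response> {
  if (request.method !== "PUT" || !request.body) {
    return new Response("Expected a PUT request with a body", { status: 400 });
  }

  // Illustrative key derived from the URL path.
  const key = `uploads/${new URL(request.url).pathname.slice(1)}`;

  // Stream the upload straight into storage.
  const result = await env.FILE_STORAGE.put(key, request.body, {
    httpMetadata: {
      contentType: request.headers.get("Content-Type") ?? "application/octet-stream"
    }
  });

  return new Response(JSON.stringify({ key: result.key, size: result.size }), {
    headers: { "Content-Type": "application/json" }
  });
}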
Custom Metadata and Options
Add custom metadata, checksums, and HTTP headers to stored objects:
await env.FILE_STORAGE.put("report.pdf", pdfData, {
  // Custom key-value metadata
  customMetadata: {
    author: "Alice Johnson",
    department: "Engineering",
    version: "1.2"
  },

  // HTTP headers for web serving
  httpMetadata: {
    contentType: "application/pdf",
    contentDisposition: "inline; filename=quarterly-report.pdf",
    cacheControl: "private, max-age=86400",
    contentEncoding: "gzip"
  },

  // Data integrity checksums
  md5: "d41d8cd98f00b204e9800998ecf8427e",
  sha256: hashBuffer,

  // Storage class for cost optimization
  storageClass: "STANDARD"
});
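The hashBuffer value above has to be computed somewhere. One approach, assuming the Web Crypto API (crypto.subtle) is available in the runtime, is to hash the payload before uploading so the bucket can verify it on write:

// Compute a SHA-256 digest for integrity verification (assumes Web Crypto is available).
const payload = new TextEncoder().encode("important report contents");
const hashBuffer = await crypto.subtle.digest("SHA-256", payload);

await env.FILE_STORAGE.put("report.txt", payload, {
  sha256: hashBuffer,
  httpMetadata: { contentType: "text/plain" }
});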
Parameters:
- key (string): Unique object identifier
- value: Object data (string, ArrayBuffer, ArrayBufferView, Blob, ReadableStream, or null)
- options (optional): Configuration object with metadata and HTTP settings
Return Value:
- Returns BucketObject with metadata about the stored object
- Includes generated version, size, timestamps, and checksums
Get Operation
Retrieve objects from storage with automatic type conversion and range support:
Basic Retrieval
// Get object body with content methods
const file = await env.FILE_STORAGE.get("document.txt");
if (file) {
  // Each accessor consumes the body stream (see bodyUsed),
  // so use only one of these per get() call:
  const text = await file.text();             // Convert to string
  // const buffer = await file.arrayBuffer(); // Convert to ArrayBuffer
  // const blob = await file.blob();          // Convert to Blob
  // const data = await file.json<MyType>();  // Parse as JSON
}
Range Requests
Download specific portions of large files using range requests:
// Download first 1024 bytes
const chunk = await env.FILE_STORAGE.get("large-file.bin", {
  range: { offset: 0, length: 1024 }
});

// Download from offset to end
const tail = await env.FILE_STORAGE.get("log-file.txt", {
  range: { offset: 1000 }
});

// Download last 500 bytes
const suffix = await env.FILE_STORAGE.get("data.csv", {
  range: { suffix: 500 }
});

// Use standard HTTP Range header
const headers = new Headers();
headers.set('Range', 'bytes=0-1023');

const partial = await env.FILE_STORAGE.get("video.mp4", {
  range: headers
});
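When a ranged read backs an HTTP response, the served window can be reported to clients with a Content-Range header. A minimal sketch, assuming a fixed window size and that size reports the full object length even for ranged reads; the chunking scheme itself is illustrative:

// Serve the first 1 MiB of a large object as a 206 Partial Content response.
async function serveFirstChunk(env: Env): Promise<Response> {
  const CHUNK = 1024 * 1024;
  const object = await env.FILE_STORAGE.get("video.mp4", {
    range: { offset: 0, length: CHUNK }
  });
  if (!object) {
    return new Response("Not found", { status: 404 });
  }

  const responseHeaders = new Headers();
  object.writeHttpMetadata(responseHeaders);
  // Assumes size is the full object length, not the length of the range.
  const end = Math.min(CHUNK, object.size) - 1;
  responseHeaders.set("Content-Range", `bytes 0-${end}/${object.size}`);

  return new Response(object.body, { status: 206, headers: responseHeaders });
}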
Parameters:
- key (string): Object key to retrieve
- options.range (optional): Range specification for partial downloads
Return Value:
- Returns BucketObjectBody with content and metadata if the object exists
- Returns null if the object doesn't exist
- Includes a body stream and conversion methods (text(), json(), arrayBuffer(), blob())
Delete Operation
Remove one or multiple objects from the bucket:
Single Object Deletion
// Delete a single object
await env.FILE_STORAGE.delete("old-file.txt");

// Deletion is idempotent - no error if the file doesn't exist
await env.FILE_STORAGE.delete("non-existent.txt");
Bulk Deletion
// Delete multiple objects in one operation
await env.FILE_STORAGE.delete([
  "temp/file1.txt",
  "temp/file2.txt",
  "temp/file3.txt"
]);

// Delete all objects with a prefix
const objects = await env.FILE_STORAGE.list({ prefix: "temp/" });
const keysToDelete = objects.objects.map(obj => obj.key);
if (keysToDelete.length > 0) {
  await env.FILE_STORAGE.delete(keysToDelete);
}
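For prefixes containing many objects, a single list call may be truncated. The sketch below pages through the listing and deletes each batch; the deletePrefix helper and the 100-object page size are illustrative:

// Delete every object under a prefix, one page at a time.
async function deletePrefix(env: Env, prefix: string): Promise<number> {
  let deleted = 0;
  let cursor: string | undefined;

  do {
    const page = await env.FILE_STORAGE.list({ prefix, limit: 100, cursor });
    const keys = page.objects.map(obj => obj.key);
    if (keys.length > 0) {
      await env.FILE_STORAGE.delete(keys);
      deleted += keys.length;
    }
    cursor = page.truncated ? page.cursor : undefined;
  } while (cursor);

  return deleted;
}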
Parameters:
- keys: Single key (string) or array of keys (string[]) to delete
Return Value:
- Returns Promise<void>; the operation completes when all deletions finish
- Never throws errors for non-existent keys
Metadata Methods
Head Operation
Check if an object exists and retrieve its metadata without downloading the content. This is useful for existence checks and cache validation:
const metadata = await env.FILE_STORAGE.head("large-video.mp4");
if (metadata) {
  console.log(`File size: ${metadata.size} bytes`);
  console.log(`Last modified: ${metadata.uploaded}`);
  console.log(`Content type: ${metadata.httpMetadata?.contentType}`);
} else {
  console.log("File does not exist");
}
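For cache validation, the stored entity tag can be compared against a client's If-None-Match header before downloading anything. A sketch assuming exact ETag comparison is sufficient; checkFreshness is a hypothetical helper:

// Return a 304 if the client already has the current version, otherwise null.
async function checkFreshness(env: Env, request: Request, key: string): Promise<Response | null> {
  const metadata = await env.FILE_STORAGE.head(key);
  if (!metadata) {
    return new Response("Not found", { status: 404 });
  }

  // If the client's cached ETag matches, skip the download entirely.
  if (request.headers.get("If-None-Match") === metadata.httpEtag) {
    return new Response(null, { status: 304 });
  }

  return null; // caller should fall through to a full get()
}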
Parameters:
- key (string): Object key to check
Return Value:
- Returns BucketObject with metadata if the object exists
- Returns null if the object doesn't exist
- Includes version, size, timestamps, checksums, and HTTP metadata
List Operation
Browse and search objects in the bucket with filtering and pagination support:
Basic Listing
// List all objects
const allObjects = await env.FILE_STORAGE.list();
console.log(`Found ${allObjects.objects.length} objects`);

allObjects.objects.forEach(obj => {
  console.log(`${obj.key} - ${obj.size} bytes - ${obj.uploaded}`);
});
Prefix Filtering
// List objects by prefix (simulates directory structure)
const userFiles = await env.FILE_STORAGE.list({ prefix: "users/123/" });
const logFiles = await env.FILE_STORAGE.list({ prefix: "logs/2024/" });
const images = await env.FILE_STORAGE.list({ prefix: "images/" });
Pagination
Handle large buckets with pagination to avoid memory issues:
let cursor: string | undefined;
let allObjects: BucketObject[] = [];

do {
  const batch = await env.FILE_STORAGE.list({
    limit: 100,
    cursor: cursor
  });

  allObjects.push(...batch.objects);
  cursor = batch.truncated ? batch.cursor : undefined;
} while (cursor);

console.log(`Total objects: ${allObjects.length}`);
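The same loop can be wrapped in an async generator so callers never touch cursors directly. A convenience sketch; listAllObjects is a hypothetical helper, not part of the bucket API:

// Yield every object matching a prefix, paging transparently.
async function* listAllObjects(env: Env, prefix?: string): AsyncGenerator<BucketObject> {
  let cursor: string | undefined;
  do {
    const page = await env.FILE_STORAGE.list({ prefix, limit: 100, cursor });
    for (const obj of page.objects) {
      yield obj;
    }
    cursor = page.truncated ? page.cursor : undefined;
  } while (cursor);
}

// Usage
for await (const obj of listAllObjects(env, "logs/")) {
  console.log(obj.key);
}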
Advanced Filtering
// Combine prefix, limit, and other options
const recentLogs = await env.FILE_STORAGE.list({
  prefix: "logs/",
  limit: 50,
  delimiter: "/",                // Group by common prefixes
  startAfter: "logs/2024-01-01"  // Start after specific key
});

// Handle delimited prefixes (simulated directories)
console.log("Log directories:", recentLogs.delimitedPrefixes);
console.log("Log files:", recentLogs.objects);
Parameters:
- options.limit (number): Maximum objects to return (default varies by implementation)
- options.prefix (string): Filter to keys starting with this prefix
- options.cursor (string): Pagination token from a previous call
- options.delimiter (string): Character for grouping common prefixes
- options.startAfter (string): Return objects lexicographically after this key
Return Value:
- objects: Array of BucketObject items with metadata
- delimitedPrefixes: Array of common prefixes when using a delimiter
- truncated: Boolean indicating whether more results exist
- cursor: Token for the next page (present when truncated is true)
Error Handling Patterns
Lookups (get and head) return null for non-existent objects rather than throwing errors, and delete is a no-op for missing keys. Handle these patterns consistently:
// Check existence before operations
const file = await env.FILE_STORAGE.get("document.pdf");
if (!file) {
  throw new Error("Document not found");
}

// Handle range request errors
try {
  const chunk = await env.FILE_STORAGE.get("large-file.bin", {
    range: { offset: 0, length: 1024 }
  });
  if (!chunk) {
    return new Response("File not found", { status: 404 });
  }
} catch (error) {
  console.error("Range request failed:", error);
  return new Response("Range not satisfiable", { status: 416 });
}

// Validate data before storage
const validateAndStore = async (
  key: string,
  data: string | ArrayBuffer | ArrayBufferView | Blob | ReadableStream | null
) => {
  if (!data) {
    throw new Error("Cannot store null or undefined data");
  }

  const result = await env.FILE_STORAGE.put(key, data);
  return result;
};