Django Quickstart
Learn how to integrate Rabata.io object storage with your Django applications using django-storages.
Introduction
This guide will help you integrate Rabata.io with your Django application using django-storages, a collection of custom storage backends for Django.
Since Rabata.io is S3-compatible, we’ll use the S3 storage backend from django-storages to seamlessly store and serve static and media files.
Prerequisites
- A Django project (Django 3.2 or newer)
- A Rabata.io account with access keys
- A bucket created in your Rabata.io account
Installation
First, install the required packages:
$ pip install django-storages boto3
Add the storages app to your INSTALLED_APPS in your Django settings:
# settings.py
INSTALLED_APPS = [
    # ... other apps
    'storages',
]
Configuration
Configure your Django project to use Rabata.io as the storage backend by adding the following to your settings.py file:
# settings.py
# S3 Storage Configuration
AWS_ACCESS_KEY_ID = 'YOUR_RABATA_ACCESS_KEY'
AWS_SECRET_ACCESS_KEY = 'YOUR_RABATA_SECRET_KEY'
AWS_STORAGE_BUCKET_NAME = 'your-bucket-name'
AWS_S3_ENDPOINT_URL = 'https://s3.eu-west-1.rabata.io'
AWS_S3_REGION_NAME = 'eu-west-1' # Default region
AWS_S3_CUSTOM_DOMAIN = f'{AWS_STORAGE_BUCKET_NAME}.s3.rcs.rabata.io'
AWS_S3_OBJECT_PARAMETERS = {
    'CacheControl': 'max-age=86400',  # cache objects for one day
}
AWS_DEFAULT_ACL = 'public-read' # Adjust as needed for your security requirements
AWS_S3_SIGNATURE_VERSION = 's3v4'
AWS_S3_FILE_OVERWRITE = False
AWS_S3_VERIFY = True
Environment Variables
For better security, store your credentials as environment variables:
# .env file
RABATA_ACCESS_KEY=YOUR_RABATA_ACCESS_KEY
RABATA_SECRET_KEY=YOUR_RABATA_SECRET_KEY
RABATA_BUCKET_NAME=your-bucket-name
Then in your settings.py:
# settings.py
import os
from pathlib import Path
# Build paths inside the project like this: BASE_DIR / 'subdir'.
BASE_DIR = Path(__file__).resolve().parent.parent
# S3 Storage Configuration
AWS_ACCESS_KEY_ID = os.environ.get('RABATA_ACCESS_KEY')
AWS_SECRET_ACCESS_KEY = os.environ.get('RABATA_SECRET_KEY')
AWS_STORAGE_BUCKET_NAME = os.environ.get('RABATA_BUCKET_NAME')
AWS_S3_ENDPOINT_URL = 'https://s3.eu-west-1.rabata.io'
# ... rest of the configuration
Security Note: Never commit your credentials to version control. Load them from environment variables or a dedicated secrets manager.
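To get the .env file into os.environ during development, one lightweight option is a small loader called at the top of settings.py. The load_env_file helper below is a hypothetical sketch for illustration; in practice the python-dotenv package does the same job more robustly:

```python
import os

def load_env_file(path='.env'):
    """Minimal .env loader: set KEY=VALUE pairs into os.environ.

    Hypothetical helper for illustration; prefer python-dotenv in practice.
    Existing environment variables are not overwritten.
    """
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks, comments, and malformed lines
            if not line or line.startswith('#') or '=' not in line:
                continue
            key, _, value = line.partition('=')
            os.environ.setdefault(key.strip(), value.strip())
```

Calling load_env_file() before the AWS_* settings are defined makes os.environ.get('RABATA_ACCESS_KEY') and friends resolve as expected.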
Static Files Configuration
To store your static files (CSS, JavaScript, images) on Rabata.io:
# settings.py
# Static files (CSS, JavaScript, Images)
STATIC_URL = '/static/'
STATIC_ROOT = BASE_DIR / 'staticfiles'
STATICFILES_DIRS = [BASE_DIR / 'static']
# Use S3 for static files
STATICFILES_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
With this configuration, running python manage.py collectstatic uploads your static files to your Rabata.io bucket. (On Django 4.2+, the STORAGES dict setting supersedes STATICFILES_STORAGE and DEFAULT_FILE_STORAGE; the legacy settings shown in this guide still work on 3.2–4.x but are removed in Django 5.1.)
Media Files Configuration
To store user-uploaded media files on Rabata.io, create a custom storage class:
# storage.py
from storages.backends.s3boto3 import S3Boto3Storage
class MediaStorage(S3Boto3Storage):
    location = 'media'  # Store files under 'media/' prefix
    file_overwrite = False  # Don't overwrite files with the same name
Then update your settings:
# settings.py
MEDIA_URL = '/media/'
MEDIA_ROOT = BASE_DIR / 'media'
# Use S3 for media files
DEFAULT_FILE_STORAGE = 'your_app.storage.MediaStorage'
Now, when users upload files through your Django forms or admin interface, the files will be stored on Rabata.io.
Usage Examples
Model with File Field
# models.py
from django.db import models
class Document(models.Model):
    name = models.CharField(max_length=100)
    document_file = models.FileField(upload_to='documents/')
    uploaded_at = models.DateTimeField(auto_now_add=True)

    def __str__(self):
        return self.name
Form for File Upload
# forms.py
from django import forms
from .models import Document
class DocumentForm(forms.ModelForm):
    class Meta:
        model = Document
        fields = ('name', 'document_file')
View for File Upload
# views.py
from django.shortcuts import render, redirect
from django.views import View
from .forms import DocumentForm
class DocumentUploadView(View):
    def get(self, request):
        form = DocumentForm()
        return render(request, 'upload.html', {'form': form})

    def post(self, request):
        form = DocumentForm(request.POST, request.FILES)
        if form.is_valid():
            document = form.save()
            return redirect('document_detail', pk=document.pk)
        return render(request, 'upload.html', {'form': form})
Template for File Upload
<!-- upload.html -->
{% extends 'base.html' %}
{% block content %}
<h2>Upload Document</h2>
<form method="post" enctype="multipart/form-data">
{% csrf_token %}
{{ form.as_p }}
<button type="submit">Upload</button>
</form>
{% endblock %}
Advanced Usage
Generating Presigned URLs
For private files that require temporary access:
# utils.py
import boto3
from django.conf import settings
def generate_presigned_url(object_key, expiration=3600):
    """Generate a presigned URL for an object in S3."""
    s3_client = boto3.client(
        's3',
        endpoint_url=settings.AWS_S3_ENDPOINT_URL,
        aws_access_key_id=settings.AWS_ACCESS_KEY_ID,
        aws_secret_access_key=settings.AWS_SECRET_ACCESS_KEY,
        region_name=settings.AWS_S3_REGION_NAME
    )
    url = s3_client.generate_presigned_url(
        'get_object',
        Params={
            'Bucket': settings.AWS_STORAGE_BUCKET_NAME,
            'Key': object_key
        },
        ExpiresIn=expiration
    )
    return url
Usage in a view:
# views.py
from django.http import HttpResponseRedirect
from .utils import generate_presigned_url
def download_private_file(request, document_id):
    document = Document.objects.get(id=document_id)
    # Get the object key from the file field
    object_key = document.document_file.name
    # Generate a presigned URL
    url = generate_presigned_url(object_key)
    # Redirect to the presigned URL
    return HttpResponseRedirect(url)
Custom File Naming
To customize how files are named when uploaded:
# models.py
import uuid

from django.db import models

def document_file_path(instance, filename):
    """Generate a unique file path for the uploaded document."""
    # Get the file extension
    ext = filename.split('.')[-1]
    # Generate a unique filename with UUID
    filename = f"{uuid.uuid4()}.{ext}"
    # Return the complete path
    return f"documents/{filename}"

class Document(models.Model):
    name = models.CharField(max_length=100)
    document_file = models.FileField(upload_to=document_file_path)
    uploaded_at = models.DateTimeField(auto_now_add=True)
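Because the upload_to callable only manipulates strings, it can be sanity-checked outside Django. A standalone sketch of the same logic:

```python
import uuid

def document_file_path(instance, filename):
    """Build a collision-free key under the documents/ prefix."""
    ext = filename.split('.')[-1]
    return f"documents/{uuid.uuid4()}.{ext}"

# The instance argument is unused by this helper, so None works for a quick check
path = document_file_path(None, 'report.pdf')
print(path)  # e.g. documents/<random-uuid>.pdf
```

Two uploads of the same filename now produce two distinct object keys, which pairs well with AWS_S3_FILE_OVERWRITE = False.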
CORS Configuration
If you’re using direct uploads or accessing files from a different domain, you’ll need to configure CORS on your Rabata.io bucket:
[
    {
        "AllowedHeaders": ["*"],
        "AllowedMethods": ["GET", "PUT", "POST", "DELETE"],
        "AllowedOrigins": ["https://yourdomain.com"],
        "ExposeHeaders": ["ETag", "Content-Length", "Content-Type"],
        "MaxAgeSeconds": 3600
    }
]
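Because Rabata.io speaks the S3 API, one common way to apply a CORS document to the bucket is the standard AWS CLI, assuming it is installed and configured with your Rabata keys. Note that the s3api call expects the rules wrapped in a top-level "CORSRules" key, so save the rules above as cors.json in the form { "CORSRules": [ ... ] }; the bucket name below is a placeholder:

```shell
aws s3api put-bucket-cors \
  --bucket your-bucket-name \
  --cors-configuration file://cors.json \
  --endpoint-url https://s3.eu-west-1.rabata.io
```

You can verify the result afterwards with aws s3api get-bucket-cors using the same --bucket and --endpoint-url flags.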
Troubleshooting
Common Issues
- Access Denied Errors: Check your bucket permissions and ensure your access keys have the correct permissions.
- Files Not Showing Up: Verify your CORS configuration and ensure your bucket policy allows public access if needed.
- Static Files Not Loading: Run python manage.py collectstatic --noinput to upload static files to Rabata.io.
- URL Issues: Ensure your AWS_S3_CUSTOM_DOMAIN is correctly configured.
Debugging
To debug S3 operations, enable boto3 logging:
# settings.py
import logging

# A handler must be configured, otherwise the debug messages are silently dropped
logging.basicConfig(level=logging.INFO)

# Set up logging for boto3 (this output is verbose; disable in production)
logging.getLogger('boto3').setLevel(logging.DEBUG)
logging.getLogger('botocore').setLevel(logging.DEBUG)
logging.getLogger('s3transfer').setLevel(logging.DEBUG)
Production Considerations
Performance Optimization
- Use a CDN in front of Rabata.io for faster content delivery
- Enable file compression for text-based static files
- Implement proper caching headers
Security Best Practices
- Use environment variables for credentials
- Set appropriate bucket policies
- Use private ACLs for sensitive files and generate presigned URLs when needed
- Validate file types and sizes before uploading
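The "validate file types and sizes" point can be sketched as a plain function. The allow-list and size cap below are hypothetical values; in a real project you would raise django.forms.ValidationError instead of ValueError and attach the check via FileField(validators=[...]):

```python
import os

ALLOWED_EXTENSIONS = {'.pdf', '.png', '.jpg'}  # hypothetical allow-list
MAX_UPLOAD_BYTES = 5 * 1024 * 1024  # hypothetical 5 MiB cap

def validate_upload(filename, size_bytes):
    """Raise ValueError for disallowed extensions or oversized files."""
    ext = os.path.splitext(filename)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        raise ValueError(f"file type {ext!r} is not allowed")
    if size_bytes > MAX_UPLOAD_BYTES:
        raise ValueError("file exceeds the upload size limit")
```

Running this check server-side before the object is written keeps unwanted files out of the bucket even if client-side checks are bypassed.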
Backup Strategy
Implement a backup strategy for your media files:
# Example script to backup media files
import boto3
import datetime
def backup_bucket():
    s3_client = boto3.client(
        's3',
        endpoint_url='https://s3.eu-west-1.rabata.io',
        aws_access_key_id='YOUR_RABATA_ACCESS_KEY',
        aws_secret_access_key='YOUR_RABATA_SECRET_KEY',
        region_name='eu-west-1'
    )
    # Source and destination buckets
    source_bucket = 'your-production-bucket'
    dest_bucket = 'your-backup-bucket'
    # Get all objects from the source bucket
    paginator = s3_client.get_paginator('list_objects_v2')
    pages = paginator.paginate(Bucket=source_bucket)
    # Copy each object to the destination bucket with a timestamp prefix
    timestamp = datetime.datetime.now().strftime('%Y-%m-%d')
    for page in pages:
        for obj in page.get('Contents', []):
            copy_source = {'Bucket': source_bucket, 'Key': obj['Key']}
            dest_key = f"{timestamp}/{obj['Key']}"
            s3_client.copy_object(
                CopySource=copy_source,
                Bucket=dest_bucket,
                Key=dest_key
            )
            print(f"Copied {obj['Key']} to {dest_key}")