Cloudflare R2 and Laravel
When Cloudflare announced R2, their new object storage service, we were beyond excited to start using it and saving on egress fees. But when we tried to use R2 as a drop-in replacement for S3 in one of our Laravel applications, we discovered the hard way that you cannot simply drop your R2 credentials into the standard S3 environment variables.
There are some key differences between the two services, but the one that matters here is R2's lack of per-object ACLs. There are two options to solve this problem:
1. Use our Composer package
Get started by running composer require reusser/laravel-cloudflare-r2, then set up a new disk in config/filesystems.php:
'r2' => [
    'driver' => 'r2',
    'key' => env('R2_ACCESS_KEY_ID'),
    'secret' => env('R2_SECRET_ACCESS_KEY'),
    'region' => env('R2_DEFAULT_REGION', 'us-east-1'),
    'bucket' => env('R2_BUCKET'),
    'url' => env('R2_URL'),
    'endpoint' => env('R2_ENDPOINT', false),
    'use_path_style_endpoint' => env('R2_USE_PATH_STYLE_ENDPOINT', false),
    'throw' => false,
],
Once that is in place, update your .env file with the appropriate variables, and don't forget to set your default disk: FILESYSTEM_DISK=r2
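For reference, a sketch of what those .env entries might look like. The account ID, credentials, bucket name, and public URL below are all placeholders you'd swap for your own; R2 endpoints follow the pattern of your Cloudflare account ID in front of r2.cloudflarestorage.com.

```shell
# Placeholder values for illustration only -- substitute your own
# Cloudflare account ID, R2 API token credentials, and bucket name.
FILESYSTEM_DISK=r2

R2_ACCESS_KEY_ID=your-access-key-id
R2_SECRET_ACCESS_KEY=your-secret-access-key
R2_BUCKET=my-bucket
R2_ENDPOINT=https://your-account-id.r2.cloudflarestorage.com

# Optional: a public base URL for your bucket, e.g. via a custom domain.
R2_URL=https://files.example.com
```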
2. Change your S3 configuration
You can reuse the existing s3 disk entry in config/filesystems.php; the trick is the lesser-known retain_visibility option, which you can set to false so Laravel does not attempt to set ACLs when copying objects:
's3' => [
    'driver' => 's3',
    'key' => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_DEFAULT_REGION'),
    'bucket' => env('AWS_BUCKET'),
    'url' => env('AWS_URL'),
    'endpoint' => env('AWS_ENDPOINT'),
    'use_path_style_endpoint' => env('AWS_USE_PATH_STYLE_ENDPOINT', false),
    'retain_visibility' => false,
    'throw' => true,
],
Which should you use? That's entirely up to you, though we wouldn't mind the bump in download counts on our Composer package.