Zaher Ghaibeh
PHP Backend developer
I have experience with a few PHP frameworks, such as Laravel, Lumen, and Slim (the last two are used for building microservices/API services).
Using Minio server to simulate S3 locally
Published on Monday, November 13, 2017

The other day, I was building a small development environment to test Laravel and S3, but as usual, I didn't want to use production credentials or hit S3 directly. After a quick search I found Minio:

Minio is an open source object storage server with Amazon S3 compatible API. 
Build cloud-native applications portable across all major public and private clouds.

I was so happy, as I'd be able to use S3 locally without worrying about hitting anything outside my local network. I pulled the Docker image, and then I realized that I'd have to specify the command to run and a couple of environment variables (see the example below). That's okay for local development, but it won't work with Bitbucket Pipelines, so my tests won't run there.
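
To illustrate what I mean, this is roughly what running the stock image looks like locally (a sketch; the credentials are just made-up values):

# Run the stock minio/minio image: both the credentials and the
# "server /data" command have to be passed in on every run.
docker run -p 9000:9000 \
    -e "MINIO_ACCESS_KEY=minio" \
    -e "MINIO_SECRET_KEY=miniostorage" \
    minio/minio server /data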

You can read this tweet and the thread of replies under it.

So what I did was build a new Docker image that has everything up and running. The content of the Dockerfile was:

FROM minio/minio

ENV MINIO_ACCESS_KEY minio
ENV MINIO_SECRET_KEY miniostorage

EXPOSE 9000

CMD "server","/data"

which is simple and nice, but again not that helpful, as I need to have two default buckets created automatically for me. So I altered the file to create two new directories under the /data directory (Minio treats each directory under its data directory as a bucket).

FROM minio/minio

ENV MINIO_ACCESS_KEY minio
ENV MINIO_SECRET_KEY miniostorage

RUN mkdir -p /data/test && mkdir -p /data/develop

EXPOSE 9000

CMD "server","/data"

And that was nice: now whenever we run the image it will always have those two buckets. But what about the policy? How can I add a default one? After some searching I found out that each bucket has its policy saved under the .minio.sys/buckets directory.
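
So for my two buckets, the target layout inside the image's data directory looks like this (a sketch based on the paths used in the final Dockerfile below):

/data/develop                                   # the "develop" bucket
/data/test                                      # the "test" bucket
/data/.minio.sys/buckets/develop/policy.json    # policy for "develop"
/data/.minio.sys/buckets/test/policy.json       # policy for "test"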

So I created the following policy, which makes the bucket public:

{
    "Version": "2012-10-17",
    "Statement": [{
        "Action": ["s3:GetBucketLocation", "s3:ListBucket", "s3:ListBucketMultipartUploads"],
        "Effect": "Allow",
        "Principal": {
            "AWS": ["*"]
        },
        "Resource": ["arn:aws:s3:::<BUCKETNAME>"],
        "Sid": ""
    }, {
        "Action": [
            "s3:AbortMultipartUpload", 
            "s3:DeleteObject", 
            "s3:GetObject", 
            "s3:ListMultipartUploadParts", 
            "s3:PutObject"
        ],
        "Effect": "Allow",
        "Principal": {
            "AWS": ["*"]
        },
        "Resource": ["arn:aws:s3:::<BUCKETNAME>/*"],
        "Sid": ""
    }]
}

PS: remember to replace <BUCKETNAME> with the name of the bucket you have created. I have two policy files, one for develop and one for test.

I then updated the Dockerfile to ADD those two files to the image, and the final result is:

FROM minio/minio

ENV MINIO_ACCESS_KEY minio
ENV MINIO_SECRET_KEY miniostorage

RUN mkdir -p /data/test && mkdir -p /data/develop

ADD ./config/develop/policy.json /data/.minio.sys/buckets/develop/policy.json
ADD ./config/test/policy.json /data/.minio.sys/buckets/test/policy.json

EXPOSE 9000

CMD "server","/data"

So now whenever I run the Docker image I'll always get two public buckets ready for me to work with.
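
To finish with a usage sketch (the image name here is just something I picked, and it assumes the two policy.json files live under ./config as in the ADD lines above):

# Build the image from the Dockerfile above
docker build -t local-minio .

# Run it: credentials, buckets and policies are all baked in
docker run -p 9000:9000 local-minio

# Quick sanity check: since the buckets are public, an anonymous GET of an
# object that exists in the "develop" bucket should work without credentials
curl -i http://localhost:9000/develop/some-object.txt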