s3

Docker Registry S3

#!/bin/bash

NAME=registry
IMAGE=registry:2
BUCKET=my-docker-registry-bucket
BUCKET_PATH=/docker/registry
AWS_KEY=********************
AWS_SECRET=********************
AWS_REGION=eu-west-1

# docker kill $NAME
# docker rm $NAME

docker run \
--name $NAME \
-e REGISTRY_STORAGE=s3 \
-e REGISTRY_STORAGE_S3_REGION=$AWS_REGION \
-e REGISTRY_STORAGE_S3_BUCKET=$BUCKET \
-e REGISTRY_STORAGE_S3_ROOTDIRECTORY=$BUCKET_PATH \
-e REGISTRY_STORAGE_S3_ACCESSKEY=$AWS_KEY \
-e REGISTRY_STORAGE_S3_SECRETKEY=$AWS_SECRET \
-d -p 80:5000 $IMAGE

# add these to the run command for more verbose output:
# -e DEBUG=True \
# -e LOGLEVEL=debug \
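
Note that this setup serves the registry over plain HTTP on port 80. Pushing from the registry host itself is usually fine, but any other machine's Docker daemon will refuse a non-TLS registry unless you whitelist it. A minimal sketch, assuming the client runs a recent Docker with daemon.json support (registry.example.com is a placeholder for your registry host):

# on the client machine: whitelist the plain-HTTP registry, then restart Docker
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "insecure-registries": ["registry.example.com:80"]
}
EOF
sudo systemctl restart docker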

Run the script above and test with:


docker pull busybox
docker tag busybox localhost:80/test:1
docker push localhost:80/test:1
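
If the push succeeds, the layers should show up under the bucket path in S3. You can also ask the registry itself what it stores through the standard v2 catalog endpoint:

curl http://localhost:80/v2/_catalog
# expected output, assuming only the test image above was pushed:
# {"repositories":["test"]}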

Kudos to http://romain.dorgueil.net/blog/en/docker/2014/12/21/docker-registry-amazon-s3-storage-backend.html and http://stackoverflow.com/questions/30177828/docker-registry2-0-overriding-configuration-options for the info

Static hosting with AWS S3: a super quick micro howto :)

  • Create an s3 bucket using your fully qualified domain name (FQDN) as the bucket name
  • Upload your content to the s3 bucket.
  • If you have s3cmd configured you can use this command to upload to the bucket; otherwise use any other S3 client:
s3cmd put /yourpath/yourdir s3://yourbucketname.yourdomain.com/ --recursive
  • Give read permissions to every file you want to be public:
s3cmd setacl --acl-public --recursive s3://yourbucketname.yourdomain.com/

or, if you need to, lock it down by setting it private:

s3cmd setacl --acl-private --recursive s3://yourbucketname.yourdomain.com/
  • Click your bucket, go to Properties and select “Static Website Hosting”, then “Enable website hosting”; fill in the fields for the index.html and 404.html files (these files must exist in your website tree if you want, for example, 404s to work properly). If you prefer the command line, see the s3cmd sketch after this list.
  • Take note of your URL endpoint, you’ll need it later
  • Go to AWS Route53
  • Create a record for yoursite.yourdomain.com
  • select “Alias” = yes
  • your S3 static site will appear in the list of possible selections, choose the one with the bucket name above
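
As a side note, if you'd rather skip the console step for enabling website hosting, recent s3cmd versions can do it from the command line too; a quick sketch, using the same bucket name as above:

# enable website hosting with custom index and error documents
s3cmd ws-create --ws-index=index.html --ws-error=404.html s3://yourbucketname.yourdomain.com/
# print the website configuration, including the URL endpoint
s3cmd ws-info s3://yourbucketname.yourdomain.com/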

Wait a couple of minutes (sometimes one minute is enough!) and …

You’re done!

Enjoy your new S3 static website 🙂

Cheers

Fabio

s3 logfile rotator for awstats

Simple bash script to download, clean and prepare S3 logs for awstats.

This assumes you want to keep track of S3 downloads whose logs are kept in a separate bucket.

Careful because the script will empty the log bucket!

https://github.com/fabiop/sysadm/blob/master/s3-log-rotator.sh


#!/bin/bash -e
DATE=$(date +%Y-%m-%d)
S3CMD=/usr/bin/s3cmd
# S3 bucket name without the s3:// prefix
# **** CAREFUL THIS BUCKET WILL BE EMPTIED ****
S3BUCKETNAME="my.s3.log.bucket.name"
LOGDIR=/var/log/s3/${S3BUCKETNAME}
# check that s3cmd is installed and configured
${S3CMD} ls > /dev/null || { echo 's3cmd needs to be installed and configured'; exit 1; }
# create today's logdir
mkdir -p ${LOGDIR}/${DATE}
# download today's files
if ${S3CMD} sync --recursive s3://${S3BUCKETNAME}/ ${LOGDIR}/${DATE}/
then
# clean up today's files in S3
${S3CMD} del "s3://${S3BUCKETNAME}/*"
fi
# concatenate today's files into one logfile
> ${LOGDIR}/${DATE}.log
# a placeholder file is needed so that cat doesn't fail (and -e abort) when there are no logs
> ${LOGDIR}/${DATE}/placeholder
if cat ${LOGDIR}/${DATE}/* >> ${LOGDIR}/${DATE}.log
then
# remove all the small local logfiles and their dir
rm -rf ${LOGDIR}/${DATE}/
fi
# rotate old logs
[ -f ${LOGDIR}/today.log ] && mv ${LOGDIR}/today.log ${LOGDIR}/yesterday.log
# put the log where awstats expects to find it
cp -av ${LOGDIR}/${DATE}.log ${LOGDIR}/today.log
# compress old logs
gzip -9f ${LOGDIR}/${DATE}.log
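
To keep this hands-off you would typically run the script from cron once a day and point awstats at the rotated today.log. A sketch, assuming the script is installed as /usr/local/bin/s3-log-rotator.sh (path and schedule are placeholders, adjust to taste):

# /etc/cron.d/s3-log-rotator -- run shortly after midnight, before the awstats update
5 0 * * * root /usr/local/bin/s3-log-rotator.sh

# and in your awstats config (e.g. awstats.yoursite.conf):
LogFile="/var/log/s3/my.s3.log.bucket.name/today.log"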
