Although my experience with Amazon AWS has mostly involved 100+ EC2 instances in VPCs, Elastic Load Balancers, and multi-AZ RDS extra-large instances, getting the scale and reliability of AWS does not have to be an expensive endeavor.
He started with WordPress because the intent was a more dynamic site, but that never materialized: his updates appeared more often on Facebook than on the site itself. The site mainly exists so he can show clients his awesome ski, snowboarding, and fly-fishing expertise in the Telluride area. Worse, the shared WordPress host was taking up to 11 seconds to deliver the HTML page (not cool for a website in 2013).
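If you want to reproduce a timing like that yourself, curl can report the total time to fetch just the raw HTML (the URL is the live site; your numbers will vary):

```shell
# Measure how long the server takes to deliver the HTML page.
# -o /dev/null discards the body, -s silences progress, -w prints timing.
curl -o /dev/null -s -w 'time_total: %{time_total}s\n' http://www.panchowinter.com/
```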
Amazon recently introduced the ability to serve static HTML directly from S3. The process involved dumping the WordPress site to HTML/CSS/JS/images and then uploading it to S3. We first tried a few plugins like Really Static, but because the WordPress site was so slow this was taking forever, and some required extensions were not available on the shared host. So we used a much cruder approach (aka the command line):
mkdir pancho
cd pancho
wget -k -K -E -r -l 10 -p -N -F --restrict-file-names=windows -nH http://www.panchowinter.com
This dumped all the HTML, images, and supporting files into a 'pancho' folder, which we then s3sync'ed up to Amazon:
cd ..
s3sync -p -r pancho/ www.panchowinter.com:
(-r for recursive, -p to make the uploaded files public).
To get CloudFront working, I created a distribution pointed at his site and then did a search and replace so that static content is served from the CloudFront distribution.
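That search and replace can be done in one pass over the exported files; here is a sketch with GNU sed, where the distribution domain (d1234abcd.cloudfront.net) is a placeholder for your own CloudFront domain:

```shell
# Rewrite absolute asset URLs in every exported HTML file so images, CSS,
# and JS are fetched from the CloudFront distribution instead of the bucket.
# d1234abcd.cloudfront.net is a hypothetical distribution domain.
find pancho -name '*.html' -print0 | xargs -0 sed -i \
  's|http://www\.panchowinter\.com/wp-content|http://d1234abcd.cloudfront.net/wp-content|g'
```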
We set the S3 bucket www.panchowinter.com for static website hosting with index.html as the index document:
Then created a panchowinter.com bucket that redirects to www.panchowinter.com:
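For reference, the same two-bucket setup shown in the console can also be applied with the AWS CLI; this is a sketch assuming the CLI is installed and configured with credentials for the account that owns the buckets:

```shell
# Enable static website hosting on the main bucket, serving index.html
# as the index document.
aws s3api put-bucket-website --bucket www.panchowinter.com \
  --website-configuration '{"IndexDocument": {"Suffix": "index.html"}}'

# Make the bare-domain bucket redirect every request to the www bucket.
aws s3api put-bucket-website --bucket panchowinter.com \
  --website-configuration '{"RedirectAllRequestsTo": {"HostName": "www.panchowinter.com"}}'
```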
We also moved the DNS to Route 53 and pointed alias records at the hosted S3 buckets:
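The alias record can also be created through the Route 53 API; a sketch with the AWS CLI, where ZXXXXXXXXXXXXX is a placeholder for your hosted zone ID and the AliasTarget values assume a bucket in us-east-1 (the S3 website hosted-zone ID differs per region):

```shell
aws route53 change-resource-record-sets --hosted-zone-id ZXXXXXXXXXXXXX \
  --change-batch '{
    "Changes": [{
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "www.panchowinter.com.",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z3AQBSTGFYJSTF",
          "DNSName": "s3-website-us-east-1.amazonaws.com.",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'
```

The AliasTarget HostedZoneId here is the fixed ID for us-east-1 S3 website endpoints, not your own zone's ID; check the S3 endpoint table for other regions.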
The end result was a page load time no longer measured in seconds (2.72):
but now in milliseconds (122):
Also, thanks to CloudFront, the supporting files often load in less than 100 ms.