“Thunderstruck” by AC/DC 🎵
It’s done. Ker-chunk.com, originally built in WordPress, is now archived as a permanent static S3 website. The credits live on even if the games don’t. It will cost me cents. The migration & re-architecture is complete.
Duration: 6 hours total
Status: No outages
Complexity: Easy
Timeline: 2 Weekends
Mood: So much trash left in that old account.
Lessons Learned:
- Simply Static is amazing and really easy. Decoupling the hosted frontend from the CMS is also more secure. If you are in the “WordPress is not secure!!” camp, this is for you.
- The frontend is now highly available and stored with eleven 9s of durability. Nuking WordPress was my choice, but you could keep WordPress as the authoring backend and use this same model without end users ever knowing WordPress is there.
- You can get the names back for S3 buckets but there’s no good reason to
- AWS Resource Explorer will truly index everything you have ever owned or never meant to make in AWS, including things you forgot…
- Domain transfer between AWS accounts is instant (less than 1 minute)
- Set up your hosted zone before you transfer a domain. If both are in AWS you can avoid outages during the name server change step (a rough sketch follows this list)
- We should value cleanup more
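For anyone repeating the domain move, here is a minimal sketch of that order of operations, not an official runbook. It assumes AWS CLI profiles named old-account and new-account; the domain, account ID, and name servers shown are placeholders, not real values.

```bash
# A rough sketch of the order that avoided an outage. Profiles, account ID,
# and name servers are all placeholders.

# 1. In the new account: create the hosted zone first.
aws route53 create-hosted-zone \
  --name ker-chunk.com \
  --caller-reference "migration-$(date +%s)" \
  --profile new-account

# 2. In the old account: offer the registered domain to the new account.
aws route53domains transfer-domain-to-another-aws-account \
  --domain-name ker-chunk.com --account-id 111111111111 \
  --region us-east-1 --profile old-account

# 3. In the new account: accept the transfer with the password returned by step 2.
aws route53domains accept-domain-transfer-from-another-aws-account \
  --domain-name ker-chunk.com --password "<password-from-step-2>" \
  --region us-east-1 --profile new-account

# 4. Copy your records into the new zone (see the Route 53 section below), then,
#    last of all, point the domain at the new zone's name servers.
aws route53domains update-domain-nameservers \
  --domain-name ker-chunk.com \
  --nameservers Name=ns-2048.awsdns-64.com Name=ns-2049.awsdns-65.net \
  --region us-east-1 --profile new-account
```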
When we estimate, we are both way off and also accurate.
My estimate covered the messing around I knew I would do that wasn’t part of the actual migration, plus the knowledge that I would find more work to do unrelated to it. The actual migration steps took 6 hours across two different weekends; everything beyond that was me messing around, and I did plenty of it. Here’s what I learned…
What’s In an (S3 Bucket) Name?
“Messing around” included, but was not limited to, deleting entire S3 buckets to see if I could get their names back in another account, and finding buckets I had forgotten about.
You cannot transfer a bucket as-is, only the data in it, despite what this incorrect post implies. That post actually grants list, get, put, and delete object permissions, and its use cases are closer to what you would do with S3 Transfer Acceleration or AWS DataSync. Don’t use it if what you are trying to do is move the whole bucket, name included. If you know S3, you know bucket names are globally unique, and today you cannot transfer a bucket and keep its name; you can only copy the objects into a new bucket with a new name.
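If copying the objects is all you need, the move itself is short. A rough sketch, with hypothetical bucket names and a new-account profile that has read access to the source bucket (via its bucket policy) and write access to the destination:

```bash
# Sync the objects from the old account's bucket into the new account's bucket.
# Bucket names and the profile are placeholders.
aws s3 sync s3://old-bucket-in-old-account s3://new-bucket-in-new-account \
  --profile new-account
```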
Keeping the name wasn’t necessary, since I was going to point CloudFront at the new buckets anyway, and if you think it is necessary, chances are it’s not.
AWS doesn’t recommend deleting the bucket and re-grabbing it in an attempt to keep the unique name, because it cannot guarantee you can get the name back. I ignored this for science. 🙂
I pontificated, “What if I want that exact name?” It turns out that if you delete your bucket to get the name back, you have to wait 1 to 1.5 hours before the create-bucket API call will succeed; until then you get warnings that another operation is still happening. Your previous bucket, while visually deleted from the console and list operations, is not actually finished being scrubbed on AWS’s side. This ate up a few hours on its own, so I did other things and kept checking in. I did eventually get both of my bucket names back in a brand new account.
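The waiting game scripts itself. A hedged sketch with a placeholder bucket name and profile; the timing is only what I observed, not a documented guarantee:

```bash
# Keep retrying until AWS finishes scrubbing the deleted bucket and releases the name.
# Bucket name and profile are placeholders; for me this took roughly 1 to 1.5 hours.
until aws s3api create-bucket \
        --bucket the-exact-name-i-wanted-back \
        --region us-east-1 \
        --profile new-account; do
  echo "Name not released yet; retrying in 10 minutes..."
  sleep 600
done
```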
I thought, “Oh, cool,” having found out something you can only learn by trying, and that in retrospect is probably useless information. While waiting, I also turned on AWS Resource Explorer to look for orphaned resources for fun. I found orphaned IGWs, CloudFront distributions, and, to my horror, multiple forgotten buckets storing old WebGL builds and entire old websites that had been statically hosted…
After 2 hours it still had not finished indexing… I logged back into the account today, and seeing the count up in the hundreds, default resources mixed in with things like old Facebook policies we made, I cringed a little.
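Turning it on takes only a few calls. A sketch under assumptions: a single region, a view I’m calling everything (hypothetical), and query strings that are roughly what I went hunting with; indexing still takes however long it takes:

```bash
# Turn on Resource Explorer in one region and make a default view to search against.
# (The index has to finish creating before the view can be made; pause if scripting this.)
aws resource-explorer-2 create-index --region us-east-1
VIEW_ARN=$(aws resource-explorer-2 create-view --view-name everything \
  --region us-east-1 --query 'View.ViewArn' --output text)
aws resource-explorer-2 associate-default-view --view-arn "$VIEW_ARN" --region us-east-1

# Then go looking for the things you forgot (indexing can take hours on an old account).
aws resource-explorer-2 search --query-string "resourcetype:s3:bucket" --region us-east-1
aws resource-explorer-2 search --query-string "resourcetype:cloudfront:distribution" --region us-east-1
```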
And finally, here is what always adds to your “messing around” estimate: getting creative with the AWS CLI in the hope that it does things you wish it did.
Route 53: A Love Song
While performing the migration, I realized it would be supremely unfun to transfer a domain with 50+ records the way the instructions tell you to, FOR EACH RECORD… “add an Action and a ResourceRecordSet element.” The AWS CLI command aws route53 list-resource-record-sets outputs them as ResourceRecordSets, which means you have to reshape them by hand instead of being able to pass ResourceRecordSets straight to aws route53 change-resource-record-sets and have the AWS CLI understand what you wanted to do…
For anyone reading this and thinking “But Molly, it’s clear in the instructions”: yes, I know. I don’t think it should be that way. People say this to women when they write things, not realizing they are pointing out an opportunity to make our lives easier.
Obviously, I did what the instructions said. But I keep thinking about how not-simple this is when the use case is bigger. What I wanted was an example that takes the output of list-resource-record-sets and preps it for change-resource-record-sets based on the records you actually have, instead of me having to do it myself.
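This isn’t an official Route 53 feature, just the shape of what I wished for: a hedged sketch that turns list-resource-record-sets output into a change batch for the new zone. The zone IDs and profile names are placeholders, jq is assumed, and it skips NS and SOA (including any delegated-subdomain NS records, which was fine for my simple zone):

```bash
# Export the old zone's records and reshape them into a change batch,
# skipping NS and SOA because the new hosted zone gets its own.
aws route53 list-resource-record-sets \
    --hosted-zone-id OLD_ZONE_ID --profile old-account --output json \
  | jq '{Changes: [.ResourceRecordSets[]
        | select(.Type != "NS" and .Type != "SOA")
        | {Action: "UPSERT", ResourceRecordSet: .}]}' \
  > change-batch.json

# Review and tweak change-batch.json, then apply it to the zone in the new account.
aws route53 change-resource-record-sets \
    --hosted-zone-id NEW_ZONE_ID \
    --change-batch file://change-batch.json \
    --profile new-account
```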
Route 53 could cut out steps. Hopefully this reads as a feature request: further automate Steps 4, 5, and 6, let users review the final change in the new account before it applies to the hosted zone as part of an acceptance chain, and let them grab the output file(s) for final tweaks if any records need removing.
While my use case was easy, anything over 50+ records (people with enterprise use cases) requires you to sit down, open it all up in Visual Studio Code, create a git repo to make sure you didn’t mess it up, and, when you do f*ck it up, be prepared to redo Step … 3 (4, 5, 6) as part of … Step 7. Be sure you create your hosted zone and records first; changing the name servers on the domain comes last. Order of operations, and not screwing up these records, matters if you don’t want an outage.
By the miracle known as “I had few records,” my change batch came out perfect in the diff check, and because Route 53 handles all of the hard parts of the name server handoff under the hood if you follow the rules, I had no outages on any service: not the site, not email, nothing for ker-chunk.com. The only records that should have changed were the NS and SOA records, and they were the only ones that did.
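For anyone repeating this with more records, the diff check can be scripted too. A rough sketch, assuming jq and the same placeholder zone IDs and profiles as above; empty output means the copy matches:

```bash
# Compare everything except NS and SOA between the old and new hosted zones
# before changing name servers on the domain. No output means no drift.
diff \
  <(aws route53 list-resource-record-sets --hosted-zone-id OLD_ZONE_ID --profile old-account \
      | jq -S '[.ResourceRecordSets[] | select(.Type != "NS" and .Type != "SOA")]') \
  <(aws route53 list-resource-record-sets --hosted-zone-id NEW_ZONE_ID --profile new-account \
      | jq -S '[.ResourceRecordSets[] | select(.Type != "NS" and .Type != "SOA")]')
```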
I’ve mentioned before: Route 53 is my favorite AWS service. It’s simple, it’s easy. It was the first one I learned deeply because it was fueled by the vengeance of an underestimated woman trying to make a point to GoDaddy.
Pay Down Time
My use case was to archive an account and website, but for others it could be migrating from an acquired account into a new account with cookie-cutter settings for a larger business. You could also discover, in owning an account, that it has too many resources in it and needs to be busted up for security, which might mean migrating large record sets into another account. Did you know you can have 10,000 records per hosted zone (and request a higher limit)? I guess someone needed that.
Often doing something small first shows where the pain will really be once you repeat it. For example, in learning Simply Static I realized that converting the site into static pages devoid of WordPress wasn’t the hard part. The hard part, had I wanted to keep WordPress, would have been setting up all the version control and CI/CD pipelines around a local WordPress install, decoupling the frontend, and then validating that the whole workflow was good end-to-end. It was immediately clear that this is where SimplyCDN charges for their business and why the basic functionality of Simply Static is free.
In any case, what I learned in this tiny re-architecture was something I already knew.
Even when the migration is easy, you will find cleanup. Maybe the account goes to an engineer in another department who really likes cleaning things up because they know the value.
That engineer cleans up the hundreds of thousands of things we all left behind together. They go around, person to person, asking them to find the time. Please just find time. Find time to learn the history. Find time to tell me the journey of these resources so I know deleting them is okay.
This migration and re-architecture was the easiest I’ve ever done. Realizing all the mess I left alongside my former team though? That made me feel guilty.
That mess reminded me of all the mess we all leave, and how much time it takes other teams, when they aren’t the person who knows the history of that mess, to understand what is and is not okay to archive, delete, and move on from.
It confirmed for me that those who absorb these problems are required to learn so much rich history about companies, even when there are defined compliance boundaries. In their jobs, they are the bards. Their automation scripts are stories where you can see the thousands of orphaned security groups, tags, snapshots, and files they’ve sifted through to find answers and prevent outages or security vulnerabilities. You can see the training applied to them as they absorbed the fragmentation across the choices of others.
You can see that what they wanted to learn in order to move forward may have been defeated by what they had to learn instead, because of the past choices of others who got to move forward.
The universe continues to be ironic and blesses with opportunity to learn those new things anyway, often disguised as the act of letting go. You only have to look for the Uno Reverse and know where to apply the things you didn’t get to learn.