Migrating My Cloud Resume to Terraform: Remote State, OIDC, and Zero Downtime
When I joined the Cloud Resume Challenge cohort in December 2025, the requirement was to build everything from the CLI instead of clicking through the AWS Console. I'd already built my cloud resume manually in the Console, so I asked Andrew Brown if I could migrate my existing infrastructure to Terraform instead. He said yes.
Problem: I couldn't tear it all down and start over.
The Constraint
My cloud resume was live and working. S3 hosting, CloudFront CDN, Lambda visitor counter, DynamoDB, API Gateway, custom domain. Everything built manually in the Console.
But I was actively job searching. Recruiters were visiting it. My visitor counter was past 300. It was linked in LinkedIn posts and job applications.
Tearing it down meant broken links in applications, dead portfolio URLs, lost visitor data, and potential downtime.
So: import existing resources into Terraform. Zero downtime migration.
The Import Process
Importing AWS resources into Terraform sounds simple. Write the config to match existing infrastructure, run terraform import, done.
Reality was messier.
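The loop on paper, sketched with a placeholder bucket name and resource address (not my actual config):

```hcl
# 1. Write a resource block that mirrors the infrastructure that already exists.
resource "aws_s3_bucket" "resume" {
  bucket = "my-resume-bucket" # placeholder: the bucket built in the Console
}

# 2. Pull the real resource into Terraform state (run from the shell):
#      terraform import aws_s3_bucket.resume my-resume-bucket

# 3. Verify the config matches reality; the goal is an empty diff:
#      terraform plan    # should report no changes
```

Step 3 is the safety net for a zero-downtime migration: until plan shows no changes, apply could modify live infrastructure.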
Problem 1: CloudFront Logging ACL
First attempt:
```
Error: Error putting S3 bucket ACL: InvalidArgument: S3 bucket does not enable ACL access
	status code: 400
```
CloudFront logging needed ACL permissions to write logs to my S3 bucket. But my security settings had IgnorePublicAcls = true. These two requirements fought each other.
Fix: Configure bucket ownership controls in Terraform first, then enable ACLs for CloudFront logging.
Lesson: AWS security defaults sometimes conflict with service requirements.
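Roughly what the fix looks like in Terraform (resource and bucket names are placeholders; log-delivery-write is the canned ACL CloudFront's standard logging expects):

```hcl
# Modern buckets default to BucketOwnerEnforced, which disables ACLs entirely.
# BucketOwnerPreferred re-enables them so CloudFront can deliver logs.
resource "aws_s3_bucket_ownership_controls" "logs" {
  bucket = aws_s3_bucket.logs.id # placeholder log bucket
  rule {
    object_ownership = "BucketOwnerPreferred"
  }
}

# With ACLs enabled, grant the log-delivery group write access.
resource "aws_s3_bucket_acl" "logs" {
  bucket = aws_s3_bucket.logs.id
  acl    = "log-delivery-write"
}
```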
Problem 2: IAM Role Import Path
Importing my Lambda execution role failed. IAM role ARNs include a path like /service-role/, but terraform import expects only the role name.
Fix: Use just the role name in the import command, but include the full path in the Terraform config to match the existing resource.
Lesson: ARN paths and resource names are different things.
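Concretely, something like this (role and resource names are placeholders, not my actual config):

```hcl
resource "aws_iam_role" "lambda_exec" {
  name = "resume-counter-role" # the bare role name
  path = "/service-role/"      # the ARN's path lives here, not in the import ID

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "lambda.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

# Import by role name only -- including the /service-role/ path fails:
#   terraform import aws_iam_role.lambda_exec resume-counter-role
```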
Problem 3: S3 ACL Ordering
After fixing CloudFront logs, I tried setting the ACL in Terraform and got another error. Modern S3 buckets disable ACLs by default. To enable them, you need bucket ownership controls configured first.
Fix: Use depends_on to make sure ownership controls apply before setting ACLs.
Lesson: S3 ACL configuration requires specific ordering.
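A minimal sketch of the ordering fix, assuming an aws_s3_bucket_ownership_controls.logs resource elsewhere in the config manages the bucket's ownership setting:

```hcl
resource "aws_s3_bucket_acl" "logs" {
  bucket = aws_s3_bucket.logs.id # placeholder log bucket
  acl    = "log-delivery-write"

  # Force Terraform to apply the ownership controls (which re-enable ACLs)
  # before it tries to set the ACL itself.
  depends_on = [aws_s3_bucket_ownership_controls.logs]
}
```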
Remote State: Why I Needed It
At this point, everything worked with local Terraform state. I could run terraform plan and terraform apply from my laptop.
Then someone in Discord mentioned OIDC and remote state. I started reading and realized: keeping state local wasn't great.
My PC had broken twice this year. Just got it back, and it's already broken again. I'm using my laptop now, but the thought of my Terraform state only existing on one machine that could break? Not ideal.
Remote state in S3 meant: encrypted storage, work from anywhere, won't lose everything if hardware fails.
Setting Up Remote State
Problem: I needed an S3 bucket to store state. But I was using Terraform to create S3 buckets. How do I create the bucket that stores the state for creating buckets?
Solution: Two steps. Create remote state infrastructure (S3 bucket and DynamoDB table) using local state. Then migrate local state to S3.
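Step one, sketched with placeholder names and applied while state is still local:

```hcl
# Phase 1: create the backend infrastructure using local state.
resource "aws_s3_bucket" "tf_state" {
  bucket = "my-resume-tf-state" # placeholder; bucket names are globally unique
}

# Versioning lets you recover an earlier state file if an apply goes wrong.
resource "aws_s3_bucket_versioning" "tf_state" {
  bucket = aws_s3_bucket.tf_state.id
  versioning_configuration {
    status = "Enabled"
  }
}

# Encrypt state at rest -- it can contain sensitive values.
resource "aws_s3_bucket_server_side_encryption_configuration" "tf_state" {
  bucket = aws_s3_bucket.tf_state.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

# Lock table for the S3 backend; the hash key must be exactly "LockID".
resource "aws_dynamodb_table" "tf_lock" {
  name         = "terraform-locks" # placeholder table name
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"
  attribute {
    name = "LockID"
    type = "S"
  }
}
```

Step two is pointing Terraform at the new bucket with a backend block and re-running terraform init, which offers to copy the local state into S3.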
DynamoDB State Locking
S3 stores the state, but what prevents two processes from modifying infrastructure simultaneously?
DynamoDB state locking. When Terraform runs, it acquires a lock in DynamoDB. If another process tries to run, it sees the lock and waits. Prevents state corruption.
Cost: Essentially free. DynamoDB charges about $0.000001 per operation. For 10 Terraform runs per month, that's nothing.
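The lock table plugs into the backend configuration (a sketch with placeholder names; dynamodb_table is the classic locking mechanism for the S3 backend):

```hcl
terraform {
  backend "s3" {
    bucket         = "my-resume-tf-state"            # placeholder state bucket
    key            = "cloud-resume/terraform.tfstate"
    region         = "us-east-1"                     # placeholder region
    encrypt        = true
    dynamodb_table = "terraform-locks"               # lock acquired on every run
  }
}
```

After adding this block, terraform init -migrate-state moves the existing local state into the bucket.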
OIDC: No More Stored Credentials
My GitHub Actions workflow used AWS access keys stored in GitHub Secrets. It worked fine. But the same Discord conversation mentioned OIDC as more secure.
Why OIDC
Access keys are long-lived credentials that sit in GitHub Secrets. If GitHub were compromised, those keys would be exposed until I manually rotated them.
OIDC was different. No stored credentials. GitHub Actions requests temporary credentials for each workflow run. They expire after an hour. If something went wrong, credentials would be useless almost immediately.
Setup took longer than access keys, but worth it.
Repository Restriction
When I set up the OIDC provider in AWS, I created an IAM role that GitHub Actions could assume. Critical part: restrict it to ONLY my specific repository. Without that, any GitHub repository could potentially authenticate to my AWS account.
I also scoped IAM permissions tightly. The role can only affect resources in my cloud resume project. If GitHub Actions got compromised, it couldn't touch anything else. Limited blast radius.
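Roughly what that looks like in Terraform. The org/repo, role name, and thumbprint are placeholders; the sub condition on the trust policy is the part doing the repository restriction:

```hcl
resource "aws_iam_openid_connect_provider" "github" {
  url             = "https://token.actions.githubusercontent.com"
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = ["6938fd4d98bab03faadb97b34396831e3780aea1"] # placeholder
}

resource "aws_iam_role" "github_actions" {
  name = "cloud-resume-deploy" # placeholder role name

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Federated = aws_iam_openid_connect_provider.github.arn }
      Action    = "sts:AssumeRoleWithWebIdentity"
      Condition = {
        StringEquals = {
          "token.actions.githubusercontent.com:aud" = "sts.amazonaws.com"
        }
        # Only workflows from this one repository can assume the role.
        StringLike = {
          "token.actions.githubusercontent.com:sub" = "repo:my-org/cloud-resume:*"
        }
      }
    }]
  })
}
```

Without that sub condition, the trust policy would accept a token from any repository on GitHub.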
What I Learned
Importing saved my live infrastructure. Destroying and recreating would have been cleaner, but my site was live and linked in job applications. Worth the extra complexity.
Remote state made things safer. My PC broke twice this year. Moving state to S3 meant I wouldn't lose everything if hardware failed. Plus encryption, state locking, and ability to work from any machine.
Discord taught me the important stuff. Both remote state and OIDC came from Discord conversations. Community helped me figure out what actually mattered.
OIDC was worth the setup time. Access keys would have been faster, but keeping long-lived credentials out of GitHub Secrets was the right call.
Scoping IAM permissions mattered. I could have used wildcard permissions and been done faster, but limiting to only cloud resume resources meant less damage if something went wrong.
What's Next
With the migration complete, I can showcase production-grade infrastructure as code in job applications. Everything is version controlled, reviewable, and reproducible.
AWS Solutions Architect certification exam: January 16, 2026.
Update
I received Lieutenant Commander grade (really experienced) from the Cloud Resume Challenge cohort review. The migration from Console to Terraform with remote state and OIDC demonstrated production-level infrastructure as code skills.
Want to see the code? Check out my GitHub repository for the complete Terraform configuration.
Connect with me on LinkedIn if you're working on similar cloud projects.