
Migrating My Cloud Resume to Terraform: Remote State, OIDC, and Zero Downtime

When I joined the Cloud Resume Challenge cohort in December 2025, the requirement was to build everything from the CLI instead of clicking through the AWS Console. I'd already built my cloud resume manually in the Console, so I asked Andrew Brown if I could migrate my existing infrastructure to Terraform instead. He said yes.

Problem: I couldn't tear it all down and start over.

The Constraint

My cloud resume was live and working. S3 hosting, CloudFront CDN, Lambda visitor counter, DynamoDB, API Gateway, custom domain. Everything built manually in the Console.

But I was actively job searching. Recruiters were visiting it. My visitor counter was past 300. It was linked in LinkedIn posts and job applications.

Tearing it down meant broken links in applications, dead portfolio URLs, lost visitor data, and potential downtime.

So: import existing resources into Terraform. Zero downtime migration.

The Import Process

Importing AWS resources into Terraform sounds simple. Write the config to match existing infrastructure, run terraform import, done.
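On paper, the flow is just a few commands. A sketch of the happy path, with an illustrative resource address and bucket name (not my actual config):

```shell
# 1. Write a resource block matching the live infrastructure
# 2. Pull the existing resource into Terraform state
terraform import aws_s3_bucket.site example-resume-bucket

# 3. Verify the config matches reality; a clean import shows no changes
terraform plan
```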

Reality was messier.

Problem 1: CloudFront Logging ACL

First attempt:

Error: Error putting S3 bucket ACL: InvalidArgument: S3 bucket does not enable ACL access
  status code: 400

CloudFront logging needed ACL permissions to write logs to my S3 bucket. But my security settings had IgnorePublicAcls = true. These two requirements fought each other.

Fix: Configure bucket ownership controls in Terraform first, then enable ACLs for CloudFront logging.
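A minimal sketch of that fix in HCL, with illustrative resource names (my actual config may differ):

```hcl
# Ownership controls must exist before the bucket will accept ACLs
resource "aws_s3_bucket_ownership_controls" "logs" {
  bucket = aws_s3_bucket.logs.id

  rule {
    object_ownership = "BucketOwnerPreferred"
  }
}

# Grants the S3 log-delivery group write access for CloudFront logs
resource "aws_s3_bucket_acl" "logs" {
  bucket = aws_s3_bucket.logs.id
  acl    = "log-delivery-write"
}
```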

Lesson: AWS security defaults sometimes conflict with service requirements.

Problem 2: IAM Role Import Path

Importing my Lambda execution role failed. IAM role ARNs include a path like /service-role/, but terraform import expects only the role name.

Fix: Use just the role name in the import command, but include the full path in the Terraform config to match the existing resource.
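As a sketch (role name and resource address are illustrative), the import ID carries only the name:

```shell
# Import by role NAME; the ARN's /service-role/ path is not part of the ID
terraform import aws_iam_role.lambda_exec my-lambda-exec-role
```

In the matching `aws_iam_role` block, set `path = "/service-role/"` so the config agrees with the existing resource.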

Lesson: ARN paths and resource names are different things.

Problem 3: S3 ACL Ordering

After fixing CloudFront logs, I tried setting the ACL in Terraform and got another error. Modern S3 buckets disable ACLs by default. To enable them, you need bucket ownership controls configured first.

Fix: Use depends_on to make sure ownership controls apply before setting ACLs.
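The dependency itself is one line (names illustrative):

```hcl
resource "aws_s3_bucket_acl" "logs" {
  # Force ownership controls to apply before the ACL is set
  depends_on = [aws_s3_bucket_ownership_controls.logs]

  bucket = aws_s3_bucket.logs.id
  acl    = "log-delivery-write"
}
```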

Lesson: S3 ACL configuration requires specific ordering.

Remote State: Why I Needed It

At this point, everything worked with local Terraform state. I could run terraform plan and terraform apply from my laptop.

Then someone in Discord mentioned OIDC and remote state. I started reading and realized: keeping state local wasn't great.

My PC had broken twice this year. Just got it back, and it's already broken again. I'm using my laptop now, but the thought of my Terraform state only existing on one machine that could break? Not ideal.

Remote state in S3 meant: encrypted storage, work from anywhere, won't lose everything if hardware fails.

Setting Up Remote State

Problem: I needed an S3 bucket to store state. But I was using Terraform to create S3 buckets. How do I create the bucket that stores the state for creating buckets?

Solution: Two steps. Create remote state infrastructure (S3 bucket and DynamoDB table) using local state. Then migrate local state to S3.
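Step one, sketched with illustrative names while state is still local:

```hcl
# State bucket; versioning keeps old state files recoverable
resource "aws_s3_bucket" "tf_state" {
  bucket = "example-terraform-state"
}

resource "aws_s3_bucket_versioning" "tf_state" {
  bucket = aws_s3_bucket.tf_state.id

  versioning_configuration {
    status = "Enabled"
  }
}

# Lock table; Terraform requires the hash key to be named LockID
resource "aws_dynamodb_table" "tf_lock" {
  name         = "terraform-locks"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}
```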

DynamoDB State Locking

S3 stores the state, but what prevents two processes from modifying infrastructure simultaneously?

DynamoDB state locking. When Terraform runs, it acquires a lock in DynamoDB. If another process tries to run, it sees the lock and waits. Prevents state corruption.

Cost: Essentially free. DynamoDB charges about $0.000001 per operation. For 10 Terraform runs per month, that's nothing.
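Both pieces come together in the backend block. After adding it, `terraform init -migrate-state` moves the local state into S3 (bucket and table names are illustrative):

```hcl
terraform {
  backend "s3" {
    bucket         = "example-terraform-state"
    key            = "cloud-resume/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-locks"
  }
}
```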

OIDC: No More Stored Credentials

My GitHub Actions workflow used AWS access keys stored in GitHub Secrets. It worked fine. But the same Discord conversation mentioned OIDC as more secure.

Why OIDC

Access keys are long-lived credentials that sit in GitHub Secrets. If GitHub were compromised, those keys would be exposed until I manually rotated them.

OIDC was different. No stored credentials. GitHub Actions requests temporary credentials for each workflow run. They expire after an hour. If something went wrong, credentials would be useless almost immediately.
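A GitHub Actions workflow using OIDC looks roughly like this; the account ID, role name, and region are placeholders:

```yaml
permissions:
  id-token: write   # lets the job request an OIDC token
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/github-actions-deploy
          aws-region: us-east-1
      # Later steps get short-lived credentials; nothing is stored in Secrets
```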

Setup took longer than access keys, but worth it.

Repository Restriction

When I set up the OIDC provider in AWS, I created an IAM role that GitHub Actions could assume. Critical part: restrict it to ONLY my specific repository. Without that, any GitHub repository could potentially authenticate to my AWS account.
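That restriction lives in the role's trust policy condition. A sketch, with placeholder account ID and repository:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {
      "Federated": "arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com"
    },
    "Action": "sts:AssumeRoleWithWebIdentity",
    "Condition": {
      "StringEquals": {
        "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
      },
      "StringLike": {
        "token.actions.githubusercontent.com:sub": "repo:example-user/cloud-resume:*"
      }
    }
  }]
}
```

Without the `sub` condition, any repository that can mint a GitHub OIDC token could attempt to assume the role.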

I also scoped IAM permissions tightly. The role can only affect resources in my cloud resume project. If GitHub Actions got compromised, it couldn't touch anything else. Limited blast radius.
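Scoping means listing explicit resource ARNs instead of `"Resource": "*"`. A fragment of what that might look like, with a placeholder bucket name:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
    "Resource": [
      "arn:aws:s3:::example-resume-bucket",
      "arn:aws:s3:::example-resume-bucket/*"
    ]
  }]
}
```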

What I Learned

Importing saved my live infrastructure. Destroying and recreating would have been cleaner, but my site was live and linked in job applications. Worth the extra complexity.

Remote state made things safer. My PC broke twice this year. Moving state to S3 meant I wouldn't lose everything if hardware failed. Plus encryption, state locking, and ability to work from any machine.

Discord taught me the important stuff. Both remote state and OIDC came from Discord conversations. Community helped me figure out what actually mattered.

OIDC was worth the setup time. Access keys would have been faster, but keeping long-lived credentials out of GitHub Secrets was the right call.

Scoping IAM permissions mattered. I could have used wildcard permissions and been done faster, but limiting to only cloud resume resources meant less damage if something went wrong.

What's Next

With the migration complete, I can showcase production grade infrastructure as code in job applications. Everything is version controlled, reviewable, and reproducible.

AWS Solutions Architect certification exam: January 16, 2026.

Update

I received Lieutenant Commander grade (really experienced) from the Cloud Resume Challenge cohort review. The migration from Console to Terraform with remote state and OIDC demonstrated production-level infrastructure as code skills.


Want to see the code? Check out my GitHub repository for the complete Terraform configuration.

Connect with me on LinkedIn if you're working on similar cloud projects.
