Why I Use Architect (arc.codes) for AWS Development

Nikos Katsikanis - September 8, 2025

I ship faster on AWS with Architect. The manifest stays readable, local dev is quick, deploys are one command, and the defaults match how I actually build apps. Here is what worked for our project and a few habits that kept production calm.

The gist

I like tools that get out of the way. Architect does that. It gives me a small, declarative manifest that maps to what I need on AWS: HTTP routes, scheduled jobs, static files, and DynamoDB tables and indexes. The sandbox boots fast, and deploys are predictable.

The manifest stays human

This is the sort of file I am happy to commit and read later:

@app
myapp

@http
/*
  method any
  src server

@aws
timeout 120
region us-east-2
profile myapp
runtime nodejs20.x

@plugins
architect/plugin-typescript
architect/plugin-lambda-invoker

@typescript
base-runtime nodejs20.x

@scheduled
unrenewed
  cron 0 0 * * ? *
  src app/cron/unrenewed
metricsAggregation
  cron 20 0 * * ? *
  src app/cron/metricsAggregation

@static

@tables
data
  pk *String
  sk **String

searchstring
  id *String
  field **String

@tables-indexes
data
  sk *String
  pk **String
  name reverseIndex

searchstring
  category *String
  term **String
  name searchIndex

No giant templates. No guessing where resources came from. The intent is clear, and code stays close to the cloud state.

Local feels like prod without the bill

arc sandbox starts a local HTTP API and DynamoDB shim in seconds. I can exercise routes, scheduled tasks, and table calls end to end. When it works, arc deploy pushes to AWS the same way each time.
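Those table calls run against the same query code locally and in the cloud. To illustrate the reverseIndex from the manifest (sk as partition key, pk as sort key), here is a sketch of building DocumentClient-style query params; the user#/order# key scheme is a made-up example, not our actual schema, and in practice @architect/functions resolves the generated physical table name for you:

```javascript
// Build DynamoDB DocumentClient query params against the data table's
// reverseIndex from the manifest (sk is the index partition key, pk the
// sort key). The user#<id> / order#<id> key scheme is hypothetical.
function ordersForUser(userId) {
  return {
    TableName: 'data', // logical name; arc.tables() maps to the real one
    IndexName: 'reverseIndex',
    KeyConditionExpression: 'sk = :sk AND begins_with(pk, :prefix)',
    ExpressionAttributeValues: {
      ':sk': `user#${userId}`,
      ':prefix': 'order#',
    },
  };
}
```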

Environments that do what I ask

arc env sets Lambda config per stage. In CI I run:

arc env --add --env staging SESSION_SECRET '...'
arc env --add --env staging S3_REGION 'us-east-1'
arc env --unset --env staging S3_ENDPOINT || true
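Values set this way land on the function as plain environment variables. A minimal sketch of reading them (names reused from the commands above; the fallback region and fail-fast guard are my own habits, not anything Architect requires):

```javascript
// Per-stage config set with arc env arrives as ordinary environment
// variables at runtime. Fail fast on required secrets, default the rest.
function getConfig(env = process.env) {
  if (!env.SESSION_SECRET) throw new Error('SESSION_SECRET is not set');
  return {
    sessionSecret: env.SESSION_SECRET,
    s3Region: env.S3_REGION || 'us-east-1',
    // S3_ENDPOINT is optional; it gets unset in staging above
    s3Endpoint: env.S3_ENDPOINT,
  };
}
```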

Why I like it:

- Small, sharp defaults
- Deploys that fit in a tweet

The workflow file stays short. I keep repo secrets for sensitive values and repo vars for per-stage config, then map them with arc env and deploy:


- name: Set staging config
  if: github.ref == 'refs/heads/dev'
  run: |
    arc env --add --env staging S3_REGION '${{ vars.S3_REGION }}'
    arc env --add --env staging APP_SITE  '${{ vars.APP_SITE }}'

- name: Deploy to staging
  if: github.ref == 'refs/heads/dev'
  run: arc deploy --staging --prune

Fits how I write apps

Remix for the app layer, Architect for the cloud layer, DynamoDB for data. Architect is a thin interface over AWS, not a new platform. The moving parts stay visible and near the code, which is why I keep it.

Why not another tool?

CDK and Terraform shine with large platforms, shared modules, and heavy multi-account setups. Serverless Framework is solid too. For this project I wanted the shortest path from routes, tables, and jobs to a running app. Architect hit that point without hiding AWS behind a new abstraction to learn and maintain.

Closing thoughts

If your day to day looks like mine — HTTP endpoints, jobs, DynamoDB queries, and a client app — Architect is a strong default. The manifest is small, the sandbox is fast, and deploys stay boring in the best way.