I use this kind of model early in projects to keep architecture debates grounded in real numbers; it prevents awkward budget conversations later. Cloud migration cost modeling should happen before architecture decisions are locked. If you wait until the end, you lose the ability to steer. A basic model is usually enough to compare options and avoid expensive surprises.
Build a baseline by exporting current billing data and mapping it to workloads. Identify the top cost drivers and whether they scale with traffic, data volume, or time. Track both average and peak usage. Many migrations fail to account for peak sizing, which can double the estimate.
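As a rough sketch, a few lines of Python can turn a billing export into that baseline. The file name and the column names (workload, month, cost_usd) are assumptions for illustration, not any provider's actual export format.

```python
# A minimal sketch of building a baseline from a billing export.
# The CSV columns (workload, month, cost_usd) and the file name are
# assumptions for illustration, not a real provider's export format.
import csv
from collections import defaultdict

def build_baseline(path: str) -> dict[str, dict[str, float]]:
    """Sum monthly costs per workload and keep the average and peak month."""
    monthly = defaultdict(lambda: defaultdict(float))  # workload -> month -> cost
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            monthly[row["workload"]][row["month"]] += float(row["cost_usd"])

    baseline = {}
    for workload, months in monthly.items():
        costs = list(months.values())
        baseline[workload] = {
            "avg_monthly": sum(costs) / len(costs),
            "peak_monthly": max(costs),  # peak sizing often drives the target estimate
        }
    return baseline

# Example: baseline = build_baseline("billing_export.csv")
```

Keeping both the average and the peak per workload makes the peak-sizing gap visible from day one instead of surfacing during load testing.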
Translate to the target environment. Map current services to equivalents in the target cloud and estimate unit costs. Include compute, storage, data transfer, and managed service pricing. Add a buffer for unknowns and expected growth. Document assumptions so the model can be updated when new data appears. If a number is a guess, call it a guess. That honesty makes later updates easier.
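Here is one way to encode that mapping so the buffer and the guesses stay explicit. The service names, unit prices, and the 15 percent buffer are placeholders, not real cloud pricing or a recommended margin.

```python
# A minimal sketch of a current-to-target mapping with an explicit buffer.
# Service names and unit prices are placeholders, not real cloud pricing.
TARGET_MAPPING = {
    # current component: (target equivalent, estimated monthly cost, confidence)
    "web VMs":      ("2x small instances", 120.0, "quote"),
    "api VMs":      ("2x medium instances", 160.0, "quote"),
    "postgres":     ("managed DB, 2 vCPU / 8 GB", 220.0, "guess"),
    "object store": ("standard object storage", 60.0, "extrapolated"),
    "egress":       ("internet + cross-region transfer", 90.0, "guess"),
}

BUFFER = 1.15  # 15% for unknowns and near-term growth (placeholder)

def target_monthly_estimate(mapping: dict, buffer: float = BUFFER) -> float:
    """Sum estimated target costs and apply the buffer for unknowns."""
    return buffer * sum(cost for _, cost, _ in mapping.values())

if __name__ == "__main__":
    print(f"Estimated target monthly cost: ${target_monthly_estimate(TARGET_MAPPING):,.2f}")
    guesses = [k for k, (_, _, conf) in TARGET_MAPPING.items() if conf == "guess"]
    print("Line items that are guesses:", ", ".join(guesses))
```

The confidence tag per line item is what makes "call a guess a guess" actionable: reviewers can sort by it and attack the weakest numbers first.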
One-time costs matter. Include migration effort, tooling, and the cost of running two environments in parallel during validation. Parallel run time is often the largest hidden cost and should be listed explicitly. If there are licensing changes, include those as well. I would rather overestimate and be corrected than underestimate and have to explain a surprise.
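A small sketch of the one-time bucket, with the parallel run as its own line item; every figure in the example call is a placeholder.

```python
# A minimal sketch of the one-time cost bucket. All example figures are placeholders.
def one_time_costs(
    migration_effort_days: float,
    day_rate: float,
    tooling: float,
    licensing_delta: float,
    parallel_run_months: float,
    combined_monthly_cost: float,  # old + new environments running together
) -> dict[str, float]:
    """Return one-time costs, with the parallel run listed explicitly."""
    items = {
        "migration effort": migration_effort_days * day_rate,
        "tooling": tooling,
        "licensing changes": licensing_delta,
        "parallel run": parallel_run_months * combined_monthly_cost,
    }
    items["total"] = sum(items.values())
    return items

# Example: one_time_costs(40, 800, tooling=5_000, licensing_delta=2_000,
#                         parallel_run_months=2, combined_monthly_cost=1_900)
```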
Present the model clearly. Use a simple table with current cost, target cost, and one-time cost. Add a short notes section that lists assumptions and risk factors. This makes it easier for stakeholders to approve the plan and for engineers to choose cost-aware designs. Review the model at key milestones. Update it after the inventory phase, after the first migration wave, and before final cutover. That cadence keeps it honest.
A cost model does not need to be perfect. It needs to be transparent enough to guide decisions while the design is still flexible. Look for cost drivers hidden in data transfer. Cross-region replication, analytics exports, and logging can add large, recurring fees. If you model only compute and storage, your estimate will be optimistic.
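Modeling transfer as its own set of line items keeps it from disappearing into "miscellaneous." The per-GB rates and volumes below are placeholders to be replaced with the target provider's published pricing and your own metrics.

```python
# A minimal sketch of data transfer line items. Per-GB rates and volumes are
# placeholders, not any provider's published pricing.
TRANSFER_RATES = {  # USD per GB
    "internet egress": 0.09,
    "cross-region replication": 0.02,
    "log shipping to analytics": 0.05,
}

MONTHLY_VOLUME_GB = {  # estimated GB moved per month, from current metrics
    "internet egress": 800,
    "cross-region replication": 2_500,
    "log shipping to analytics": 1_200,
}

transfer_monthly = {
    item: MONTHLY_VOLUME_GB[item] * rate for item, rate in TRANSFER_RATES.items()
}
print(transfer_monthly, "total:", round(sum(transfer_monthly.values()), 2))
```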
Use conservative assumptions for capacity and growth. If you expect 20 percent growth, model 30 percent. Migration projects often grow in scope and timeline, so a conservative model helps prevent later budget shocks. Tie cost decisions to architecture choices. If a managed database is too expensive, propose alternatives and document the trade-offs. A cost model is most useful when it informs design, not when it only reports numbers.
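The gap between an expected and a padded growth rate compounds over the year, which is why the extra padding is cheap insurance. A short worked sketch, with a placeholder baseline:

```python
# A minimal sketch of padding the growth assumption. Figures are placeholders.
def projected_annual_spend(monthly_baseline: float, annual_growth: float) -> float:
    """Total spend over 12 months with growth compounded monthly."""
    monthly_rate = (1 + annual_growth) ** (1 / 12) - 1
    return sum(monthly_baseline * (1 + monthly_rate) ** m for m in range(12))

for label, growth in [("expected 20%", 0.20), ("padded 30%", 0.30)]:
    print(f"{label}: ${projected_annual_spend(10_000, growth):,.0f} over 12 months")
```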
Example in practice: a three-tier app cost model
Imagine a SaaS app with a web tier, API tier, and a PostgreSQL database. Today it runs on three VMs and a managed database. Start with a simple table that lists the current monthly costs for compute, storage, and data transfer. Then map each part to the target cloud. For example, two medium instances for the API tier, a managed database with 2 vCPU and 8 GB RAM, and a CDN for static assets. Add a line for egress, because moving logs and backups can be a real cost.
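Written down as data, the mapping for this hypothetical app might look like the sketch below. The instance sizes come from the description above; the monthly figures are placeholders to be filled in from the target provider's price list.

```python
# A minimal sketch of the three-tier mapping as data. Monthly figures are
# placeholders to be replaced with prices from the target provider's price list.
THREE_TIER_MODEL = [
    # (component, current monthly, target equivalent, target monthly)
    ("web tier (1 VM)",        150.0, "1 small instance + CDN for static assets", 130.0),
    ("api tier (2 VMs)",       320.0, "2 medium instances",                       280.0),
    ("database (managed)",     400.0, "managed PostgreSQL, 2 vCPU / 8 GB RAM",    360.0),
    ("storage + backups",       90.0, "object storage + snapshot backups",         80.0),
    ("egress (logs, backups)",   0.0, "internet + cross-region egress",            70.0),
]

current_total = sum(row[1] for row in THREE_TIER_MODEL)
target_total = sum(row[3] for row in THREE_TIER_MODEL)
print(f"current ${current_total:,.0f}/mo -> target ${target_total:,.0f}/mo")
```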
Add usage drivers for each tier. For the API tier, the driver might be requests per minute and average CPU per request. For storage, the driver is total GB plus growth per month. This makes the model usable when traffic changes or when product asks for a longer retention period.
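One way to wire those drivers into the model is sketched below. The coefficients (CPU-seconds per request, vCPU price, price per GB, headroom) are assumptions that would come from current metrics and the target price list, not fixed constants.

```python
# A minimal sketch of driver-based estimates. The coefficients are assumptions
# drawn from current metrics and the target price list, not fixed constants.
import math

def api_tier_monthly(requests_per_min: float, cpu_seconds_per_request: float,
                     vcpu_price_per_month: float = 35.0,
                     headroom: float = 1.3) -> float:
    """Size vCPUs from request volume and CPU per request, then price them."""
    cpu_seconds_per_min = requests_per_min * cpu_seconds_per_request * headroom
    vcpus_needed = math.ceil(cpu_seconds_per_min / 60)  # 1 vCPU ~= 60 CPU-seconds/min
    return vcpus_needed * vcpu_price_per_month

def storage_monthly(current_gb: float, growth_gb_per_month: float,
                    months_ahead: int, price_per_gb: float = 0.10) -> float:
    """Price storage at the size expected a few months out, not today's size."""
    return (current_gb + growth_gb_per_month * months_ahead) * price_per_gb

print(api_tier_monthly(requests_per_min=1_200, cpu_seconds_per_request=0.05))
print(storage_monthly(current_gb=500, growth_gb_per_month=40, months_ahead=6))
```

When product later asks for longer retention, only the storage driver changes and the rest of the model stays intact.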
Sensitivity checks help. Run two or three scenarios. A conservative case assumes 30 percent traffic growth, a base case uses current averages, and a worst case uses peak traffic. If a single service drives most of the cost, call it out and provide an option, like switching to reserved capacity or using a smaller database class with read replicas.
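A sensitivity pass can reuse the same driver functions and simply vary the inputs. The multipliers below mirror the three cases described above; the stand-in cost function and its coefficients are placeholders.

```python
# A minimal sketch of running the same model under three scenarios.
# The multipliers mirror the cases above: base, 30% growth, and peak traffic.
def monthly_cost(requests_per_min: float, storage_gb: float) -> float:
    """Stand-in for the full model; replace with the real driver functions."""
    return requests_per_min * 0.03 + storage_gb * 0.10  # placeholder coefficients

SCENARIOS = {
    "base (current averages)":   {"traffic": 1.0, "storage": 1.0},
    "conservative (30% growth)": {"traffic": 1.3, "storage": 1.3},
    "worst case (peak traffic)": {"traffic": 2.5, "storage": 1.3},
}

for name, s in SCENARIOS.items():
    cost = monthly_cost(1_200 * s["traffic"], 500 * s["storage"])
    print(f"{name}: ${cost:,.0f}/month")
```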
Pitfalls to watch for
- Ignoring one-time costs like migration tooling, data backfills, and parallel run time.
- Assuming current utilization is a stable baseline even though peak traffic is 2x or 3x higher.
- Missing data transfer, especially cross-region replication and analytics exports.
- Forgetting operational costs like support tools, incident response, and compliance scanning.
- Treating the model as a one-off spreadsheet instead of a living artifact.
Quick checklist
- List current costs by service and map each to a target equivalent.
- Document usage drivers and growth assumptions per workload.
- Include peak traffic sizing and at least one stress scenario.
- Capture one-time costs and parallel run periods.
- Identify the top three cost drivers and alternatives for each.
- Review and update the model at each migration milestone.
Model structure and ownership: A cost model stays useful when it has an owner and a clear structure. Keep it short and opinionated. One sheet or page for inputs, one for calculations, and one for outputs. Each input should have a source, a date, and a confidence level. If a number is a guess, label it. When an estimate changes, update the date and explain why. This avoids silent drift and makes reviews faster.
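A small record type keeps the source, date, and confidence attached to every input. The field names below are one possible convention, not a standard.

```python
# A minimal sketch of a model input that carries its own provenance.
# Field names are one possible convention, not a standard.
from dataclasses import dataclass
from datetime import date

@dataclass
class ModelInput:
    name: str
    value: float
    unit: str
    source: str          # billing export, vendor quote, or "guess"
    as_of: date          # when the number was last confirmed
    confidence: str      # "high", "medium", or "low"
    note: str = ""       # why it changed, when it changes

db_storage = ModelInput(
    name="database storage",
    value=500,
    unit="GB",
    source="billing export",
    as_of=date.today(),
    confidence="high",
)
```

An input without a source or date is exactly the kind of number that drifts silently; forcing the fields makes the gap visible at review time.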
Tie each line item to a workload owner. When the database line grows, the database owner can explain whether that is expected or a symptom of poor tuning. That accountability also helps when product or finance asks for changes.
Metrics to validate after migration: Once workloads move, compare the model to real costs. Track cost per request, cost per GB stored, and cost per active customer. If the actual numbers are higher, identify whether it is usage growth, pricing differences, or missing line items. Use those findings to adjust the model and prevent the same gaps in later phases.
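After the first wave, the comparison can be as simple as a few unit-cost ratios. The metric names follow the ones above; the modeled and actual figures in the example are placeholders, not real results.

```python
# A minimal sketch of post-migration unit metrics. The example figures are
# placeholders; real inputs come from the bill and the product's usage metrics.
def unit_metrics(monthly_cost: float, requests: float, stored_gb: float,
                 active_customers: float) -> dict[str, float]:
    return {
        "cost_per_1k_requests": monthly_cost / (requests / 1_000),
        "cost_per_gb_stored": monthly_cost / stored_gb,
        "cost_per_active_customer": monthly_cost / active_customers,
    }

modeled = unit_metrics(monthly_cost=2_400, requests=50_000_000,
                       stored_gb=700, active_customers=1_200)
actual = unit_metrics(monthly_cost=2_900, requests=52_000_000,
                      stored_gb=760, active_customers=1_250)

for metric, planned in modeled.items():
    drift = (actual[metric] - planned) / planned * 100
    print(f"{metric}: modeled {planned:.3f}, actual {actual[metric]:.3f} ({drift:+.0f}%)")
```

If the unit costs drift but usage is flat, the gap is a modeling or pricing problem; if usage grew, the model may simply need new driver values.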



