Capacity Planning

An overall view of resource consumption to help plan upgrades

This dashboard presents historical CPU capacity and utilization for your SQL Server instances so you can spot trends, judge current load, and predict when additional resources will be needed. Use the KPIs, charts, and summary table to compare differently sized servers on a common scale, identify sustained or growing load, and prioritize upgrades, VM rightsizing, or consolidation.

CPU History

  • The CPU History section begins with KPIs that summarize the selected instances: total available cores, average CPU utilization normalized to a single-core equivalent, and the average number of cores effectively used. Normalizing lets you compare a small VM and a large host on the same footing; for example, 20% average CPU on a 4‑core host becomes 80% when scaled to a single-core equivalent (20 * 4).
  • “Cores used” converts that normalized percentage into an estimated count of cores in use (Avg CPU% * Total Server Cores / 100). These KPIs provide a quick sense of both intensity (per-core pressure) and absolute demand.
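The two KPI formulas above can be sketched in Python (the function names and sample figures are illustrative, not part of the dashboard):

```python
def normalized_cpu_pct(avg_cpu_pct: float, total_cores: int) -> float:
    """Scale average CPU% up to a single-core equivalent."""
    return avg_cpu_pct * total_cores

def cores_used(avg_cpu_pct: float, total_cores: int) -> float:
    """Convert the normalized percentage into an estimated core count."""
    return avg_cpu_pct * total_cores / 100

# 20% average CPU on a 4-core host:
print(normalized_cpu_pct(20, 4))  # 80.0 (single-core equivalent %)
print(cores_used(20, 4))          # 0.8 cores in use
```

Both numbers describe the same load: the first expresses intensity, the second absolute demand.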

Charts and trends

  • SQL Server CPU Usage (normalized to 1 core) is a time-series view that scales each instance’s CPU to a single-core equivalent. Use it to compare intensity across instances and to see how load changes over time.
  • SQL Server Core Usage shows estimated cores in use over time. This chart helps you understand aggregate core demand, spot sustained increases, and evaluate capacity headroom.

Summary table

  • The table ties the charts to individual instances and provides:
    • SQL Instance name
    • Avg CPU Usage % over the selected interval
    • Total Server Cores assigned to the host
    • CPU Usage % (Normalized) = Avg CPU% * Total Server Cores
    • Cores Used (Normalized) = (Avg CPU% * Total Server Cores) / 100
  • Use the table to rank instances by absolute CPU usage and to identify machines that would benefit from deeper investigation or workload redistribution.

Interpreting results and next steps

  • Look for rising trends in normalized CPU% or sustained high cores-used values; these indicate growing pressure that should be addressed before performance degrades.
  • For sustained high per-core utilization, investigate top CPU queries, parallelism settings, or workload placement. For high aggregate core demand, consider adding cores, resizing VMs, or offloading noncritical workloads.
  • Correlate CPU trends with memory, I/O, and wait-type dashboards to form a complete capacity plan; allow headroom (commonly 20–30%) for growth and transient spikes unless autoscaling is available.
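To turn a rising trend into a planning date, one common approach is a linear projection of cores-used samples against capacity minus headroom. This is a minimal sketch, not the dashboard's method; the sample data, 8-core capacity, and 30% headroom are assumptions:

```python
def days_until_threshold(samples, capacity_cores, headroom=0.3):
    """Fit a least-squares line to (day, cores_used) samples and estimate
    the day on which usage reaches capacity * (1 - headroom).
    Returns None if usage is flat or falling."""
    n = len(samples)
    sx = sum(d for d, _ in samples)
    sy = sum(u for _, u in samples)
    sxx = sum(d * d for d, _ in samples)
    sxy = sum(d * u for d, u in samples)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    if slope <= 0:
        return None
    limit = capacity_cores * (1 - headroom)
    return (limit - intercept) / slope

# Hypothetical daily cores-used samples on an 8-core host:
samples = [(0, 3.0), (30, 3.6), (60, 4.2), (90, 4.8)]
print(days_until_threshold(samples, capacity_cores=8))  # ~130 days
```

A straight-line fit is only a first approximation; seasonal patterns or step changes in workload call for a longer observation window before acting.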

Controls and tips

  • Adjust the dashboard time range to reveal hourly, daily, or weekly trends depending on your planning horizon. Filter by environment (prod/test) or cluster (application or domain) to focus the analysis. Combine these views with query stats dashboards to identify root causes before changing capacity.

Data & Log Size

This section tracks historical data and log file growth so you can spot rapidly growing databases and plan storage capacity or maintenance.

Top: Data & Log KPIs

  • Initial Data Size: database data file size at the start of the selected interval (GB).
  • Latest Data Size: most recent data file size (GB).
  • Data Growth: growth between initial and latest sizes.
  • Initial Log Size: log file size at the start of the interval (GB).
  • Latest Log Size: most recent log file size (GB).
  • Log Growth: growth for the log file.
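Each growth KPI is the delta between the latest and initial size over the interval; a minimal sketch with made-up figures:

```python
def growth_gb(initial_gb: float, latest_gb: float) -> float:
    """Growth between the initial and latest file size over the interval."""
    return latest_gb - initial_gb

# Hypothetical database:
print(growth_gb(initial_gb=120.0, latest_gb=150.0))  # 30.0 GB data growth
print(growth_gb(initial_gb=10.0, latest_gb=16.5))    # 6.5 GB log growth
```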

Charts

  • Data Size over time: time-series chart showing data file size changes for selected servers. Use it to detect steady growth or sudden jumps.
  • Log Size over time: time-series chart showing log file size trends.

Summary table

  • Columns:
    • SQL Instance
    • Database
    • Initial Data Size
    • Latest Data Size
    • Data Growth
    • Initial Log Size
    • Latest Log Size
    • Log Growth
  • Use the table to rank databases by growth, to find candidates for archiving, compression, index maintenance, or retention-policy changes, and to plan storage purchases or quota adjustments.

Interpretation and actions

  • Rapid, sustained data growth may indicate new workloads, retention changes, or missing cleanup jobs — investigate recent deployments and ETL processes.
  • Large or growing log files often point to long-running transactions, infrequent log backups (in full recovery), or heavy bulk operations — review backup and recovery settings and transaction patterns.
  • Use time-range filters to focus on growth windows (daily, weekly, monthly) and sort the table by growth to prioritize action.

Disk Usage

This section shows historical disk latency and throughput so you can spot I/O bottlenecks and capacity issues affecting database performance.

Top: Disk KPIs

  • Avg Read Latency: average read latency observed over the selected interval (ms).
  • Avg Read Bytes/sec: average read throughput (bytes/sec).
  • Avg Write Latency: average write latency observed over the selected interval (ms).
  • Avg Write Bytes/sec: average write throughput (bytes/sec).

Charts

  • Avg Read/Write Latency: time-series chart showing read and write latency over time. Use this to spot periods of elevated latency and correlate them with workload changes.
  • Total Throughput: time-series chart of Read Bytes/sec and Write Bytes/sec. Use this to identify sustained high throughput or bursts that may saturate the storage subsystem.

Disk Usage Summary table

  • Columns:
    • SQL Server Instance
    • Avg Read Latency
    • Read Bytes/sec
    • Avg Write Latency
    • Write Bytes/sec
  • The table summarizes latency and throughput per instance so you can rank servers by I/O pressure and prioritize investigation or remediation.
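Ranking by I/O pressure can be sketched as below; the instance names, figures, and the choice of "worse of the two latencies" as the sort key are all assumptions for illustration:

```python
# Hypothetical per-instance rows mirroring the Disk Usage Summary table.
rows = [
    {"instance": "SQL01", "avg_read_ms": 4.0, "avg_write_ms": 2.0},
    {"instance": "SQL02", "avg_read_ms": 18.0, "avg_write_ms": 25.0},
    {"instance": "SQL03", "avg_read_ms": 9.0, "avg_write_ms": 7.0},
]

# Rank by the worse of the two latencies so the most pressured server leads.
ranked = sorted(rows, key=lambda r: max(r["avg_read_ms"], r["avg_write_ms"]),
                reverse=True)
print([r["instance"] for r in ranked])  # ['SQL02', 'SQL03', 'SQL01']
```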

Interpretation and actions

  • Elevated read or write latency often points to storage contention, slow disks, or high queue depth; correlate with throughput and queue metrics.
  • High read throughput with low latency may indicate healthy caching; high throughput with rising latency suggests the cache is saturated or the storage tier is overloaded.
  • For write-heavy workloads, review transaction patterns, log placement, and backup frequency; consider faster storage or write-optimized tiers.
  • Use instance filters and time-range controls to isolate problematic windows and correlate with query, CPU, and I/O dashboards before changing hardware or storage tiers.

Memory Usage

This section shows memory demand and headroom so you can detect pressure on the buffer pool and plan memory changes or workload placement.

Top: Memory KPIs

  • Avg Allocated Memory: average memory allocated to SQL Server over the selected interval (MB or GB).
  • Max Allocated Memory: peak memory allocation observed during the interval.
  • Avg Target Memory: average target memory SQL Server attempted to obtain (based on internal heuristics / memory clerk targets).
  • Max Target Memory: peak target memory during the interval.

Charts

  • Memory usage over time: time-series chart showing Avg Allocated Memory, Max Allocated Memory, Avg Target Memory, and Max Target Memory. Use this to spot sustained allocation near target (indicating pressure) or sudden spikes in demand.

Summary table

  • Columns:
    • SQL Instance
    • Avg Allocated Memory
    • Max Allocated Memory
    • Avg Target Memory
    • Max Target Memory
  • The table provides a per-instance snapshot to help rank servers by memory consumption and to identify candidates for resizing or investigation.

Interpretation and actions

  • Large gaps between Target and Allocated Memory may indicate internal limits; investigate OS signals, memory clerks, and max memory configuration.
  • Correlate memory trends with CPU and I/O dashboards to determine whether adding memory will reduce I/O or improve overall performance before changing server sizing.
  • Use time-range filters to identify daily or weekly patterns and to size for peak demand with appropriate headroom.
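The target-vs-allocated comparison described above can be sketched as follows (instance figures and the 10% gap threshold are assumptions, not dashboard defaults):

```python
def memory_gap_pct(allocated_mb: float, target_mb: float) -> float:
    """Gap between target and allocated memory as a percentage of target."""
    return (target_mb - allocated_mb) / target_mb * 100

# Hypothetical instances: (allocated, target) in MB.
instances = {"SQL01": (61_440, 65_536), "SQL02": (40_960, 65_536)}
for name, (alloc, target) in instances.items():
    gap = memory_gap_pct(alloc, target)
    flag = "investigate limits" if gap > 10 else "ok"
    print(f"{name}: gap {gap:.1f}% -> {flag}")
```

A small gap means SQL Server is obtaining roughly what it targets; a large, persistent gap is the signal to check OS memory signals, memory clerks, and the max server memory setting.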