Core Tasks

Step-by-step guides for common QMonitor tasks

This section provides step-by-step instructions for setting up and using QMonitor.

You will learn how to:

  • Install and configure an Agent
  • Register SQL Server instances
  • Monitor instance health and performance
  • Set up alerts and manage issues
  • Use dashboards to analyze data

Follow these guides to get the most out of QMonitor.

1 - Sign Up

Register your email address in QMonitor

Go to the login page located at https://portal.qmonitorapp.com and click on the “Register” button.

You will be taken to the registration form, where you can enter your email and your chosen password. The password must be at least 20 characters long. Using a password manager to generate and store your password is strongly advised.

We use a captcha validation system to protect our systems from bots. If you don’t pass the captcha validation, a message appears on the form. Please try entering your credentials again; interacting with the page a bit more, as a human would, can help the captcha distinguish you from a bot.

After you submit the form, our servers will process your registration request and send a confirmation email to the address that you entered. The email will contain a link that you can click to verify your email.

Please allow a couple of minutes for the email to reach your inbox. If you don’t receive the email within a couple of minutes, please check your spam folder: we do our best to stay out of spam, but sometimes it just happens. If the confirmation email is still nowhere to be found, you can request a new one by visiting the “Resend email confirmation” page at https://portal.qmonitorapp.com/Identity/Account/ResendEmailConfirmation

Once your email is verified, you can proceed to log in to QMonitor using the credentials that you provided.

2 - Log In

Enter your credentials to access QMonitor

Follow these steps to log in to QMonitor:

  1. Enter your email address
  2. Enter your password
  3. (Optional) Check “Remember Me?” to stay signed in after closing your browser

Two-Factor Authentication (2FA)

If you have enabled 2FA for your account:

  • You will see a prompt to enter a code
  • Open your authenticator app
  • Enter the code shown in the app

You can enable or disable 2FA on the Account page.

Forgot Your Password?

If you forgot your password:

  1. Click “Forgot your Password?” on the login page
  2. Check your email for a password reset link
  3. Follow the link to create a new password

We strongly suggest the use of a password manager.

3 - Set up an organization

Create an organization to start collecting data from your instances

When you log in to QMonitor for the first time you are met with a welcome page that will ask you if you want to create a new organization or join an existing one. In order to monitor and maintain SQL Server instances you will need to be part of an organization.

Create a new organization

When you create a new organization, you will simply be asked to provide a name for it. The name that you enter will be “sanitized” by QMonitor, in order to remove any special characters that might make it hard to work with. The name that you entered will still be used as-is as the display name of your organization, but the internal name of the organization will be the sanitized one. QMonitor will also generate a code called the “Organization Key”, which you will have to provide during the setup of a new agent. Please note down carefully both the organization name and the organization key, as you will need them later. For this purpose, the use of a password manager is highly recommended. Before you proceed, you will have to confirm that your codes are safely stored by clicking the corresponding checkbox on the form.

Another important piece of information required to create your organization is the location of the data: you can choose one of the available regions (EU and US for the time being, with more coming in the near future). Please choose the region that matches your needs, not only in terms of latency and bandwidth, but also in terms of regulatory constraints. While we do not store any sensitive data, your company might be subject to specific policies that require all data to be stored within the boundaries of a particular region. Please check with your CISO to make sure that you choose the correct option. There is currently no option to move your organization from one region to another.

When you are ready, click the “Create new Organization” button. Creating an organization takes around 20 seconds: please wait until the process completes. You will be met with a welcome message and a quick help dialog window that provides information on how to register your instances and start monitoring them.

4 - Join an Existing Organization

Get an invite to an Organization and use it to gain access

When you log in to QMonitor for the first time, you see a welcome page. This page asks if you want to create a new Organization or join an existing one. You must be part of an Organization to monitor and manage SQL Server instances.

Join an Organization

To join an existing Organization, you need an invitation from the Organization owner.

For Organization Owners: How to Invite New Users

  1. Go to the Members page in the Organization settings
  2. Enter the email addresses of the users you want to invite
  3. Send the invitations

Important: New users must register for QMonitor before they can join your Organization. They should:

  • Create a QMonitor account first
  • Then accept your invitation to join the Organization

This is how your coworkers gain access to the Organizations you create.

5 - Working with Agents

What are agents and what you can do with them

This section will help you set up QMonitor to accomplish the most common tasks, such as installing an agent, registering an instance, and starting the data collection.

5.1 - Create an Agent

Create an agent to start collecting metrics

An agent is the service that takes care of collecting metrics from your instances and uploading them to our servers in the cloud. Depending on your infrastructure, you will need one or more agents, each in charge of a different set of instances. In general, an agent can collect metrics from any number of instances, as long as it is capable of contacting them, so you usually have one agent per data center.

You can create, rename or delete agents from the Instances page, with the buttons at the top or next to the name of the agent. Every organization always contains a Default agent, ready to be installed and used. Next to the name of each agent there is a status icon that indicates whether the agent is running or not. Click on the status icon to display additional information about the agent: the name of the host it is running on, the service account and the version of the software.

Before you can add your SQL Server instances to QMonitor, you will have to install and configure at least one agent and then you will be able to register and add your instances to that agent.

5.2 - Installation Options

Understand the different possible installation options

You do not need to install the agent on each instance that you want to monitor; in fact, installing an agent on the same machine as a monitored instance is not recommended. On the other hand, while a dedicated machine specifically for the QMonitor agent is ideal, it is not a requirement. The agent itself is not resource-intensive and does not require vast amounts of CPU, RAM, or disk.

The QMonitor agent runs on Windows and Linux and can also be run as a container: you are free to choose the installation target that fits your needs.

5.3 - Network Setup

Configure your network to allow traffic from the QMonitor agent

Before you install an agent, please review the network requirements and make sure that your setup fulfills them.

The QMonitor agent needs to connect to your SQL Server instances, so make sure that the machine that hosts the agent has the appropriate network access to the TCP/IP ports the instances are listening on. This is usually port 1433 for default instances, but the default port can be changed and assigned statically or dynamically. Please check your instance network configuration to identify which ports to open on your servers.
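
If you are unsure which port an instance is listening on, one illustrative way to check (it requires VIEW SERVER STATE) is to run this query on the instance itself:

-- List the TCP ports the instance is currently accepting connections on
SELECT DISTINCT local_tcp_port
FROM sys.dm_exec_connections
WHERE local_tcp_port IS NOT NULL;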

The agent will also need to upload all the metrics it collects to our servers in the cloud: please make sure that the machine where the agent runs can connect to the SSL port (443) on gateway.qmonitorapp.com. Configure your firewall to allow the connection to the host name rather than the IP address, as we use multiple servers in our gateway and your agent might be assigned different IP addresses at different times of the day.

The agent periodically checks for updates by querying our servers for the latest available version of the software. When a new version is detected, the agent downloads and installs it from our software distribution network. To allow this process to complete successfully, please make sure that you configure your network to allow connections to the host static.qmonitorapp.com, again on port 443. The same advice applies to this host as well: use the host name rather than the IP address when configuring your network, as we use a CDN that might serve the contents from various source IP addresses.

On Windows, you can test whether your agent machine is configured correctly by running this PowerShell script:

'gateway.qmonitorapp.com', 'static.qmonitorapp.com' | 
    Test-NetConnection -Port 443 | 
    Select ComputerName, TcpTestSucceeded

What you want to see is the following result:

ComputerName            TcpTestSucceeded
------------            ----------------
gateway.qmonitorapp.com             True
static.qmonitorapp.com              True

On Linux you can test using bash:

timeout 1 bash -c '</dev/tcp/gateway.qmonitorapp.com/443 && echo Port is open || echo Port is closed' || echo Connection timeout
timeout 1 bash -c '</dev/tcp/static.qmonitorapp.com/443 && echo Port is open || echo Port is closed' || echo Connection timeout

The output should be “Port is open” for both hosts.

If you see a different output, please investigate any connectivity issues with your network team.

5.4 - Agent Components

Agent components and log files

The QMonitor agent consists of multiple components, each writing to its own log file.

  • QMonitor agent - this is the main component and the entry point of the background service. It takes care of connecting to the QMonitor servers, retrieving the configuration, and starting the data collection accordingly. The executable is called Quantumdatis.QMonitor.Agent.exe and it is the one invoked by the Windows service.

This component is located inside the QMonitor.Agent folder under the installation folder of QMonitor (usually C:\Program Files\QMonitor\QMonitor.Agent). The logs are located in the logs subfolder and may contain useful information to troubleshoot your agent setup. The log files follow the naming pattern <organization>_<agent>-log-<timestamp>.txt

  • Telegraf - this is the data collection agent, which connects to the SQL Server instances and runs the data collection queries. It also caches the metrics locally and uploads them regularly to our gateway in the cloud.

This component is located directly in the QMonitor installation folder and the executable is called telegraf.exe. The logs are found under the logs folder and follow the naming pattern telegraf_<organization>_<agent>.log

  • XeSmartTarget - This is a component that takes care of streaming the events from the monitored instances to our gateway in the cloud. The executable (xesmarttarget.exe) is found in the main installation path of QMonitor and its logs are found in the corresponding logs subfolder, with the naming pattern xesmarttarget_<organization>_<agent>.log

  • Autoupdater - This component ensures that your QMonitor agent stays always up to date. It runs in the background to query our servers for newer versions of the software and it downloads and runs the setup in case a new version is found. Two executables are involved: autoupdater.exe and updaterkickstarter.exe, both found in the main installation folder of QMonitor. The logs can be found in the logs directory.

5.5 - Installing the Agent

Installation steps for the QMonitor Agent

To start the QMonitor agent installation, download the setup kit for your operating system (Windows or Linux), copy it to the target machine, and run it. Running the agent in a container does not require installation steps and is described in “Running the QMonitor agent in a container”. The installation copies files to the chosen installation directory and requires no user input except for that directory. Additional configuration is required to authenticate the agent to your organization and to run it as a background service.

Setting up the agent on Windows

On Windows we provide two tools to configure the agent. ConfigWizard is a GUI for users who prefer a visual flow. ConfigWizardCmd is a CLI tool offering the same functionality in a scriptable, repeatable form.

Using ConfigWizard

Open the Start menu, type QMonitor, and launch the QMonitor ConfigWizard. ConfigWizard displays the organizations configured on the machine in a dropdown at the top of the window.

To add an organization, click the “+” button and enter the organization name and the organization key you obtained when creating the org.

If you lost the organization key, regenerate it from the Settings page: https://portal.qmonitorapp.com/settings. Warning: regenerating the org key invalidates the old key and causes all existing agents to stop working until they are reconfigured with the new key.

Note: the machine-safe organization name may differ from the display name. You can copy the machine-safe name from the Settings page if needed.

After the server validates the org name and key, agent names appear in the agents dropdown and you can configure agent services on this machine. The UI shows whether an agent is installed and the service account in use.

To install an agent, click the service account link, enter service credentials, and press “Install”. This creates a Windows service configured to start automatically. By default the service runs as NT AUTHORITY\Network Service.

Using ConfigWizardCmd

ConfigWizardCmd provides the same capabilities as the GUI in a CLI form that is suitable for automation. See the ConfigWizardCmd reference for parameter details and examples.

For instance, you can use a similar syntax to install an agent service:

ConfigWizardCmd.exe install --org your_organization_name --key your_organization_key --agent your_agent_name

Additional considerations

The service account is important because it is used to authenticate the agent to SQL Server instances when using Integrated Security (Windows auth). Using Integrated Security is recommended as it avoids storing passwords.

If the service runs as NT AUTHORITY\Network Service it authenticates on the network using the computer account (DOMAIN\ComputerName$). If the SQL Server instance is local to the agent host, authentication uses the Network Service account on that machine.
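
If you keep the default account, the remote instance needs a login for the agent host’s computer account. A minimal sketch, where DOMAIN\ComputerName$ is a placeholder for that account:

-- Hypothetical example: allow the agent's computer account to connect to a remote instance
CREATE LOGIN [DOMAIN\ComputerName$] FROM WINDOWS;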

Windows may show a localized name for the Network Service account (for example, “NT AUTHORITY\Servizio di Rete”). To get the localized account name, run this PowerShell command:

(New-Object System.Security.Principal.SecurityIdentifier "S-1-5-20").Translate([System.Security.Principal.NTAccount])

Example output from an Italian system:

Value
-----
NT AUTHORITY\SERVIZIO DI RETE

When ConfigWizard or ConfigWizardCmd complete setup, the agent starts and begins collecting data for associated SQL Server instances. The agent also contacts QMonitor servers to report its state. On the Instances page click the green/red icon next to an agent to view its state, service account, and agent version.

Setting up the agent on Linux

ConfigWizard is not available on Linux; configure the agent using ConfigWizardCmd. See the CLI documentation for required parameters and examples.
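
Assuming the same parameters as on Windows, an installation command might look like this sketch (verify the exact syntax in the CLI documentation):

./ConfigWizardCmd install --org your_organization_name --key your_organization_key --agent your_agent_name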

On Linux, the agent cannot impersonate Windows users, so Integrated Security is not available. Entra (Managed Identity) authentication may be available if the Linux VM or container supports a managed identity.

Running the QMonitor agent in a container

The QMonitor agent runs easily in a container. Configure the container with a few environment variables that provide secrets and configuration options.

An example command line for docker looks like this:

docker run -d -e OrganizationName=your_organization_name -e OrganizationKey=your_organization_key -e AgentName=your_agent_name qmonitor/agent:latest

6 - Register a SQL Server Instance

Add a SQL Server Instance to QMonitor

When your agent is installed, it appears on the Instances page with a “running” status. You are now ready to register a SQL Server instance.

Click the “New Instance” button at the top. You will be taken to a page where you can enter the details for your instance.

Connection String

The most important information is the connection string that the agent uses to contact and query your SQL Server instance. Click the edit button next to the connection string to open a dialog that helps you enter all required information. If you already have a complete connection string, you can paste it in the dialog.

  • Instance name: The name of the SQL Server instance. For default instances, this is the server name where SQL Server is running. For named instances, use the format server\instance. The host name you enter must be resolvable by DNS on the machine where the agent runs. Make sure name resolution works correctly. Use fully qualified domain names (FQDN) if required by your network setup.
  • Port: (Optional) Enter the port number if your SQL Server instance is not running on port 1433 and the instance name cannot be resolved to a TCP port by the SQL Server Browser service. You can leave this field blank most of the time.
  • Authentication: QMonitor supports three authentication methods:
    • SQL Server Authentication: Uses the username and password you enter in the form. Your credentials are part of the connection string and stored encrypted in our database.
    • Active Directory - Integrated (Windows Authentication): The easiest and safest option. The agent contacts the SQL Server instance using the Windows service account it runs under. No passwords need to be entered or stored.
    • Active Directory - Managed Identity: Uses an Azure Managed Identity to connect to SQL Server instances. This is also a safe option for running the QMonitor agent on an Azure VM or in an Azure Container App with a User-Assigned Managed Identity. See the documentation for Azure VMs and Azure Container Apps to learn about configuring a Managed Identity for your services.
  • Additional connection parameters: Enter any connection string properties that cannot be entered in a specific field. The connection string format must comply with the .NET connection string format (property=value). For a complete list of properties and values, see the .NET documentation.
    If you have an existing connection string to paste, enter it here. It will be parsed and all properties will be automatically placed in the corresponding text fields.
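
As an illustration, a complete connection string in the .NET format might look like this (the server name is a placeholder):

Server=SQL01.contoso.com,1433;Integrated Security=SSPI;Encrypt=True;TrustServerCertificate=False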

When your connection string is ready, click the Verify button on the right. A dialog window will appear with the validation results.

Many aspects are checked at this stage:

  • Can Connect: Can the agent connect to the instance? If not, the error message is displayed in this window.
  • XE Session: QMonitor uses Extended Events to capture meaningful events from the server, such as deadlocks, blocking events, and errors. For this to work, you need to create an Extended Events session called QMonitor that captures these events. See the “Set up your SQL Server instance” section for more information.
  • Is Sysadmin: QMonitor can work without sysadmin role membership. However, sysadmin permissions ensure that the agent has access to all the DMVs it will query. It also ensures that the agent and its components, such as the Extended Events session, stay up to date. Using a sysadmin login also allows QMonitor to execute scheduled jobs that may interact with the instance.
    If you use a login without sysadmin permissions, you are responsible for granting all required permissions. QMonitor provides a setup script for this purpose. See the “Set up your SQL Server Instance” section for a detailed breakdown of the script and how to use it to prepare your instance for monitoring.
  • Permissions: In this section, you can check whether the QMonitor agent has access to all required DMVs and system tables for monitoring.
    • Sysschedules: Read access required to monitor SQL Server Agent jobs
    • Sysjobschedules: Read access required to monitor SQL Server Agent jobs
    • Syscategories: Read access required to monitor SQL Server Agent jobs
    • Sysjobs: Read access required to monitor SQL Server Agent jobs
    • Sysalerts: Read access required to monitor SQL Server Agent jobs
    • SysmailConfiguration: Read access required to monitor SQL Server Agent jobs
    • Syssessions: Read access required to monitor SQL Server Agent jobs
    • Sysjobactivity: Read access required to monitor SQL Server Agent jobs
    • Sysjobhistory: Read access required to monitor SQL Server Agent jobs
    • AgentDatetime: Read access required to monitor SQL Server Agent jobs
    • CalculateAvailableSpace: This is a scalar function created by QMonitor in the master database to calculate available space on database files. This shows as Ok if the function exists and the agent has permissions to invoke it.
    • ConnectAnyDatabase: This server-level permission allows the QMonitor agent to connect to all user databases in the instance to query database-specific DMVs. This permission does not grant access to user tables inside databases.
    • ViewServerState: This server-level permission allows QMonitor to query many DMVs to inspect the instance state.
    • ViewAnyDefinition: This server-level permission controls access to object definitions in all databases. Does not grant permissions to read data in user databases.
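
If you opt out of sysadmin membership, the server-level permissions above can also be granted individually. A minimal sketch, assuming a login named qmonitor_agent (the official setup script covers the msdb objects and the helper function as well):

-- Run in master; qmonitor_agent is a placeholder login name
GRANT VIEW SERVER STATE TO [qmonitor_agent];
GRANT CONNECT ANY DATABASE TO [qmonitor_agent];
GRANT VIEW ANY DEFINITION TO [qmonitor_agent];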

Other instance information

  • Name: This field is read-only. It contains the name that the instance returns when you query @@SERVERNAME. You cannot change this name or use a network alias like a CNAME record in your DNS. However, you can use an alias in the connection string.
  • Acknowledge to use sysadmin rights: When the QMonitor agent connects using a sysadmin login, you will be prompted to confirm that this is acceptable. Using a highly privileged login is your responsibility. This is especially important for QMonitor jobs, which will not run against this instance unless you check this box. You can also acknowledge the use of sysadmin permissions at the organization level, setting the default for new instances in the Manage Organization section.
  • Engine Edition: This field is also read-only. It contains the engine edition, as returned by SERVERPROPERTY('EngineEdition').
  • Edition: Read-only. Contains the edition of this SQL Server instance, as returned by SERVERPROPERTY('Edition'). See the page for SERVERPROPERTY for more information.
  • Description: Enter a description of your SQL Server instance in this field. Use a meaningful description that helps you search the list of instances and document what the instance is used for.
  • Tags: You can add tags to your instance to organize and categorize it. Add as many tags as you want by clicking the “New Tag” button and typing the text for the tag. Remove existing tags by clicking the “X” button on the tag itself. Tags help document your instance and can change how Issues are created on the instance, overriding default behavior for specific tags. See the Issues section to learn how tags work.
  • Group: Instances can be added to one group, which can be part of another group. Access the groups page from the Instances page to create a tree of groups to categorize your instances.
  • Agent: Use the drop-down list to assign your instance to one agent. Changing the agent after adding the instance requires a new validation process. During validation, the agent will verify it can contact the instance and query the required monitoring DMVs.
  • Enabled: When this box is checked, QMonitor will monitor the instance. When unchecked, no metrics will be collected.
  • Obfuscate SQL Text: Check this box if you want the text of all SQL commands to be processed to remove constants that may contain sensitive data. For example, an application might run commands like this:
    INSERT INTO Customers (id, name) VALUES (1,'Quantumdatis')
    
    This SQL text could reveal that Quantumdatis is your customer. By obfuscating the SQL text, the command is captured like this:
    INSERT INTO Customers (id, name) VALUES (1,'<value>') 
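
To see the values behind the read-only Name, Engine Edition, and Edition fields described above, you can query the instance directly:

SELECT @@SERVERNAME AS server_name,
       SERVERPROPERTY('EngineEdition') AS engine_edition,
       SERVERPROPERTY('Edition') AS edition;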
    

Considerations for Always On availability groups

When you register a SQL Server instance in QMonitor, make sure you are not adding an Always On listener. QMonitor will not allow you to add a listener. You must add the instance where the AG is defined instead. To check the state of the AG with the HA dashboard, we strongly recommend adding all the nodes in the AG setup.

7 - Set up your SQL Server instance

Steps required to prepare the instance for monitoring

QMonitor agents monitor your instances by running queries against multiple DMVs and system tables at regular intervals. Before you can add an instance to QMonitor, you need to set up the instance to grant the required permissions and create additional objects, such as the Extended Events session “QMonitor”.

Read access to those DMVs is granted either through sysadmin role membership or through grants on the individual objects.

QMonitor does not strictly require sysadmin role membership to collect the bare minimum information to populate the dashboards with performance metrics: if you decide not to grant sysadmin role membership to the QMonitor agent login, you can still monitor your instances by granting permissions on the individual DMVs and system tables. Additional permissions may be required to perform daily checks or to execute QMonitor jobs.

QMonitor offers a setup script that you can download from the Instances page: you can load the script in Management Studio to review the actions it performs and provide the parameters to set up the instance correctly.

At the very top of the script, you can provide the values for three required variables:

  • @LoginName: name of the login used by the QMonitor agent to connect to the SQL Server instance. This can be a Windows login, a SQL Server login or an Azure Managed Identity, depending on your setup. If the login is not present, it will be created.
  • @Password: the password used to authenticate SQL Server logins. If you leave this variable empty, it will be interpreted as indicating a Windows login or a Managed Identity. If you want to use an existing SQL Server login, you can enter any value for this variable and it will be ignored.
  • @Sysadmin: set this variable to ‘Y’ in order to grant sysadmin server role membership to the login indicated in the @LoginName variable. This is the easiest option, which requires the least maintenance on your side. If you want to avoid granting sysadmin role membership, set this variable to ‘N’ and the remainder of the script will take care of granting all the permissions required to collect performance metrics.
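
As an illustrative sketch, the variable assignments at the top of the script might look like this (the values shown are placeholders):

DECLARE @LoginName sysname = N'CONTOSO\qmonitor-svc';   -- login used by the QMonitor agent
DECLARE @Password nvarchar(128) = N'';                  -- empty: Windows login or Managed Identity
DECLARE @Sysadmin char(1) = 'Y';                        -- 'Y' grants sysadmin role membership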

Once you have provided values for all three variables, you can execute the script and review the results. If SQL Server returns any error, please review it, correct the cause, and execute the script again.

When the script executes successfully, the instance is ready to be added to QMonitor and the verification dialog will display all the checks as “Ok”. The check for sysadmin privileges may still display as “Ko” if you decided not to grant sysadmin role membership: you will still be able to proceed with the registration of the instance.

8 - Manage Instances

List, edit, and manage your registered SQL Server instances

The Instances page allows you to list, create, edit and delete registered SQL Server instances.

At the top you have buttons to perform several actions:

  • New Agent: Opens a dialog to create a new agent
  • New Instance: Takes you to the page to register a new SQL Server instance. For a detailed description of the registration process, see Register a SQL Server instance
  • Groups: Takes you to a page where you can create, edit, and delete groups. Groups and subgroups help you categorize your SQL Server instances, display them together in the list, and control settings and exceptions for issues.
    Groups are available to all agents (not specific to a single agent).
  • Queries: Takes you to the page to view, create, edit, and delete custom queries. You can assign queries to one or more agents. The QMonitor agent runs your custom queries and uploads the data to the measurement you define in the query. You can then view the data in the Custom Metrics dashboard. Currently, data is displayed as a table. Additional visualizations are coming soon.
  • Export / Import Excel: Use the upload and download buttons to download a list of the instances you have registered in QMonitor. This is useful for sharing with colleagues or using as input for projects.
    The upload button lets you register multiple SQL Server instances at once using an Excel file. This is helpful when you have many instances and don’t want to register them one by one. The import file should use the same columns as the export file. We recommend using the exported Excel file as a template.
  • Download Client: Takes you to the downloads page where you can download the QMonitor client for your platform.
  • Setup Script: Downloads a script that sets up your instances for monitoring. The script creates logins, extended event sessions, and all objects required by QMonitor. It also grants all required permissions. For a complete description of the setup script, see Set up your SQL Server instance
  • Help: Displays a quick guide to help you create an agent, install it, configure instances, and start monitoring.

At the top of the list is a search bar. Use it to filter the instances shown on the page. Enter any keyword to display servers with a matching name, description, tag, or other text field. You can also enter a version number, such as “2022”, to show only servers with that version.

The list of instances is grouped by agent and by groups/subgroups:

Agent Default
├─ Group A
│ ├─ Subgroup X
│ │ ├─ Instance 1
│ ├─ Instance 2
│ ├─ Instance 3
│ ├─ Instance 4
├─ Group B
│ ├─ Subgroup Y
│ │ ├─ Instance 5
│ │ ├─ Instance 6
│ ├─ Instance 7
│ ├─ Instance 8
Agent Custom
├─ Instance 9
├─ Instance 10

Next to each Agent name, you have the following controls:

  • Status icon: Shows the status of the agent (ok, not running, or not installed). Click the status icon or label to open a dialog with more information.
  • Edit button: Opens a dialog to enter a new name for the agent. Not allowed for agents that have already been installed.
  • Queries: Opens a dialog to assign custom queries to an agent. Queries are defined in the Queries page and assigned to individual agents using this dialog.
  • Delete: Deletes the agent. Be careful: Deleting an agent does not delete the service on the machine where it is installed, does not uninstall the agent, and does not remove any files. You must perform these tasks manually. Also, deleting an agent is not reversible, and data collection at the client will stop immediately.

Nested inside the agents, you may have instances directly, or groups and subgroups that contain instances.

The instance name is a hyperlink to the Instance Overview dashboard. Next to the name, you have:

  • A copy button to quickly copy the name to the clipboard
  • The version tag
  • Any tags you added to the instance

If the instance is an Azure SQL Database, Azure SQL Database Pool, or Azure SQL Managed Instance, a blue tag will appear.

On the right side of the list, you have the following controls:

  • Status icon: Shows the status of the instance (ok, ready, or not ready). Click the status icon or label to open a dialog with more information.
  • Edit button: Opens the edit instance page where you can edit all instance details: connection string, tags, description, and more.
  • Delete: Deletes the instance. Once deleted, the instance stops collecting data. However, existing data remains visible on dashboards until it expires from retention.

9 - Dashboards

Use dashboards to monitor instance health metrics

Using the taskbar on the left, click the topmost button to open a list of the available dashboards, which you can use to monitor your SQL Server instances.

QMonitor uses Grafana dashboards: Grafana is a powerful data analytics platform that provides advanced dashboarding capabilities and represents a de-facto standard for monitoring and observability applications.

All the data in the dashboards can be filtered using the time filter in the top right corner: it offers predefined quick time ranges, like “Last 5 minutes”, “Last 1 hour”, “Last 7 days”, and so on. These are usually the easiest way to select the time range.

You can also use absolute time ranges, which you can select with the calendar on the left side of the time picker popup. Use the calendar buttons on the From and To fields to pick a date, or enter the time range manually.

9.1 - Global Overview

An overall view of your SQL Server estate

The Global Overview dashboard is your entry point to the SQL Server infrastructure: it provides an at-a-glance view of all the instances, along with useful performance metrics.

At the top left of the dashboard, you have KPIs for the total number of monitored instances, divided between on-premises and Azure instances. At the top right you have the same KPI for the total number of monitored databases, again divided between on-premises and Azure.

The middle of the dashboard contains the Instances Overview table, with the following information:

  • SQL Instance: The name of the instance. For on-premises SQL Servers, this corresponds to the name returned by @@SERVERNAME, except that the backslash is replaced by a colon in named instances (you have SERVER:INSTANCE instead of SERVER\INSTANCE).
    For Azure SQL Managed Instances and Azure SQL Databases, the name is the network name of the logical instance.
  • Database: for Azure SQL Databases, the name of the database
  • Elastic Pool: for Azure SQL Databases, the name of the elastic pool if in use, <No Pool> otherwise.
  • Database Count: the number of databases in the instance
  • Edition: the edition of SQL Server (Enterprise, Standard, Developer, Express). For Azure SQL Databases it is “Azure SQL Database”. For Azure SQL Managed Instances, it can be GeneralPurpose or BusinessCritical.
  • Version: The version of SQL Server. For Azure SQL Database it contains the service tier (Basic, Standard, Premium…)
  • Last CPU: the last value captured for CPU usage in the selected time interval
  • Average CPU: the average CPU usage in the time interval
  • Lowest disk space %: the percent of free space left in the disk that has the least space available. For Azure SQL Databases and Azure SQL Managed Instances the percentage is calculated on the maximum space available for the current tier.

At the bottom of the dashboard, you have the detail of the disk space available on all instances. The table contains the following information:

  • SQL Instance: the name of the instance, Azure SQL Database or Azure SQL Managed Instance.
  • Database: for Azure SQL Databases, the name of the database
  • Elastic Pool: for Azure SQL Databases, the name of the elastic pool if in use, <No Pool> otherwise.
  • Volume: drive letter or mount point of the volume
  • Free %: Percentage of free space in the volume
  • Available Space: Available space in the volume. The unit of measure is included in the value.
  • Used Space: Used space in the volume
  • Total Space: Size of the Volume (Used space + Available space)

9.2 - Instance Overview

Detailed information about the performance of a SQL Server instance

This dashboard is one of the main sources of information to control the health and performance of a SQL Server instance. It contains the main performance metrics that describe the behavior of the instance over time.

At the top you can find the Instance Info section, where the properties of the instance are displayed. You have information about the name, version, edition of the instance, along with hardware resources available (Total Server CPUs and Total Server Memory).

You also have KPIs for the number of databases, with the counts for different states (online, corrupt, offline, restoring, recovering and recovery pending).

At the bottom of the section, you have a summary of the state of any configured Always On Availability Groups.

CPU & Wait Stats

At the top of this section you have the chart that represents the percent CPU usage for the SQL Server process and for other processes on the same machine.

The second chart represents the percent CPU usage by resource pool. This chart will help you understand which parts of the workload are consuming the most CPU, according to the resource pools that you defined on the instance. If you are on an Azure SQL Managed Instance or an Azure SQL Database, you will see the predefined resource pools available from Azure, while on an Enterprise or Developer edition you will see the user-defined resource pools. For a Standard Edition, this chart will only show the internal pool.
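
To see which resource pools exist on your instance, you can query the Resource Governor DMVs; for example:

-- Lists the resource pools defined on the instance
SELECT pool_id, name
FROM sys.dm_resource_governor_resource_pools;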

The Wait Stats (by Category) chart represents the average wait time (per second) by wait category. The individual wait classes are not shown on this chart, which only represents wait categories: in order to inspect the wait classes, go to the Geek Stats dashboard.

Memory

This section contains charts that display the state of the instance with respect to memory usage. The chart at the top left, “Server Memory”, shows Target Server Memory vs Total Server Memory. The former represents the ideal amount of memory that the SQL Server process should be using; the latter is the amount of memory currently allocated to the SQL Server process. When the instance is under memory pressure, the target server memory is usually higher than the total server memory.

The second chart shows the distribution of the memory between the memory clerks. A healthy SQL Server instance allocates most of the memory to the Buffer Pool memory clerk. Memory pressure could show on this chart as a fall in the amount of memory allocated to the Buffer Pool.
Another aspect to keep under control is the amount of memory used by the SQL Plans memory clerk. If SQL Server allocates too much memory to SQL Plans, it is possible that the cache is polluted by single-use ad-hoc plans.

The third chart displays Page Life Expectancy. This counter is defined as the amount of time that a database page is expected to live in the buffer cache before it is evicted to make room for other pages coming from disk. A very old recommendation from Microsoft was to keep this counter above 5 minutes for every 4 GB of RAM, but this threshold was identified at a time when most servers had mechanical disks and much less RAM than today.
Instead of focusing on a specific threshold, you should interpret this counter as the level of volatility of your buffer cache: an excessively low PLE may be accompanied by elevated disk activity and higher disk read latency.
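
PLE is also exposed as a performance counter, so you can spot-check the current value directly on the instance:

SELECT cntr_value AS page_life_expectancy_seconds
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name = 'Page life expectancy';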

Next to the PLE you have the Memory Grants chart, which represents the number of memory grants outstanding and pending. At any time, having Memory Grants Pending greater than zero is a strong indicator of memory pressure.

Lazy Writes / sec is a counter that represents the number of writes performed by the lazy writer process to flush dirty pages from the Buffer Pool outside of a checkpoint, in order to make room for other pages from disk. A very high value for this counter may indicate memory pressure.

Next you have the chart for Page Splits / sec, which represents how many page splits are happening on the instance every second. A page split happens every time there is not enough space in a page to accommodate new data and the original page has to be split into two pages.
Page splits are not desirable and have a negative impact on performance, especially because split pages are not completely full, so more pages are required to store the same amount of information in the Buffer Cache. This reduces the amount of data that can be cached, leading to more physical I/O operations.

Activity

This section contains charts that display multiple SQL Server performance counters.

First you have the User Connections chart, which displays the number of active connections from user processes. This number should be consistent with the number of people or processes hitting the database and should not increase indefinitely (which would indicate a connection leak).

Next, we have the number of Compilations/sec vs Recompilations/sec. A healthy SQL Server database caches most of its execution plans for reuse, so that it does not need to compile a plan again: compiling plans is a CPU-intensive operation and SQL Server tries to avoid it as much as it can. A rule of thumb is to keep the number of compilations per second below 10% of the number of Batch Requests per second. A workload that contains a high number of ad-hoc queries will generate a higher rate of compilations per second.
Recompilations are very similar to compilations: SQL Server identifies in the cache a plan with one or more base objects that have changed and sends the plan to the optimizer to recompile it.
Compiles and recompiles are expensive operations and you should look for excessively high values for these counters if you suffer from CPU pressure on the instance.
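
These counters are cumulative since instance startup, so a single snapshot only gives an average; as a rough spot-check of the compilation ratio, you could run:

-- Approximate compilations as a percentage of batch requests (average since startup)
SELECT MAX(CASE WHEN counter_name = 'SQL Compilations/sec' THEN cntr_value END) * 100.0
       / NULLIF(MAX(CASE WHEN counter_name = 'Batch Requests/sec' THEN cntr_value END), 0)
       AS compilations_pct_of_batches
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('SQL Compilations/sec', 'Batch Requests/sec');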

The Access Methods chart displays Full Scans/sec vs Index Searches/sec. A typical OLTP system should show a low number of scans and a high number of Index Searches. On the other hand, a typical OLAP system will produce more scans.

The Transactions/sec panel displays the number of transactions per second on the instance, broken down by database. This allows you to identify which databases are under the highest load, compared to the ones that are not heavily utilized.

TempDB

This section contains panels that describe the state of the Tempdb database. The tempdb database is a shared system database that is crucial for SQL Server performance.

The Data Used Space panel displays the allocated file size compared to the actual used space in the database. Observing these metrics over time allows you to plan the size of your tempdb database, avoiding autogrow events. It also helps you size the database correctly, to avoid wasting disk space on a data file that is never entirely used by actual database pages.

The Log Used Space panel does the same for the log files.

Active Temp Tables shows the number of temporary tables in tempdb. This is not only the number of temporary tables created explicitly from the applications (table names with the # or ## prefix), but also worktables, spills, spools and other temporary objects used by SQL Server during the execution of queries.

The Version Store Size panel shows the size of the Version Store inside tempdb. The Version Store holds data for implementing optimistic locking, taking transaction-consistent snapshots of the data in the tables instead of imposing locks. If you see the size of the Version Store growing continuously, you may have one or more open transactions that are not being committed or rolled back: in that case, look for long-standing sessions with an open transaction count greater than zero.
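
One way to find those sessions (open_transaction_count is available in sys.dm_exec_sessions starting with SQL Server 2012):

SELECT session_id, login_name, status, open_transaction_count, last_request_end_time
FROM sys.dm_exec_sessions
WHERE open_transaction_count > 0;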

Database & Log Size

Database Volume I/O

Queries

9.2.1 - Query Detail

Detailed information about a specific SQL query

The Query Detail dashboard displays details for a single SQL query.

The top panel shows the query text as QMonitor captured it. Queries generated by ORMs or written on a single line can be hard to read. Click “Format” to apply readable SQL formatting.

Click “Copy to Clipboard” to copy the query for running or analysis in external tools (such as SSMS). Use “Download” to save the query as a .sql file.

The table below lists all executions of this query within the selected time range. QMonitor captures a sample every 15 seconds: long-running queries will produce multiple samples, and queries running at the instant of capture will produce a sample as well.

Samples alone may not fully reflect a query’s resource usage or execution time. For a complete impact analysis, rely on the data in the Query Stats dashboard: see Query Stats.

9.3 - SQL Server Events

Events analysis

The Events dashboard shows the number of events that occurred on the SQL Server instance during the selected time range.

The top chart breaks events down by type:

  • Errors
  • Deadlocks
  • Blocking
  • Timeouts

Expand a row to view a chart for that event type by database and a list of individual events. Click a row’s hyperlink to open a detailed dashboard for that event type, where you can inspect the event details.

9.3.1 - Errors

Details about errors occurring on the instance

Expand the “Errors” row to see a chart that shows the number of errors per database over time.

Below the chart, a table lists individual error events with these columns: timestamp, database name, application name, host name, username, severity, error number, and error message.

Only errors with severity >= 16 are included. Error number 17830 is excluded because it can occur very frequently.

Use the filter controls in the column headers to filter the table. Click a column header to sort by that column: each click cycles through ascending, descending, and no sort.

Click the link in the Event Sequence column to open the error details dashboard. It shows the full error message and, when available, the SQL statement that caused the error. The SQL text may be unavailable for some error types.

9.3.2 - Deadlocks

Information on deadlocks

Expand the “Deadlocks” row to view a chart that shows the number of deadlocks for each database.

SQL Server detects a deadlock when two or more sessions block each other so none can proceed. To resolve the conflict, SQL Server selects a victim and terminates that session’s statement. QMonitor captures deadlock events and stores the deadlock graph (XML) for analysis.

Under the chart is a list of deadlock events with columns for time, sequence, database, and user name. Use the column filters and sort controls to filter and sort the table.

Click a row to open the Deadlock detail dashboard.

The Deadlock XML panel displays the deadlock graph in XML format. That graph contains nodes for processes, resources, execution stacks and inputbuf; documenting every node is beyond the scope of this documentation.

The XML includes a victim-list node and one or more process nodes that identify the victim and the participating processes, and that provide details about the resources and SQL statements involved. Use the graph and statements to identify the conflict and to find candidate fixes (indexing, query changes, or retry logic).

The bottom grid lists sessions that were running around the event time, giving a quick overview of related activity. Use the buttons above the grid to set the time window around the event from 1 to 15 minutes.

9.3.3 - Blocking

Blocking Events

Expand the “Blocking” row to view a chart that shows the number of blocking events for each database.

Blocking events are raised by SQL Server when a session waits on a lock longer than the blocked process threshold. By default the threshold is 0 (no events). The QMonitor setup script sets the threshold to 10 seconds as a recommended starting point. If you see too many events, increase the threshold. After you’ve resolved most blocking events, you can experiment with lowering the threshold.
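
The threshold can also be adjusted manually; the setting is server-wide and takes effect after RECONFIGURE:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'blocked process threshold (s)', 10;  -- seconds; 0 disables the reports
RECONFIGURE;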

Under the chart is a list of blocking events with columns for time, sequence, database, object ID, duration, lock mode, and resource owner type. Use the column filters and sort controls to filter and sort the table.

Click a row to open the Blocking detail dashboard.

The top table in the detail view shows the same event information as the events dashboard. The Blocking XML panel displays the blocked process report in XML format. That report contains many nodes; documenting every node is beyond the scope of this documentation.

The XML includes one or more blocked-process nodes and one or more blocking-process nodes. These nodes identify the blocked and blocking processes and provide details about the resources the blocked process was waiting on.

The bottom grid lists sessions that were running around the event time, giving a quick overview of blocking and blocked processes. Use the buttons above the grid to set the time window around the event from 1 to 15 minutes.

9.3.4 - Timeouts

Information on query timeouts

Expand the “Timeouts” row to view a chart that shows the number of timeouts for each database.

Query timeouts occur when a client, application, or command exceeds its configured timeout before the operation completes. QMonitor captures timeout events and records the error text, session details, and, when available, the SQL text.

Under the chart is a list of timeout events with columns for time, sequence, database, duration, application, and username. Use the column filters and sort controls to filter and sort the table.

Click a row to open the Timeout detail dashboard.

The top table in the detail view shows the same event information as the events dashboard. The Timeout Statement panel displays the SQL statement found in the timeout event. The SQL text may be unavailable for some event types.

The bottom grid lists sessions that were running around the event time, giving a quick overview of related activity. Use the buttons above the grid to set the time window around the event from 1 to 15 minutes.

Investigate frequent timeouts by reviewing the SQL, execution plan, and wait types. Consider query tuning or indexing, increase client timeouts only after identifying the root cause, or add retry logic where appropriate.

9.4 - SQL Server Agent Jobs

Check the job activity

The SQL Server Agent Jobs dashboard provides a compact view of job activity and execution history so you can monitor health, spot failures, and investigate scheduling or duration issues.

Jobs Overview

  • KPIs show totals for the selected interval and instances:
    • Total Job Executions: total runs observed.
    • Jobs Succeeded: completed successfully.
    • Jobs Failed: finished with an error.
    • Jobs Retried: runs that retried after transient failures.
    • Jobs Canceled: runs canceled manually or programmatically.
    • Jobs In Progress: currently running jobs.
  • Use these KPIs for a quick health check and to detect elevated failure or retry rates that need attention.

Job Summary

  • A summary table groups executions by job and highlights aggregate stats:
    • Job Name
    • Total Executions
    • Average Duration
    • Max Duration
    • Last Executed At
    • Last Outcome
    • Last Duration
  • Sort and filter the table to prioritize jobs with long or frequently varying durations, or with recent failures.

Job Execution Timeline

  • A Gantt-style timeline plots each job as a row with start/end bars for individual executions across the selected interval.
  • Bars are color-coded by status (succeeded, failed, in progress) so you can quickly see scheduling conflicts, overlapping runs, and periods with failures.
  • Zoom and pan to inspect specific windows and correlate with other metrics.

Job Execution Details

  • A detailed table lists individual executions with full context:
    • Job Name
    • Job ID
    • Job Duration
    • Start Time
    • End Time
    • Job Status
    • Execution Type (Scheduled, Manual, etc.)
    • Error Message (when available)
  • Click a row to view step-level output, error details, and the execution log.

Investigation tips

  • Filter by instance, owner, or outcome to isolate problematic jobs.
  • Correlate job failures and long durations with CPU, I/O, and blocking at the same timestamps to find root causes.
  • For recurring transient failures, consider retry logic or schedule changes to avoid resource contention windows.
  • Use the timeline to detect overlapping schedules; stagger long-running jobs to reduce contention.

9.5 - Query Stats

General Workload analysis

The Query Stats dashboard summarizes workload characteristics and surfaces high cost queries so you can prioritize tuning and capacity decisions.

At the top:

  • Worker Time by Database: a chart that shows query worker time (CPU) attributed to each database. Use this to spot databases driving CPU usage and to compare trends over time.
  • Logical Reads by Database: a chart that shows logical page reads per database. High or rising reads indicate I/O pressure or inefficient query plans.

Below the charts are three sections that drill into queries and their history.

Query Stats by Database and Query

  • Shows top queries grouped by database and text (or normalized text).
  • Columns include cumulative worker time, logical reads, duration, and execution count. Use this view to find the heaviest queries within a specific database.
  • Use filters and sorting to narrow by database, application, or host, then click a row to open the query detail dashboard.

Query Stats by Query

  • Aggregates statistics across databases for identical or normalized query text. This helps identify widely-used queries or shared code paths that impact multiple databases.
  • Columns include totals and averages (CPU, reads, duration, executions) and can be used to detect candidates for indexing, parameterization, or query rewrite.

Query Regressions

  • Highlights queries with significant changes in performance over the selected time window (for example, increases in duration, CPU, or logical reads).
  • Uses baseline comparisons and historical plan information to flag likely regressions caused by plan changes, data distribution shifts, or blocking.
  • Click a query hash to inspect historical plans, execution stats, and complete query text.

Query Store and data sources

  • Query statistics are gathered from QMonitor event capture and, when enabled, from SQL Server Query Store. Query Store provides persisted plan history and runtime aggregates that are useful for regression analysis and plan forcing.
  • If Query Store is disabled or unavailable, QMonitor relies on its own Query Stats collection.

Investigating queries

  • Drill into a query to view SQL text, execution plans, and example runtime statistics. Compare current and historical plans to identify plan changes.
  • Check execution counts — high cumulative cost with many executions may be fixed via caching or tuning; high single-execution cost may need plan-level fixes.
  • Use execution plans, wait types, and logical reads to decide between indexing, query rewrite, statistics updates, or parameter sniffing mitigations.

Controls and tips

  • Use the time-range selectors and per-column filters to focus analysis on the relevant interval and workloads.
  • Sort by cumulative worker time or logical reads to prioritize the biggest opportunities.
  • When investigating regressions, capture a longer history where possible to distinguish transient spikes from sustained regressions.

9.5.1 - Query Stats Detail

Detailed statistics about a specific SQL server query

The Query Stats Detail dashboard focuses on a single query (text or normalized text) and shows how each compiled plan for that query performed over the selected interval.

Top: Query text

  • The full SQL text (or normalized text) is shown at the top for context.
  • Use the copy and download controls to move the text into SSMS or a local editor.
  • Use the format button to view a formatted version of the query text.

Plans summary table (Totals by plan)

  • A compact table that lists each plan compiled for this query with totals and averages:
    • Execution count
    • Worker time (total)
    • Total time (total)
    • Averages (worker time / exec, total time / exec)
    • Rows returned
    • Memory grant (total)
  • Use the table to quickly identify the highest-cost plans and their relative impact.

Totals section

  • This section contains a time-series table of 5-minute aggregated samples for the selected time frame. Each row represents a 5-minute bucket and includes:
    • Sample time
    • Execution count
    • Logical reads
    • Logical writes
    • Memory
    • Physical reads
    • Rows
    • Total time
    • Worker time
    • Plan hash (clickable)
  • Plan hash actions:
    • Click a plan hash to download the plan as a .sqlplan file.
    • Open the downloaded .sqlplan in SSMS to inspect the graphical plan and operator costs. A DMV sketch for cross-checking a plan hash follows at the end of this section.
  • Charts in Totals:
    • Execution count by plan
    • Memory by plan
    • Total time by plan
    • Worker time by plan
  • Use these charts to compare plan behavior over time and to spot periods where a specific plan dominated resource usage.
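
If you want to cross-check a plan hash against the live plan cache on the instance, the standard DMVs can be queried directly. A sketch (the hash literal is a placeholder; substitute the value shown in the table):

    SELECT qs.query_plan_hash,
           qs.execution_count,
           qs.total_worker_time,
           qp.query_plan              -- XML show-plan, viewable in SSMS
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
    WHERE qs.query_plan_hash = 0x06334E1172C0A22E;  -- placeholder hash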

Averages section

  • Similar to Totals, but values are averaged per sample or per execution:
    • The table shows the same 5-minute samples with averaged metrics (per-exec averages where applicable).
    • Columns mirror the Totals table (execution count, reads, writes, memory, physical reads, rows, total time, worker time, plan hash).
  • Charts in Averages:
    • Total time (avg) by plan
    • Worker time (avg) by plan
  • Use averages to find plans with high per-execution cost even if execution counts are low.

Controls and investigation tips

  • Use the dashboard time-range controls to focus on the interval of interest.
  • Filter or sort the tables by plan hash, execution count, or cost metrics to prioritize investigation.
  • Download and open .sqlplan files in SSMS to review operator costs, warnings, and estimated vs actual row counts.
  • Compare Totals and Averages: high totals with low averages usually indicate frequent cheap executions; high averages suggest expensive single executions.

9.6 - Capacity Planning

An overall view of resource consumption to plan resource upgrades

This dashboard presents historical CPU capacity and utilization for your SQL Server instances so you can spot trends, judge current load, and predict when additional resources will be needed. Use the KPIs, charts, and summary table to compare differently sized servers on a common scale, identify sustained or growing load, and prioritize upgrades, VM rightsizing, or consolidation.

CPU History

  • The CPU History section begins with KPIs that summarize the selected instances: total available cores, the average CPU utilization expressed on a per-core basis (normalized to one core), and the average number of cores effectively used. Normalizing lets you compare a small VM and a large host on the same footing — for example, 20% average CPU on a 4‑core host becomes 80% when scaled to a single-core equivalent (20 * 4).
  • “Cores used” converts that normalized percentage into an estimated count of cores in use (Avg CPU% * Total Server Cores / 100). These KPIs provide a quick sense of both intensity (per-core pressure) and absolute demand.
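
A worked example of the two formulas, written as runnable T-SQL so the arithmetic is explicit (the input values are illustrative):

    DECLARE @AvgCpuPct float = 20.0,  -- average CPU% reported for the host
            @TotalCores int  = 4;     -- cores assigned to the host

    SELECT NormalizedCpuPct = @AvgCpuPct * @TotalCores,          -- 80 (single-core equivalent)
           CoresUsed        = @AvgCpuPct * @TotalCores / 100.0;  -- 0.8 cores in use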

Charts and trends

  • SQL Server CPU Usage (normalized to 1 core) is a time-series view that scales each instance’s CPU to a single-core equivalent. Use it to compare intensity across instances and to see how load changes over time.
  • SQL Server Core Usage shows estimated cores in use over time. This chart helps you understand aggregate core demand, spot sustained increases, and evaluate capacity headroom.

Summary table

  • The table ties the charts to individual instances and provides:
    • SQL Instance name
    • Avg CPU Usage % over the selected interval
    • Total Server Cores assigned to the host
    • CPU Usage % (Normalized) = Avg CPU% * Total Server Cores
    • Cores Used (Normalized) = (Avg CPU% * Total Server Cores) / 100
  • Use the table to rank instances by absolute CPU usage, and to identify machines that would benefit from deeper investigation or redistribution of workload.

Interpreting results and next steps

  • Look for rising trends in normalized CPU% or sustained high cores-used values; these indicate growing pressure that should be addressed before performance degrades.
  • For sustained high per-core utilization, investigate top CPU queries, parallelism settings, or workload placement. For high aggregate core demand, consider adding cores, resizing VMs, or offloading noncritical workloads.
  • Correlate CPU trends with memory, I/O, and wait-type dashboards to form a complete capacity plan; allow headroom (commonly 20–30%) for growth and transient spikes unless autoscaling is available.

Controls and tips

  • Adjust the dashboard time range to reveal hourly, daily, or weekly trends depending on your planning horizon. Filter by environment (prod/test) or cluster (applications or domain) to focus analysis. Combine these views with query stats dashboards to identify root causes before changing capacity.

Data & Log Size

This section tracks historical data and log file growth so you can spot rapidly growing databases and plan storage capacity or maintenance.

Top: Data & Log KPIs

  • Initial Data Size: database data file size at the start of the selected interval (GB).
  • Latest Data Size: most recent data file size (GB).
  • Data Growth: growth between initial and latest sizes.
  • Initial Log Size: log file size at the start of the interval (GB).
  • Latest Log Size: most recent log file size (GB).
  • Log Growth: growth for the log file.

Charts

  • Data Size over time: time-series chart showing data file size changes for selected servers. Use it to detect steady growth or sudden jumps.
  • Log Size over time: time-series chart showing log file size trends.

Summary table

  • Columns:
    • SQL Instance
    • Database
    • Initial Data Size
    • Latest Data Size
    • Data Growth
    • Initial Log Size
    • Latest Log Size
    • Log Growth
  • Use the table to rank databases by growth, find candidates for archive, compression, index maintenance, or retention policy changes, and to plan storage purchases or quota adjustments.

Interpretation and actions

  • Rapid, sustained data growth may indicate new workloads, retention changes, or missing cleanup jobs — investigate recent deployments and ETL processes.
  • Large or growing log files often point to long-running transactions, infrequent log backups (in full recovery), or heavy bulk operations — review backup and recovery settings and transaction patterns (see the sketch after this list).
  • Use time-range filters to focus on growth windows (daily, weekly, monthly) and sort the table by growth to prioritize action.
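
To check why a log file keeps growing, and to take a routine log backup, a minimal T-SQL sketch follows (the database name and backup path are placeholders):

    -- Why can't the log be truncated? (LOG_BACKUP, ACTIVE_TRANSACTION, ...)
    SELECT name, recovery_model_desc, log_reuse_wait_desc
    FROM sys.databases;

    -- Routine log backups keep the log in check under the full recovery model
    BACKUP LOG [YourDatabase]
    TO DISK = N'X:\Backups\YourDatabase.trn';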

Disk Usage

This section shows historical disk latency and throughput so you can spot I/O bottlenecks and capacity issues affecting database performance.

Top: Disk KPIs

  • Avg Read Latency: average read latency observed over the selected interval (ms).
  • Avg Read Bytes/sec: average read throughput (bytes/sec).
  • Avg Write Latency: average write latency observed over the selected interval (ms).
  • Avg Write Bytes/sec: average write throughput (bytes/sec).

Charts

  • Avg Read/Write Latency: time-series chart showing read and write latency over time. Use this to spot periods of elevated latency and correlate with changes and trends in the workload.
  • Total Throughput: time-series chart of Read Bytes/sec and Write Bytes/sec. Use this to identify sustained high throughput or bursts that may saturate the storage subsystem.

Disk Usage Summary table

  • Columns:
    • SQL Server Instance
    • Avg Read Latency
    • Read Bytes/sec
    • Avg Write Latency
    • Write Bytes/sec
  • The table summarizes latency and throughput per instance so you can rank servers by I/O pressure and prioritize investigation or remediation.

Interpretation and actions

  • Elevated read or write latency often points to storage contention, slow disks, or high queue depth; correlate with throughput and queue metrics.
  • High read throughput with low latency may indicate healthy caching; high throughput with rising latency suggests the cache is saturated or the storage tier is overloaded.
  • For write-heavy workloads, review transaction patterns, log placement, and backup frequency; consider faster storage or write-optimized tiers.
  • Use instance filters and time-range controls to isolate problematic windows and correlate with query, CPU, and I/O dashboards before changing hardware or storage tiers.

Memory Usage

This section shows memory demand and headroom so you can detect pressure on the buffer pool and plan memory changes or workload placement.

Top: Memory KPIs

  • Avg Allocated Memory: average memory allocated to SQL Server over the selected interval (MB or GB).
  • Max Allocated Memory: peak memory allocation observed during the interval.
  • Avg Target Memory: average target memory SQL Server attempted to obtain (based on internal heuristics / memory clerk targets).
  • Max Target Memory: peak target memory during the interval.

Charts

  • Memory usage over time: time-series chart showing Avg Allocated Memory, Max Allocated Memory, Avg Target Memory, and Max Target Memory. Use this to spot sustained allocation near target (indicating pressure) or sudden spikes in demand.

Summary table

  • Columns:
    • SQL Instance
    • Avg Allocated Memory
    • Max Allocated Memory
    • Avg Target Memory
    • Max Target Memory
  • The table provides a per-instance snapshot to help rank servers by memory consumption and to identify candidates for resizing or investigation.

Interpretation and actions

  • Large gaps between Target and Allocated Memory may indicate internal limits; investigate OS signals, memory clerks, and the max server memory configuration (see the sketch after this list).
  • Correlate memory trends with CPU and I/O dashboards to determine whether adding memory will reduce I/O or improve overall performance before changing server sizing.
  • Use time-range filters to identify daily or weekly patterns and to size for peak demand with appropriate headroom.
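
To inspect the configured memory cap and the process-level memory state alongside these KPIs, a minimal sketch using standard commands and DMVs:

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)';   -- current configured cap

    SELECT physical_memory_in_use_kb,
           memory_utilization_percentage,
           process_physical_memory_low
    FROM sys.dm_os_process_memory;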

9.7 - Index Analysis

Missing indexes and possible bad indexes

The Index Analysis dashboard helps you prioritize index work by showing high-value missing index suggestions and identifying existing indexes that may be hurting performance because they attract writes but provide little read benefit.

The values in the dashboard are based on a snapshot of the last 24 hours.

Missing Indexes (top)

  • This table lists optimizer-suggested indexes and helps estimate potential query benefit:
    • Object Name: the table or view that would benefit from the index.
    • Advantage: a short description of the expected improvement (e.g., reduced logical reads).
    • Impact: an estimate of the relative benefit across the workload.
    • Equality columns: columns recommended for equality predicates (key columns).
    • Inequality columns: columns recommended for range/inequality predicates.
    • Included columns: non-key columns suggested for covering the queries.
    • User Seeks: number of seeks that could use the suggested index.
    • Last Seek: timestamp of the last observed seek opportunity.
    • Unique compiles: compilation counts that referenced the missing index.
    • User cost: estimated cumulative cost reduction if the index were present.
  • Use this table to find high-impact index candidates. Review the SQL that benefits, confirm selectivity and cardinality, and weigh expected read savings against write and storage costs before creating an index.
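
When a suggestion passes review, the resulting index typically maps the equality columns to key columns and the included columns to a covering INCLUDE list. A hypothetical sketch (all names are placeholders):

    CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
    ON dbo.Orders (CustomerId)           -- equality (then inequality) columns become keys
    INCLUDE (OrderDate, TotalAmount);    -- included columns cover the query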

Possible Bad Indexes (below)

  • This table surfaces existing indexes that may be candidates for removal, consolidation, or rebuild because they incur write overhead without enough read benefit:
    • Database: database name containing the index.
    • Schema: schema name.
    • Table: table name.
    • Index: index name.
    • Total Writes: cumulative writes (inserts/updates/deletes) affecting the index.
    • Total Reads: cumulative reads (scans/seeks) that used the index.
    • Difference: Total Writes minus Total Reads (high positive values suggest write-heavy, low-used indexes).
    • Fill factor: configured fill factor for the index (indicates fragmentation risk).
    • Disabled: whether the index is disabled.
    • Hypothetical: whether the index is hypothetical (not actually created).
    • Filtered: whether the index uses a filter predicate.
  • Sort and filter the table to find indexes with large write cost and minimal read benefit. Investigate usage patterns and test removal or consolidation in a non-production environment before dropping indexes.
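
To double-check a write-heavy, rarely read index on the instance itself, the usage DMV can be queried directly; a sketch for the current database:

    SELECT OBJECT_NAME(s.object_id)                     AS table_name,
           i.name                                       AS index_name,
           s.user_seeks + s.user_scans + s.user_lookups AS total_reads,
           s.user_updates                               AS total_writes
    FROM sys.dm_db_index_usage_stats AS s
    JOIN sys.indexes AS i
      ON i.object_id = s.object_id AND i.index_id = s.index_id
    WHERE s.database_id = DB_ID()
    ORDER BY s.user_updates DESC;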

Controls and investigation tips

  • Filter by database, schema, or table to narrow scope.
  • Click a row to open index details (definition, sample queries, historical usage) and to download index DDL for review.
  • Consider the write cost, storage, and maintenance impact when acting on missing index suggestions. Missing-index estimates are heuristic — validate with test queries and with estimated and actual execution plans.
  • For possible bad indexes, evaluate whether a filtered index, different key/included columns, or index consolidation would preserve read benefits while reducing write overhead.

9.8 - Custom Metrics

Custom Metrics

This dashboard displays custom metric measurements pulled from the selected measurement source.

Top controls

  • Measurement selector: a dropdown to choose which measurement to query.
  • Time filter: applies the selected time window to the query.

Data table

  • The table below the selector shows the measurement data for the chosen measurement and time range. Typical columns include timestamp, value, and any measurement tags or labels.
  • The dashboard retrieves up to 10,000 data points for the selected query. If the time range or measurement produces more points, results are truncated to this limit.

Usage notes

  • Select the desired measurement, choose an appropriate time window, and refresh the view to populate the table.
  • Narrow the time range or apply server/instance filters when results are truncated due to the 10,000‑point limit.
  • Export or copy table rows for offline analysis if needed.

Future capability

  • We are working on support for creating custom dashboards directly from these measurements. This feature is in development and will be available soon.

9.9 - Geek Stats

Geek Stats

The Geek Stats dashboard exposes low-level contention and synchronization metrics so you can diagnose waits and spinlock behavior that can impact throughput or cause unexpected CPU consumption. Use these views to find hotspots, correlate with higher-level symptoms, and guide targeted fixes.

Not all users are interested in this type of metric, so we decided to dedicate a separate dashboard to it instead of including this information in the Instance Overview dashboard.

Wait Stats

  • Wait Stats (by category): a top-level chart that groups waits into the same categories used on the Instance Overview. This view helps you quickly see which broad wait families (I/O, CPU-related, latch/lock, network, etc.) dominate during the selected interval.
  • Wait Stats (by type): a detailed chart that shows individual SQL Server wait types (the same names you see in SQL Server DMVs). Use this to identify specific waits such as PAGEIOLATCH, CXPACKET, SOS_SCHEDULER_YIELD, or ASYNC_NETWORK_IO and to track their trend over time. Filter and sort to surface the top contributing wait types and drill into the time windows where they spike.
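
The by-type chart mirrors what SQL Server itself exposes; to cross-check on the instance, a sketch that lists the top waits by accumulated wait time:

    SELECT TOP (10)
           wait_type,
           waiting_tasks_count,
           wait_time_ms,
           signal_wait_time_ms
    FROM sys.dm_os_wait_stats          -- note: includes benign system waits; filter as needed
    ORDER BY wait_time_ms DESC;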

Spinlock Stats

  • The Spinlock section presents four charts that measure spinlock activity:
    • Collisions: how often threads encountered a collision on a spinlock.
    • Spins: total spin attempts observed.
    • Spins per collision: average number of spins required for each collision, indicating how costly each contention event is.
    • Backoffs: counts of threads backing off (yielding) after spinning.
  • Interpret spin metrics together: high collisions with high spins-per-collision imply frequent, costly busy-waiting and wasted CPU. High backoffs suggest threads are repeatedly yielding and retrying.
  • Use these charts to correlate spinlock pressure with CPU spikes or scheduling issues, and to prioritize micro-level fixes (e.g., addressing hot memory structures, reducing shared-state contention, or applying product fixes).
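
The four charts correspond to columns of the spinlock DMV, so the same figures can be cross-checked on the instance with a short query:

    SELECT TOP (10)
           name,
           collisions,
           spins,
           spins_per_collision,
           backoffs
    FROM sys.dm_os_spinlock_stats
    ORDER BY spins DESC;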

Investigation tips

  • When you see elevated waits or spinlock activity, correlate timestamps with query CPU, I/O, and scheduler metrics to find root causes.
  • High category-level waits point you to broad problem areas; the by-type view helps you pinpoint exact resources or operations involved.
  • For spinlock issues, examine workload patterns that touch shared caches, global counters, or frequently-updated metadata. Changes such as reducing contention hotspots, batching updates, or upgrade/patches may help.

9.10 - Always On Availability Groups

Check High Availability of databases

This dashboard provides an at-a-glance overview of Always On Availability Groups (AGs) across the selected instances. Use it to verify AG health, replica roles, and synchronization status, and to quickly locate groups that need attention.

Availability Groups table

  • Availability Group: AG name (click to open the AG detail dashboard).
  • Primary Replica: the current primary replica host for the AG.
  • Secondary Replicas: comma-separated list of configured secondary replicas.
  • Total Nodes: number of replicas configured in the AG.
  • Online Nodes: replicas currently online and reachable.
  • N. Databases: number of databases protected by the AG.
  • Synchronization Health: overall sync state (Healthy, Not Healthy) based on replica synchronization and failover readiness.
  • Listener DNS Name: cluster listener DNS name, if configured.
  • Listener IP: listener IP address or addresses.

Usage

  • Click an AG name to view the detail dashboard for per-replica metrics, database synchronization progress, and failover readiness.
  • Filter or sort the table to find AGs with offline nodes, unsynchronized databases, or other anomalies that require investigation.

9.10.1 - Always On Availability Group Detail

Check the state of a High Availability Group

This dashboard shows detailed health and replication telemetry for a single Always On Availability Group (AG). Use it to verify replica roles, track failovers, monitor data movement, and identify databases that need attention.

Top: AG summary

  • Availability Group: AG name
  • Primary Replica: current primary replica host
  • Secondary Replicas: configured secondaries
  • Total Nodes: count of configured replicas
  • Online Nodes: replicas currently online and reachable
  • N. Databases: number of databases in the AG
  • Synchronization Health: overall sync state for the AG
  • Listener Name: cluster listener DNS name (if configured)
  • Listener IP: listener IP address(es)

Primary Replica Failovers timeline

  • A timeline that shows which replica was primary at each point in time.
  • Use it to review recent failovers and to correlate role changes with events or performance anomalies.

Availability Group Nodes table

  • Replica Instance: instance name for each replica
  • Replica role: Primary / Secondary
  • Sync. Health: per-replica synchronization status
  • Availability Mode: synchronous / asynchronous
  • Failover Mode: automatic / manual
  • Seeding Mode: automatic / manual
  • Secondary Allow Connections: read-intent settings for secondaries
  • Backup Priority: priority used for backup routing
  • Endpoint URL: data movement endpoint
  • R/O Routing URL: read-only routing address (if configured)
  • R/W Routing URL: read-write routing address (if configured)

Nodes KPIs and online history

  • KPIs: Total Nodes and Offline Nodes for quick situational awareness.
  • Online Nodes chart: time-series showing the number of online replicas over the selected interval to spot outages or flapping nodes.

Transfer rates and queues

  • Transfer Rates chart: Send Rate (how fast the primary sends changes) and Redo Rate (how fast secondaries apply changes). Use to spot slow secondaries or network saturation.
  • Transfer Queue Size chart: Send Queue Size and Redo Queue Size. Growing queues indicate replication lag or bottlenecks that may affect failover readiness.
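
These send and redo measures are also exposed by the standard Always On DMVs, so you can cross-check them on a replica directly; a sketch (QMonitor's exact collection source is not documented here):

    SELECT DB_NAME(drs.database_id) AS database_name,
           drs.log_send_queue_size,   -- KB waiting to be sent to the secondary
           drs.log_send_rate,         -- KB/sec sent
           drs.redo_queue_size,       -- KB waiting to be redone on the secondary
           drs.redo_rate              -- KB/sec applied
    FROM sys.dm_hadr_database_replica_states AS drs;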

Health history charts

  • Online node history: online vs total nodes over time to visualize availability trends.
  • Database Health History: healthy databases vs total databases to track when databases become unsynchronized or unhealthy.

Databases Replication Status table

  • SQL Instance: instance hosting the database replica
  • Database Name: database name
  • Sync. Health: synchronization status for the database
  • Is Primary Replica: indicates whether this row is the primary
  • Availability Mode: database-level availability mode (inherits from AG)

Usage and investigation tips

  • Correlate failover times with primary timeline and with performance metrics (CPU, I/O) to find causes of role changes.
  • Increasing send/redo queues or sustained low redo rates often point to network, disk, or resource contention on secondaries — investigate those hosts before initiating failover or taking corrective action.

10 - Issues

Check instance health with issues

This page lists all issues raised across monitored SQL Server instances.

An issue is created whenever an instance violates one or more rules defined for that instance. Rules are organized as Policies and Predicates: a Policy is a container, and Predicates are the individual tests that are evaluated.

Evaluation engine

  • The background evaluation engine runs policies on a schedule:
    • Diagnostic policy: checks instance configuration and best-practice enforcement. Runs once per day.
    • Performance policy: evaluates runtime performance predicates. Runs every 5 minutes.
  • If any predicate in a policy fails during evaluation, an issue is created.
  • When a previously failing predicate later succeeds, the corresponding issue is automatically closed.

Issue lifecycle

  • Issues behave like ticketed findings: they describe a problem and include context, severity, and links to the instance and failing predicate.
  • The system avoids duplicate open issues: if an issue is already open for the same predicate/instance, further failing evaluations do not create new issues; the existing issue is kept and updated as needed.

List view and filters

  • The Issues page shows the full set of issues; use the controls at the top of the page to filter the list:
    • Open vs All: show only open issues (default) or include closed issues.
    • Instance filter: select one instance from the dropdown.
    • Text filter: search by text in issue title, description or instance name.
    • Flagged only: show only issues marked for follow-up.
  • Use the filters to focus on current operational problems or to review historical findings.

Selection and bulk actions

  • Next to the filters are controls to Select All or Unselect All issues.
  • Each row has a checkbox; when one or more issues are selected the toolbar shows a “Close Selected” button. Use that button to close all selected issues in a single operation.
  • Bulk close preserves issue metadata and records the closing actor and time.

Grouped view (by predicate)

  • At the top-left of the toolbar a toggle switches the list between the normal individual-issue view and a grouped view organized by predicate.
  • In grouped view each row represents a predicate and shows counts of issues for that predicate. Use this view to quickly spot frequently failing predicates and to prioritize policy-level fixes.
  • Click a grouped row to expand and see the underlying issues or to jump to a predicate detail page for remediation guidance.

At the top of the page there is a Create new issue button. Use this to manually open an issue that is not produced by policy evaluation — for example, to record a task, an ad-hoc investigation, or an operational incident. The button opens the Create new issue page where you can set the title, description, severity, target instance, flags, and assignee. Manually created issues follow the same lifecycle as policy-generated issues and can be closed when the underlying problem is resolved.

Usage tips

  • Start with Open + Flagged to triage urgent problems.
  • Use the instance filter to hand off findings to owners or DBAs responsible for a specific host.
  • Click an issue to view details, suggested remediation, and evaluation history. Closed issues remain available for audit and trend analysis.

Export and settings

  • At the top-right of the page there is an export button that downloads the current list of issues as an Excel file. Use this to share findings with stakeholders who do not use the application or to perform offline analysis.
  • The settings (gear) button opens the Policies page. From there you can view and edit Policies and Predicates or adjust parameters and notification settings for each rule.

List behavior and row layout

  • The issues list uses infinite scrolling to load more rows as you scroll.
  • Each row represents a single issue and contains:
    • Title: typically the predicate name for policy-generated issues. Click the title to open the Issue Detail page.
    • Rule detail: a short explanatory line describing the failing predicate (for example: “free_percent is 11, should be 20”).
    • Instance and object: the target instance and, when applicable, the database or object name referenced by the issue.
  • Right-side controls:
    • Created date: timestamp when the issue was opened.
    • Flag icon: toggle to mark the issue as important for follow-up.
  • Use the row checkbox to select issues for bulk actions (Select All / Unselect All and Close Selected).

10.1 - Issue Details

Details of an issue

This page shows full context and actions for a single issue.

Top: Title and description

  • Title: the issue title (often the predicate name for policy-generated issues).
  • Description: a short statement of the failing condition. Example: “free_percent is 11, should be 20”.

Metadata

  • Created: timestamp when the issue was opened.
  • Instance: the SQL Server instance the issue refers to.
  • Database / Object: the database or object name when applicable.

Explanation and remediation

  • A concise explanation describes why the issue matters and recommends remediation steps, with practical suggestions (for example: clear old files, adjust backup/retention policies, increase disk capacity, or change maintenance windows) and links to relevant documentation or runbooks.

Metric chart

  • A time-series chart displays the metric evaluated by the predicate (for example, available disk space) over the selected interval. The chart shows values up to and including the point when the predicate was evaluated, so you can see the trend leading to the violation.

Policy evaluation details

  • “Show Policy Evaluation Details” reveals the full evaluation record for the predicate: input properties, threshold values, measured value, evaluation timestamp, policy name, predicate id, and any additional diagnostic fields.
  • Use this to verify the exact inputs that produced the issue.

Tags

  • The Tags control lists tags assigned to the issue and lets you add or remove tags for workflow or routing (for example: “ops”, “storage”, “urgent”).

Closing notes

  • Enter free-form notes describing how the issue was investigated or fixed.
  • Notes are stored with the issue for audit and postmortem purposes.

Actions

  • Save: persists tag changes and closing notes without changing issue state.
  • Close issue: manually closes the issue. If the underlying predicate still fails on a subsequent automated evaluation, a new issue will be opened.
  • Exclude: opens a dialog to disable this predicate for the specific instance/database/object/group/tag combination. The dialog also offers the option to close all matching issues for the selected scope. Use exclusions sparingly and document the rationale (for example, a test instance).
  • Fix issue: when available, this button schedules an automated remediation job that executes the predefined T-SQL script associated with the predicate. Not all issues support automatic fixes — availability depends on the predicate and the remediation defined for it. The jobs UI shows the script to be run, required permissions, and the scheduled job status. Execution logs and results are recorded with the job.

Behavior and history

  • Closed issues remain available for historic review; use filters on the Issues list to include closed items.

10.2 - Policies

Policies

This page lists all Policies configured in the system.

Overview

  • Each row shows a single Policy with its name and a short description.
  • Click a Policy name to open the Predicates page, which lists every predicate (rule) contained in that Policy.

Enable / disable control

  • A toggle beside each Policy enables or disables the Policy and all of its predicates in one action.
  • Disabling a Policy prevents its predicates from being evaluated by the background engine and stops new issues from being created by those rules.
  • Existing open issues created by predicates in a disabled Policy remain visible and can be closed manually; they are not automatically removed.

Usage notes

  • Use the list to review which policy groups are active (for example, Diagnostic vs Performance) and to temporarily suspend evaluation during maintenance windows or policy tuning.
  • Click through to Predicates to adjust individual rules or thresholds.

10.3 - Predicates

Predicates

This page lists all predicates defined for the selected Policy.

Predicates list

  • Each row represents a single predicate with concise metadata:
    • Name: predicate identifier (click to open predicate detail).
    • Enabled: toggle to enable or disable the predicate evaluation.
    • Notify: toggle to enable or disable notifications for this predicate.

Behavior

  • Disabling a predicate prevents the evaluation engine from running that predicate and stops new issues from being created by it.
  • Disabling notifications suppresses alerting for failures; issues are still created and tracked, but no notifications are sent for those events.

Controls and actions

  • Click the predicate name to go to the predicate details page.
  • Use the inline toggles to quickly enable/disable predicates or notifications.

Usage tips

  • Use the Notifications toggle to mute noisy predicates while retaining the audit trail of issues.
  • Disable a predicate only when you are certain the check is irrelevant or will cause false positives; prefer tuning thresholds where possible.

Overrides and defaults

  • Predicates use shared default values for thresholds, enabled state, and other properties. All organizations inherit the same defaults but may override them as needed.
  • Overrides can be applied at two scopes:
    • Global: applies everywhere the predicate is used.
    • Scoped: applies to a specific combination of instance, database, object, group, or tag. For example, an override on a tag affects all instances that have that tag; an override on a specific SQL instance affects only that instance.
  • Changing any predicate property (including disabling it) counts as an override because the predicate’s effective configuration differs from the shared default.
  • Predicates that have overrides are shown beneath the original unmodified predicate. Overrides are highlighted (orange text) to distinguish them from the default predicate rows (green).
  • When an override is scoped to instance/database/object/group/tag, the override row displays the scope information so you can see exactly where the change applies.
  • To remove an override and revert to the default behavior, use the delete icon on the right of the override row. Deleting the override restores the predicate to the shared default values.

Exclude from evaluation via an issue

  • The “Exclude” action available on an Issue creates the same kind of override described above. When you open the Exclude dialog from an Issue you are setting the predicate’s enabled state to off for the scope you select.
  • The dialog presents checkboxes for scope selection: instance, database, object, group, and tag. Selecting one or more scopes creates a scoped override (enabled = off) that prevents the predicate from running for that specific combination.
  • Excluding from an Issue can optionally close all matching open issues for the selected scope; the override itself is recorded and shown under the predicate (highlighted in orange).
  • Exclusions are reversible: delete the override row to restore the default behavior, or edit the override to change its scope or enabled state.

10.4 - Predicate Details

Predicate Details

This page contains a form to view and edit all properties of a predicate, including scoped overrides and the evaluation expressions used by the background engine.

Scope controls

  • Server Tag: apply this predicate only to servers that have the selected tag.
  • Server Group: limit the predicate to servers belonging to a specific group.
  • Server: target a single SQL instance for this override.
  • Database: restrict the predicate to a particular database.
  • Object: restrict the predicate to a specific object (for example, a file, table, or index).
  • Use the scope controls to create a scoped override; leaving a field empty makes the override less specific (broader).

Editable expressions and fields

  • Evaluation Expression: the boolean expression the engine evaluates. The default is “actual = expected”, but you can change this to any valid expression supported by the engine (for example “actual < expected” or “actual > low AND actual < high”).
  • Expected Expression: the expected value or expression to compare against. This may be a constant (e.g., 20), a computed expression, or one of the available properties of the object being checked.
  • Actual Expression: the expression that yields the measured value to be evaluated (typically the name of a property from the underlying object, e.g., free_percent or file_size_gb).
  • Property Name: the logical name used to identify the property in UIs and reports. It maps the actual expression to a friendly identifier and can be used by other parts of the system to reference this value (for example in charts or exported data).
  • Filter Expression: an optional expression that narrows the set of objects the predicate applies to. Use it to include or exclude specific objects so the check only runs against matching items (for example: database_name = 'X' AND file_type = 'log'). An illustrative configuration follows this list.
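
As an illustration, here is how the fields might be filled for a minimum-free-space check like the free_percent example used on the Issues pages (the values are hypothetical, not shipped defaults):

    Evaluation Expression:  actual >= expected
    Expected Expression:    20
    Actual Expression:      free_percent
    Property Name:          free_percent
    Filter Expression:      file_type = 'log'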

Behavior and helpers

  • Save applies the changes and creates or updates an override for the selected scope. Changing any property (including disabling) counts as an override.

11 - Queries

Queries

Content coming soon.

12 - Settings

Configure QMonitor

The Settings page lets you configure personal preferences and organization defaults that affect how QMonitor behaves for your user and the orgs you manage.

Account management

  • Manage your account: button at the top of the page that opens your Account page. Use the Account page to update your email, password, 2FA, and personal contact details.

Appearance

  • Theme: choose Light or Dark theme for the application. The selected theme is applied immediately after you click Save.
  • Time zone: select the time zone used to display timestamps in the UI for your user. Note: the time zone setting applies only to your user profile, not to the organization or other users.
  • Save applies appearance changes and persists them to your profile.

Organizations

  • Current organization: the name of the organization you are currently scoped to is shown near the top of the Organizations section.
  • Manage Organization: button opens the Manage Organization page where you can change org-level settings, members, and billing (if permitted by your role).
  • Organization actions:
    • Join organization: request to join an existing organization.
    • Create new organization: start a new organization and become its owner.
    • Switch organization: change your active organization context when you belong to multiple organizations.

Behavior and tips

  • Personal settings (theme and time zone) are stored per user and follow you across devices when you sign in.
  • Organization management actions depend on your role and permissions; some controls may be hidden if you lack admin privileges.
  • Use Save to persist changes; unsaved edits are not applied to your session.

Accessibility and support

  • Changing theme can improve readability and reduce eye strain for long sessions; use the Dark theme for low-light environments.
  • If you need assistance managing account or organization settings, contact your org administrator.

12.1 - Preferences

Preferences

12.2 - Manage Organization

Manage Organization

The Manage Organization page is where organization owners configure org-level settings, membership, billing, and branding. Access to this page is restricted to organization owners and administrators; non-admin users see fewer options or receive an access denied message.

Left-hand tabs

  • General: edit the organization name and display name.
  • Notifications: configure global notification channels.
  • Billing Details: enter or update billing contact, billing address, and tax information used on invoices.
  • Licenses: view current license counts and usage, purchase additional seats, or renew expiring licenses.
  • Payment History: review past invoices, payments, and subscription activity.
  • Members: invite, remove, or change roles for organization members; manage pending invitations and role-based permissions.
  • Customize: upload organization logo and set branding options shown in the UI and on shared reports.

12.2.1 - General Options

General Options

Organization name and display name

  • Display Name: a human-friendly name shown in the UI and reports.
  • Actual Name: a machine-safe identifier used by tooling (ConfigWizard, ConfigWizardCmd, agents). The actual name is sanitized to remove or replace characters that could break scripts or client tools; avoid spaces and punctuation when choosing an organization name.

Default instance settings

  • “Acknowledge you are running jobs as a sysadmin”: when checked, new instances registered in this organization will have the same acknowledgement enabled by default. This setting simply controls the default for newly added instances and can be changed per-instance later.

Organization key

  • Regenerate organization key: use this to replace the org key if it is lost or compromised. Regenerating the key immediately invalidates the current key — all existing agents and integrations using the old key will stop communicating and must be reconfigured with the new key. Confirm this action only after planning for agent reconfiguration.

Delete organization

  • Delete organization: permanently removes the organization and all org-scoped data. This action is irreversible and requires explicit confirmation (type-to-confirm). Only organization owners can perform deletion.

12.2.2 - Notifications

Notifications

This page configures how your organization receives alerts from QMonitor. Use these settings to direct operational notifications to the right channels and reduce noise by choosing the preferred delivery methods.

Supported notification channels

  • Email: enter one or more email addresses to receive alerts for the organization. We recommend using a mailing list or group alias rather than individual addresses so notifications reach the right team without relying on a single person.

    • Note: Email can generate high volume and is not recommended for urgent operational alerts; use it primarily for summaries or low-severity items.
  • Teams: enter the workflow URL for a Microsoft Teams channel (a Teams workflow or connector URL). This posts notifications directly into a Teams channel and is the recommended option for real-time operational alerts.

    • Ask your Teams administrator if you need help creating a channel workflow or determining the correct URL format.

Configuration and tips

  • Set notification preferences per policy severity so only important events generate immediate alerts while lower-severity findings are batched or routed to quieter channels.
  • Test each configured endpoint after saving to confirm delivery.
  • Use mailing lists and dedicated channels to ensure alerts reach on-call staff and to avoid overloading personal inboxes.

Future channels

  • Additional notification integrations (Slack, webhooks with custom payloads, SMS, etc.) are under development and will be added in future releases.

12.2.3 - Billing Details

Billing Details

Billing information form

  • The Billing info section is a form where you enter your company’s billing details used to generate invoices. Typical fields include company name, billing address, VAT number / fiscal code, contact name, email, and phone.
  • Changing the country in the form will show or hide additional country-specific fields. For example, selecting Italy exposes “Certified Email Address (PEC)” and “SDI Code” fields required on Italian invoices.
  • The information you provide here is printed on invoices. Keep it accurate to avoid delays or invoice re-issues.

Purchase prerequisites and VAT rules

  • You must complete the billing form before purchasing licenses; the system will block checkout until required billing fields are provided.
  • EU reverse-charge VAT: if your company is in an EU country, reverse-charge treatment applies only when you supply the required identifiers (for example VAT Number and/or national fiscal code as applicable). If you do not provide the necessary information, VAT will be applied to the license price and included on the invoice.

Sanctions and restricted sales

  • We cannot sell licenses to entities or individuals located in countries sanctioned by the European Union or the Italian Republic. This includes, but is not limited to, Russia, Iran, and Afghanistan.
  • If your billing address or company registration is in a sanctioned jurisdiction, purchases will be declined.

Tips and support

  • Use official company identifiers (VAT or fiscal codes) to ensure correct tax treatment and avoid invoice corrections.
  • If you need assistance completing country-specific fields, contact your finance team or open a support ticket via the Help menu.

12.2.4 - Licenses

Licenses

The Licenses page lists all available licenses and provides actions to renew or manage them.

License list

  • Columns shown for each license:
    • Valid to: the license expiry date.
    • Name: a friendly name or a GUID that identifies the license.
    • Assigned to: the SQL instance the license is assigned to; blank if not assigned.
  • Each row has a checkbox so you can select one or more licenses for bulk actions.

Renewal and payment

  • Use the “Renew # licenses” button to proceed to the payment page and renew the selected licenses.
  • Payment is processed by Stripe; QMonitor does not store payment methods or credit card numbers.

Actions and notes

  • Select multiple rows to renew licenses in bulk.
  • Assigned/unassigned status is shown in the table; use the button at the top of the list to assign licenses.

Assign vacant licenses

  • When you have multiple licenses available, use the “Assign licenses” button to automatically allocate vacant licenses to instances that do not currently have a license. The assignment process selects unassigned licenses and binds them to unlicensed instances until either all selected licenses are used or all instances are licensed.

12.2.5 - Payment History

Payment History

The Payment History page lists all payments and invoices for the organization. Use this view to review past charges, download invoices, and reconcile billing.

Payments table

  • Columns:
    • Date: payment or invoice date.
    • Paid: amount actually paid (currency).
    • Total: the invoice total as shown on the invoice document.
    • Invoice: link to the invoice document (PDF).

Actions and filters

  • Click an Invoice link to download or open the invoice PDF for accounting.

Notes

  • Payment method and transaction details are recorded with each entry.
  • For billing questions or disputes, contact billing support.

12.2.6 - Members

Members

This page lets organization owners manage access and invitations.

Access and invitations

  • Initially only organization owners can access the Manage Organization pages.
  • Owners can invite new users with the “Invite” button at the top-right.
  • The invite dialog lets you enter an email address or copy an invitation code to share directly. The dialog shows the code immediately for manual distribution.
  • The invited user receives an email with the invitation link and code. If the user is not registered, they must sign up first and then redeem the invitation code or follow the link to join the organization.

Members list

  • The members table shows current organization users with simple controls:
    • Email: the member’s email address.
    • Role: a dropdown to change the member role (Owner or User).
    • Delete: remove the member from the organization.
  • Role changes and removals may require confirmation.

Behavior and tips

  • Use Owner role sparingly; owners can manage billing, policies, and members.
  • Prefer inviting users to a role of “User” and elevate to Owner only when necessary.
  • Invitations expire after a limited time; resend if a user reports an expired invite.

12.2.7 - Logo and Branding

Logo and Branding

Organization logo

  • Current logo: the page displays the organization’s current logo for both themes so you can verify what users see.
  • Theme-specific logos: you can upload a separate logo for Light and Dark themes to ensure good contrast and legibility in both modes.
  • Upload controls: choose a file for the Light logo and a file for the Dark logo. Supported formats: PNG and SVG. Recommended dimensions: provide a high-resolution square or horizontal asset; SVGs scale crisply for all sizes.

Preview and reset

  • Preview: the preview area shows how the selected logos and accent color will appear in the UI before you save changes.
  • Reset: click Reset to revert the logo for the current theme back to the product default. Reset does not affect the other theme’s logo unless you reset it as well.

13 - Jobs

Schedule Tasks with QMonitor

This page lists scheduled jobs and provides controls to create, filter, inspect, and clean up job records.

Top controls

  • Create new job: opens the New Job page to define a job and its schedule.
  • Filters:
    • Job status: All, Not run, Failed, Succeeded, Executed — use this to focus on the executions you care about.
    • Instance: restrict the list to a single SQL instance.
    • Job type: Manual jobs or Autofix jobs (automatic remediation).
  • Delete completed one-time jobs: removes completed jobs that had a one-time schedule to tidy the list and reclaim storage.

Jobs list

  • The list shows jobs matching the selected filters. Each row contains:
    • Job status icon: visual state of the latest execution
      • Green checkmark = completed successfully
      • Blue spinner = running/in progress
      • Red X = failed/error
    • Job type: indicates the action (Execute query or Execute command).
    • Name: job name; next to it (in smaller text) the schedule description (for example “at 02:00” or “On 2025-11-01 at 09:00”).
    • Delete: button to remove the job definition and its history.
    • Show log: opens the execution log for the job to inspect output, errors, and step details.

Row actions and navigation

  • Click the job name to open the Job Detail page to edit or view the job.
  • Use Show log to view recent runs, stderr/stdout, and execution status.
  • Use Delete to remove obsolete jobs.

Tips

  • Filter by Failed to quickly find jobs that need attention.
  • Use the Instance filter to hand off job issues to the responsible DBA.
  • Regularly remove one-time completed jobs to keep the job list manageable.

13.1 - Job Detail

Define and edit scheduled jobs

This page lets you review, create, and edit a scheduled job and its settings. Jobs can execute a T-SQL query against one or more instances or run an arbitrary command on one or more agents.

Job properties

  • Name: descriptive job name.
  • Type: choose the job type:
    • Execute Query — runs the specified SQL text against selected SQL instances.
    • Execute Command — runs the specified command on one or more agents.

Execute Query controls

  • Instance selection:
    • First dropdown: choose selection mode (Instance, Instance Group, or Tag).
    • Second dropdown: pick one Instance, Group, or Tag depending on the selection mode. Use groups or tags to target many instances.
  • Max Concurrent Instances: maximum parallel executions:
    • 1 = run sequentially (no parallelism).
    • 0 = run against all selected instances in parallel.
    • Choose a limit to avoid saturating CPU, I/O, or network (for example, limit concurrent backups to 5 to avoid disk flooding).
  • Retries: number of retry attempts on failure.
  • Retry delay (s): delay, in seconds, between retry attempts.

Execute Command controls

  • Agent target: choose All Agents or select a specific agent to run the command. Commands are executed by the agent process on the host running the selected agent(s).

Common controls (both job types)

  • Enabled: checkbox that enables or disables the job without deleting it.
  • Command: text area containing the T-SQL script or shell command to execute.
    • For Execute Query jobs, SQL text runs against the target instances.
    • For Execute Command jobs, the text is executed by the agent on the host.
  • Schedule: click “Edit Schedule” to expand schedule settings:
    • Type: Recurring or One-time.
    • Cron Expression: enter a cron expression to define recurring schedules (see the examples after this list).
      • Help: the Help button opens documentation that assists in crafting valid cron expressions.
    • End Date: optional date when the schedule stops running.
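
A few common recurring schedules, assuming the scheduler accepts standard five-field cron expressions (minute, hour, day of month, month, day of week):

    0 2 * * *       # every day at 02:00
    */15 * * * *    # every 15 minutes
    0 9 * * 1       # every Monday at 09:00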

Actions (bottom toolbar)

  • Save: persist job definition and schedule.
  • Run Now: immediately queue the job for execution (bypasses the schedule).
  • Job Logs: open the job logs page to inspect past runs and execution output.
  • Cancel: discard unsaved changes and return to the Jobs list.

Notes and tips

  • Use small, targeted schedules during testing and run “Run Now” to validate behavior before enabling wide production schedules.
  • Restrict Max Concurrent Instances for heavy operations (backups, restores, large ETL) to prevent resource contention.
  • Commands executed by agents require appropriate agent permissions on the host.
  • Sysadmin acknowledgement
    • If an agent connects to a SQL instance using sysadmin credentials, the job will only run if the instance definition includes the “Acknowledge you are running jobs as a sysadmin” checkbox. This guard prevents accidental execution of high-privilege operations on instances where explicit consent has not been given.
    • Toggle the acknowledgement on the Instance Details page. Jobs that require sysadmin rights will display a warning if the acknowledgement is not enabled for the target instance.

13.2 - Job History

History of executed jobs

The Job History page displays past executions of scheduled jobs and their status so you can inspect runs, troubleshoot failures, and audit activity.

History table

  • Columns:
    • Date and time: when the execution started (or completed).
    • Status: running, succeeded, or error (iconized for quick scanning).
    • Agent name: the agent that performed the execution.
  • The list is ordered by date (newest first) and supports infinite scroll.

Row details

  • Click a row to expand detailed execution information:
    • A chronological list of log entries and messages produced during the run.
    • Standard output and standard error snippets (when available).
    • Execution duration, exit code, and retry attempts.
  • Use expanded details to diagnose failures, identify error messages, and locate the exact step that failed.