Compare commits

..

4 Commits

Author SHA1 Message Date
github-actions[bot]
f218058c89 chore: update versions (#2869)
This PR was opened by the [Changesets
release](https://github.com/changesets/action) GitHub action. When
you're ready to do a release, you can merge this and the packages will
be published to npm automatically. If you're not ready to do a release
yet, that's fine, whenever you add more changesets to main, this PR will
be updated.


# Releases
## @nhost/dashboard@1.28.1

### Patch Changes

-   9735fa2: chore: remove broken link

## @nhost/docs@2.17.1

### Patch Changes

-   db2f44d: fix: update rate-limit to reflect reality
- dda0c67: chore: update metrics documentation with managed configuration

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2024-09-18 11:40:25 +01:00
David Barroso
dda0c67fa4 chore (docs): update metrics documentation with managed configuration (#2871)
### **PR Type**
Enhancement, Documentation


___

### **Description**
- Enhanced metrics documentation with detailed information on Grafana
configuration, contact points, SMTP settings, and alerting.
- Added new configuration files for Grafana, including setup for
datasources, dashboards, contact points, and alerting rules.
- Updated existing dashboard configurations to use the "nhost"
datasource and improve legend formatting.
- Introduced a setup script to automate Grafana configuration
generation.
- Restructured documentation navigation for better organization of
metrics-related content.
- Added README with instructions for contributing new Grafana
dashboards.
- Implemented comprehensive alerting rules for various system metrics
and error conditions.


___



### **Changes walkthrough** 📝
<table><thead><tr><th></th><th align="left">Relevant
files</th></tr></thead><tbody><tr><td><strong>Configuration
changes</strong></td><td><details><summary>11 files</summary><table>
<tr>
  <td>
    <details>
<summary><strong>setup_config.sh</strong><dd><code>Add Grafana
configuration setup script</code>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp;
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;
&nbsp; &nbsp; &nbsp; &nbsp; </dd></summary>
<hr>

observability/setup_config.sh

<li>New script to set up Grafana configuration<br> <li> Creates
datasources directory<br> <li> Retrieves token and app ID<br> <li>
Generates datasources.yaml file<br>


</details>


  </td>
<td><a
href="https://github.com/nhost/nhost/pull/2871/files#diff-020b65994838bf8f896b973c08c3d1f32fb26df56981eee8feec396adddc0fa6">+11/-0</a>&nbsp;
&nbsp; </td>

</tr>                    

<tr>
  <td>
    <details>
<summary><strong>contact_points.yaml</strong><dd><code>Add Grafana
contact points configuration</code>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp;
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;
&nbsp; &nbsp; </dd></summary>
<hr>

observability/contact_points.yaml

<li>New file for configuring Grafana contact points<br> <li> Includes
settings for email, PagerDuty, Discord, Slack, and webhook
<br>notifications<br>


</details>


  </td>
<td><a
href="https://github.com/nhost/nhost/pull/2871/files#diff-9b849dd13ecd160bb71d0dbd99677bbc8cd455950a49d2a2c5e0faa12d84de62">+58/-0</a>&nbsp;
&nbsp; </td>

</tr>                    

<tr>
  <td>
    <details>

<summary><strong>dashboard_functions_metrics.json</strong><dd><code>Update
Functions dashboard configuration</code>&nbsp; &nbsp; &nbsp; &nbsp;
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;
&nbsp; &nbsp; &nbsp; </dd></summary>
<hr>

observability/dashboard_functions_metrics.json

<li>Updated datasource UID from "prometheus" to "nhost"<br> <li>
Modified legend format to use print statements<br>


</details>


  </td>
<td><a
href="https://github.com/nhost/nhost/pull/2871/files#diff-43ed0168e8291fdeb852ae00ee52f832e6e54b38d7957ad59fb3f3d2bcfa9bb0">+41/-39</a>&nbsp;
</td>

</tr>                    

<tr>
  <td>
    <details>
<summary><strong>dashboard_graphql.json</strong><dd><code>Update GraphQL
dashboard configuration</code>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;
&nbsp; &nbsp; &nbsp; </dd></summary>
<hr>

observability/dashboard_graphql.json

<li>Updated datasource UID from "prometheus" to "nhost"<br> <li>
Modified legend format to use print statements<br> <li> Added "nhost"
tag to dashboard<br>


</details>


  </td>
<td><a
href="https://github.com/nhost/nhost/pull/2871/files#diff-da87c05a307528ead905fc17fb6d75eb31b44769d06714c66233f489cbdbb1f2">+24/-22</a>&nbsp;
</td>

</tr>                    

<tr>
  <td>
    <details>

<summary><strong>dashboard_ingress_metrics.json</strong><dd><code>Update
Ingress Metrics dashboard configuration</code>&nbsp; &nbsp; &nbsp;
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; </dd></summary>
<hr>

observability/dashboard_ingress_metrics.json

<li>Removed __inputs, __elements, and __requires sections<br> <li>
Updated datasource UID from "prometheus" to "nhost"<br> <li> Modified
legend format to use print statements<br> <li> Added "nhost" tag to
dashboard<br>


</details>


  </td>
<td><a
href="https://github.com/nhost/nhost/pull/2871/files#diff-de5c4d7cc3aa858822d9243161b50924d04e290013eb4a738f19bc07a79b1ed7">+25/-54</a>&nbsp;
</td>

</tr>                    

<tr>
  <td>
    <details>

<summary><strong>dashboard_project_metrics.json</strong><dd><code>Update
Project Metrics dashboard configuration</code>&nbsp; &nbsp; &nbsp;
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; </dd></summary>
<hr>

observability/dashboard_project_metrics.json

<li>Removed __inputs, __elements, and __requires sections<br> <li>
Updated datasource UID from "${DS_PROMETHEUS}" to "nhost"<br> <li>
Modified legend format to use print statements<br> <li> Updated
schemaVersion and removed templating list<br>


</details>


  </td>
<td><a
href="https://github.com/nhost/nhost/pull/2871/files#diff-18cfde5980fad509ab3a14485f1ce3e7f89540854f30a9d932a630c7003065f6">+98/-157</a></td>

</tr>                    

<tr>
  <td>
    <details>
<summary><strong>dashboards_providers.yaml</strong><dd><code>Add Grafana
dashboard providers configuration</code>&nbsp; &nbsp; &nbsp; &nbsp;
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; </dd></summary>
<hr>

observability/dashboards_providers.yaml

<li>New file for configuring Grafana dashboard providers<br> <li> Sets
up file-based dashboard provisioning<br>


</details>


  </td>
<td><a
href="https://github.com/nhost/nhost/pull/2871/files#diff-c6d162b6c4666e2ee121c37b0d5cffc1b760ed68446e65d5556f083e241765b9">+10/-0</a>&nbsp;
&nbsp; </td>

</tr>                    

<tr>
  <td>
    <details>
<summary><strong>datasources.yaml.tmpl</strong><dd><code>Add Grafana
datasource configuration template</code>&nbsp; &nbsp; &nbsp; &nbsp;
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; </dd></summary>
<hr>

observability/datasources.yaml.tmpl

<li>New template file for Grafana datasource configuration<br> <li> Sets
up Prometheus datasource with custom query parameters and
<br>authorization<br>


</details>


  </td>
<td><a
href="https://github.com/nhost/nhost/pull/2871/files#diff-883ff1a7b6c26d41604cfb7fbe7444c568f9379d408a140f0e88047b1768468e">+17/-0</a>&nbsp;
&nbsp; </td>

</tr>                    

<tr>
  <td>
    <details>
<summary><strong>grafana.ini</strong><dd><code>Add Grafana main
configuration file</code>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; </dd></summary>
<hr>

observability/grafana.ini

<li>New configuration file for Grafana<br> <li> Includes settings for
analytics, logging, paths, and server<br> <li> Conditional SMTP
configuration<br>


</details>


  </td>
<td><a
href="https://github.com/nhost/nhost/pull/2871/files#diff-69effef5d34dd2b15f66a1ff7eb524de80e14e82b6ffd63ce3a9cf84fcfa2128">+23/-0</a>&nbsp;
&nbsp; </td>

</tr>                    

<tr>
  <td>
    <details>
<summary><strong>notification_policies.yaml</strong><dd><code>Add
Grafana notification policies configuration</code>&nbsp; &nbsp; &nbsp;
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; </dd></summary>
<hr>

observability/notification_policies.yaml

<li>New file for configuring Grafana notification policies<br> <li> Sets
up a default policy for the "Nhost Managed Contacts" receiver<br>


</details>


  </td>
<td><a
href="https://github.com/nhost/nhost/pull/2871/files#diff-15f73217844e330f8cbf0b98becf9ba1712ded93168ec52ffd10ad7af58326e9">+7/-0</a>&nbsp;
&nbsp; &nbsp; </td>

</tr>                    

<tr>
  <td>
    <details>
<summary><strong>rules_nhost.yaml</strong><dd><code>Add Grafana alerting
rules configuration</code>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;
&nbsp; </dd></summary>
<hr>

observability/rules_nhost.yaml

<li>New file for configuring Grafana alerting rules<br> <li> Includes
rules for high CPU usage, low disk space, low memory, OOM <br>kills, and
high error rates<br>


</details>


  </td>
<td><a
href="https://github.com/nhost/nhost/pull/2871/files#diff-2be0d3f7ec2e1a61cf05bbac1c46b6e14a822af4797d27d2c0caaa5205de88ec">+369/-0</a>&nbsp;
</td>

</tr>                    

</table></details></td></tr><tr><td><strong>Documentation</strong></td><td><details><summary>3
files</summary><table>
<tr>
  <td>
    <details>
<summary><strong>mint.json</strong><dd><code>Restructure documentation
navigation</code>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;
&nbsp; &nbsp; &nbsp; </dd></summary>
<hr>

docs/mint.json

<li>Removed nested "Monitoring" group<br> <li> Moved "platform/metrics"
to main "Platform" group<br>


</details>


  </td>
<td><a
href="https://github.com/nhost/nhost/pull/2871/files#diff-c91a604899dfef4b2494c317f4fd39a7f22b79986095f580399347293d534deb">+1/-5</a>&nbsp;
&nbsp; &nbsp; </td>

</tr>                    

<tr>
  <td>
    <details>
<summary><strong>metrics.mdx</strong><dd><code>Enhance metrics
documentation with configuration details</code>&nbsp; </dd></summary>
<hr>

docs/platform/metrics.mdx

<li>Added info about Pro/Team/Enterprise feature<br> <li> Expanded
sections on accessing and configuring Grafana<br> <li> Added details
about contact points, SMTP, and alerting configuration<br> <li> Included
information about advanced configuration options<br>


</details>


  </td>
<td><a
href="https://github.com/nhost/nhost/pull/2871/files#diff-433c57c7c7811809819b3683a23368324a93a9eac7a4ab121b54d16414452f6d">+124/-6</a>&nbsp;
</td>

</tr>                    

<tr>
  <td>
    <details>
<summary><strong>README.md</strong><dd><code>Add README for Grafana
dashboard contributions</code>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; </dd></summary>
<hr>

observability/grafana/README.md

<li>New README file with instructions for contributing dashboards<br>
<li> Outlines steps to export and save dashboard files<br>


</details>


  </td>
<td><a
href="https://github.com/nhost/nhost/pull/2871/files#diff-83ce6f1e076f43acbdcb8cfeac5c2caa0a1d87116c25c1cb063ae0b10b7b6885">+9/-1</a>&nbsp;
&nbsp; &nbsp; </td>

</tr>                    
</table></details></td></tr></tbody></table>

___

> 💡 **PR-Agent usage**:
>Comment `/help` on the PR to get a list of all available PR-Agent tools
and their descriptions
2024-09-17 09:43:44 +02:00
David Barroso
db2f44d7c0 fix (docs): update rate-limit to reflect reality (#2870)
### **PR Type**
Documentation


___

### **Description**
- Updated the rate limit for email sending endpoints in the
documentation
- Changed the limit from 50 per hour to 10 per hour for projects without
custom SMTP settings
- This change reflects the actual rate limit implemented in the system
- No other changes were made to the rate limits table or surrounding
text


___



### **Changes walkthrough** 📝
<table><thead><tr><th></th><th align="left">Relevant
files</th></tr></thead><tbody><tr><td><strong>Documentation</strong></td><td><table>
<tr>
  <td>
    <details>
<summary><strong>rate-limits.mdx</strong><dd><code>Update email rate
limit in documentation</code>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;
&nbsp; </dd></summary>
<hr>

docs/platform/rate-limits.mdx

<li>Updated the rate limit for email sending endpoints from 50/hour to
<br>10/hour for projects without custom SMTP settings<br>


</details>


  </td>
<td><a
href="https://github.com/nhost/nhost/pull/2870/files#diff-d6c7ab75a347c1217107fdcf841312df268708bc7d319f528ea67c7280f00284">+1/-1</a>&nbsp;
&nbsp; &nbsp; </td>

</tr>                    
</table></td></tr></tbody></table>

___


---------

Co-authored-by: Hassan Ben Jobrane <hsanbenjobrane@gmail.com>
2024-09-13 12:19:08 +02:00
David Barroso
9735fa238b chore (dashboard): remove broken link (#2868)
### **PR Type**
Enhancement, Documentation


___

### **Description**
- Removed a broken "Learn More" link from the DataBrowserSidebar
component in the dashboard
- Added a changeset file to document the removal of the broken link
- Introduced a new GitHub Actions workflow for AI-powered pull request
reviews
- The new workflow uses the PR Agent action with specific configurations
for OpenAI and Anthropic models
- Updated the project structure to improve documentation and automate
code review processes


___



### **Changes walkthrough** 📝
<table><thead><tr><th></th><th align="left">Relevant
files</th></tr></thead><tbody><tr><td><strong>Enhancement</strong></td><td><table>
<tr>
  <td>
    <details>
<summary><strong>DataBrowserSidebar.tsx</strong><dd><code>Remove "Learn
More" link from DataBrowserSidebar</code>&nbsp; &nbsp; &nbsp; &nbsp;
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; </dd></summary>
<hr>


dashboard/src/features/database/dataGrid/components/DataBrowserSidebar/DataBrowserSidebar.tsx

<li>Removed a "Learn More" link with an arrow icon<br> <li> The link was
pointing to GitHub integration documentation<br>


</details>


  </td>
<td><a
href="https://github.com/nhost/nhost/pull/2868/files#diff-6c0c7b86959eb51f0ef884074e8a72725ee505a5759ca4a95126e96f26062e3b">+0/-9</a>&nbsp;
&nbsp; &nbsp; </td>

</tr>                    

<tr>
  <td>
    <details>
<summary><strong>gen_ai_review.yaml</strong><dd><code>Add AI-powered PR
review workflow</code>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; </dd></summary>
<hr>

.github/workflows/gen_ai_review.yaml

<li>Added a new GitHub Actions workflow for AI-powered PR reviews<br>
<li> Configures the PR Agent action with specific settings and
secrets<br> <li> Sets up triggers for pull request events and issue
comments<br>


</details>


  </td>
<td><a
href="https://github.com/nhost/nhost/pull/2868/files#diff-d1e4c772e0acb5ce4891df2dd94ba58ffaf6393e8f75493ec7e10cbce1c4992c">+28/-0</a>&nbsp;
&nbsp; </td>

</tr>                    
</table></td></tr><tr><td><strong>Documentation</strong></td><td><table>
<tr>
  <td>
    <details>
<summary><strong>tricky-colts-beg.md</strong><dd><code>Add changeset for
broken link removal</code>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;
&nbsp; &nbsp; &nbsp; &nbsp; </dd></summary>
<hr>

.changeset/tricky-colts-beg.md

<li>Added a new changeset file for @nhost/dashboard<br> <li> Describes
the change as removing a broken link<br>


</details>


  </td>
<td><a
href="https://github.com/nhost/nhost/pull/2868/files#diff-6564a7547695ab3d9be88cc4977a814f3123f60b2bb10effeb8904997710a950">+5/-0</a>&nbsp;
&nbsp; &nbsp; </td>

</tr>                    
</table></td></tr></tbody></table>

___


---------

Co-authored-by: Hassan Ben Jobrane <hsanbenjobrane@gmail.com>
2024-09-11 19:09:10 +02:00
28 changed files with 955 additions and 381 deletions

28
.github/workflows/gen_ai_review.yaml vendored Normal file
View File

@@ -0,0 +1,28 @@
---
name: "gen: AI review"
on:
  pull_request:
    types: [opened, reopened, ready_for_review]
  issue_comment:
jobs:
  pr_agent_job:
    if: ${{ github.event.sender.type != 'Bot' }}
    runs-on: ubuntu-latest
    timeout-minutes: 10
    permissions:
      issues: write
      pull-requests: write
      contents: write
    name: Run pr agent on every pull request, respond to user comments
    steps:
      - name: PR Agent action step
        id: pragent
        uses: Codium-ai/pr-agent@v0.24
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          OPENAI_KEY: ${{ secrets.OPENAI_API_KEY }}
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
          config.max_model_tokens: 100000
          config.model: "anthropic/claude-3-5-sonnet-20240620"
          config.model_turbo: "anthropic/claude-3-5-sonnet-20240620"
          ignore.glob: "['pnpm-lock.yaml','**/pnpm-lock.yaml']"

View File

@@ -2,5 +2,5 @@
// $schema provides code completion hints to IDEs.
"$schema": "https://github.com/IBM/audit-ci/raw/main/docs/schema.json",
"moderate": true,
"allowlist": ["vue-template-compiler", "micromatch"]
"allowlist": ["vue-template-compiler", "micromatch", "path-to-regexp"]
}

View File

@@ -1,5 +1,11 @@
# @nhost/dashboard
## 1.28.1
### Patch Changes
- 9735fa2: chore: remove broken link
## 1.28.0
### Minor Changes

View File

@@ -1,6 +1,6 @@
{
"name": "@nhost/dashboard",
"version": "1.28.0",
"version": "1.28.1",
"private": true,
"scripts": {
"preinstall": "npx only-allow pnpm",

View File

@@ -12,7 +12,6 @@ import { Chip } from '@/components/ui/v2/Chip';
import { Divider } from '@/components/ui/v2/Divider';
import { Dropdown } from '@/components/ui/v2/Dropdown';
import { IconButton } from '@/components/ui/v2/IconButton';
import { ArrowRightIcon } from '@/components/ui/v2/icons/ArrowRightIcon';
import { DotsHorizontalIcon } from '@/components/ui/v2/icons/DotsHorizontalIcon';
import { LockIcon } from '@/components/ui/v2/icons/LockIcon';
import { PencilIcon } from '@/components/ui/v2/icons/PencilIcon';
@@ -20,7 +19,6 @@ import { PlusIcon } from '@/components/ui/v2/icons/PlusIcon';
import { TerminalIcon } from '@/components/ui/v2/icons/TerminalIcon';
import { TrashIcon } from '@/components/ui/v2/icons/TrashIcon';
import { UsersIcon } from '@/components/ui/v2/icons/UsersIcon';
import { Link } from '@/components/ui/v2/Link';
import { List } from '@/components/ui/v2/List';
import { ListItem } from '@/components/ui/v2/ListItem';
import { Option } from '@/components/ui/v2/Option';
@@ -312,15 +310,6 @@ function DataBrowserSidebarContent({
Your project is connected to GitHub. Please use the CLI to make
schema changes.
</Text>
<Link
href="https://docs.nhost.io/platform/github-integration"
target="_blank"
rel="noopener noreferrer"
underline="hover"
className="grid items-center justify-start grid-flow-col gap-1"
>
Learn More <ArrowRightIcon />
</Link>
</Box>
)}
{!isSelectedSchemaLocked && (

View File

@@ -1,5 +1,12 @@
# @nhost/docs
## 2.17.1
### Patch Changes
- db2f44d: fix: update rate-limit to reflect reality
- dda0c67: chore: update metrics documentation with managed configuration
## 2.17.0
### Minor Changes

Binary file not shown (new image, 532 KiB)

Binary file not shown (new image, 392 KiB)

Binary file not shown (new image, 255 KiB)

View File

@@ -76,11 +76,7 @@
"platform/subdomain",
"platform/compute-resources",
"platform/service-replicas",
{
"group": "Monitoring",
"icon": "monitor-waveform",
"pages": ["platform/metrics"]
},
"platform/metrics",
"platform/environment-variables",
"platform/secrets",
"platform/deployments",

View File

@@ -1,6 +1,6 @@
{
"name": "@nhost/docs",
"version": "2.17.0",
"version": "2.17.1",
"private": true,
"scripts": {
"start": "mintlify dev"

View File

@@ -4,6 +4,10 @@ description: 'Grafana Instance configured and tailored to your project'
icon: monitor-waveform
---
<Info>
This is a Pro/Team/Enterprise feature and is not available on Starter projects.
</Info>
Metrics gives you insights such as response times, resource usage, and error rates to help you assess the **performance** and **health** of your services.
Metrics helps you analyze the performance of your infrastructure, while identifying bottlenecks and optimizing your applications.
@@ -23,16 +27,130 @@ Your Grafana instance comes pre-defined with dashboards that cover backend servi
![Grafana](/images/platform/metrics/grafana.png)
### Nhost Dashboard
## Accessing Grafana
You can find the link to Grafana in your project's dashboard, under **Metrics**.
![Project Metrics](/images/platform/metrics/nhost-dashboard-metrics.png)
## Configuring Grafana
Grafana comes pre-configured with a datasource with your project's metrics plus a few useful dashboards to observe your projects. In addition, you can enable alerting by configuring one or more contact points and enabling alerts in your configuration file.
<Info>
The configuration below is open source and can be found [here](https://github.com/nhost/nhost/tree/main/observability/grafana). If you want to see improvements, more rules, better dashboards, more options, etc., don't hesitate to contribute them or open an issue.
</Info>
### Configure contact points
Contact points in Grafana are lists of integrations that send notifications to specific channels or services when alerts are triggered. Supported contact points are:
- email
- pagerduty
- discord
- slack
- webhooks
To configure them, include one or more sections in your configuration file:
```toml
[observability.grafana.contacts]
emails = ['engineering@acme.com']
[[observability.grafana.contacts.pagerduty]]
integrationKey = 'integration-key'
severity = 'critical'
class = 'infra'
component = 'backend'
group = 'group'
[[observability.grafana.contacts.discord]]
url = 'https://discord.com/api/webhooks/...'
avatarUrl = 'https://discord.com/api/avatar/...'
[[observability.grafana.contacts.slack]]
recipient = 'recipient'
token = 'token'
username = 'username'
iconEmoji = 'danger'
iconURL = 'https://...'
mentionUsers = ['user1', 'user2']
mentionGroups = ['group1', 'group2']
mentionChannel = 'channel'
url = 'https://slack.com/api/webhooks/...'
endpointURL = 'https://slack.com/api/endpoint/...'
[[observability.grafana.contacts.webhook]]
url = 'https://webhook.example.com'
httpMethod = 'POST'
username = 'user'
password = 'password'
authorizationScheme = 'Bearer'
authorizationCredentials = 'token'
maxAlerts = 10
```
Once you have added the contact points to your configuration and deployed it, you should see them in your Grafana dashboard under "Settings" -> "Contact points" -> "Nhost Managed Contacts":
![contact points](/images/platform/metrics/contact_points.png)
If you click "View", you will find a test button you can use to verify that your contacts are configured correctly.
### SMTP
If you plan to send emails as part of your alerting, you also need to configure SMTP settings. To do so, add the following to your configuration:
```toml
[observability.grafana.smtp]
host = 'localhost'
port = 25
sender = 'admin@localhost'
user = 'smtpUser'
password = 'smtpPassword'
```
### Alerting
To enable alerting, simply add the following to your configuration:
```toml
[observability.grafana.alerting]
enabled = true
```
This will enable the following rules, which you can find in your Grafana dashboard under "Alert rules":
![alert rules](/images/platform/metrics/alert_rules.png)
1. **High CPU usage**
- Trigger: CPU usage > 75%
- Duration: Sustained for 5-10 minutes
2. **Low disk space**
- Trigger: Disk utilization > 75%
- Duration: Persistent for 5-10 minutes
3. **Low free memory**
- Trigger: Memory usage > 75%
- Duration: Continuous for 5-10 minutes
4. **Service restarted due to lack of memory**
- Trigger: Any service restart due to memory exhaustion
- Duration: Immediate upon occurrence
5. **High request error rate**
- Trigger: Request error rate > 25%
- Duration: Maintained for 5-10 minutes
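These rules are provisioned from `observability/rules_nhost.yaml`. As a rough sketch of the shape such a provisioned rule takes (the group name, UID, expression, and threshold below are illustrative assumptions, not the exact rule Nhost ships):

```yaml
apiVersion: 1
groups:
  - orgId: 1
    name: nhost-alerts        # illustrative group name
    folder: Nhost
    interval: 1m
    rules:
      - uid: high_cpu_usage   # illustrative UID
        title: High CPU usage
        condition: A
        for: 5m               # must hold this long before firing
        data:
          - refId: A
            relativeTimeRange:
              from: 600
              to: 0
            datasourceUid: nhost
            model:
              refId: A
              # illustrative expression; the shipped rule may differ
              expr: avg(rate(container_cpu_usage_seconds_total[5m])) > 0.75
        labels:
          severity: warning
```

The full, authoritative rules file is part of the open-source configuration linked earlier in this page.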
Once enabled, these rules will notify your contact points whenever their conditions are met. For instance, here is an email sent due to a high error rate:
![email_notification](/images/platform/metrics/email_notification.png)
## Advanced configuration
In addition, Team and Enterprise projects can make any changes they want. For instance, you can add users, configure an OAuth provider for user authentication, add datasources, configure your own alerts, and more.
## Beta
Metrics is in beta; its functionality and pricing might change.
### Limitations
- Dashboards can be updated or created, but they won't persist after a deployment.

View File

@@ -45,7 +45,7 @@ Given that not all endpoints are equally sensitive, Auth supports more complex r
| Endpoints | Key | Limits | Description | Minimum version |
| ----------------------|-----|--------|-------------|-----------------|
| Any that sends emails<sup>1</sup> | Global | 50 / hour | Not configurable. This limit applies to any project without custom SMTP settings | 0.33.0 |
| Any that sends emails<sup>1</sup> | Global | 10 / hour | Not configurable. This limit applies to any project without custom SMTP settings | 0.33.0 |
| Any that sends emails<sup>1</sup> | Client IP | 10 / hour | Configurable. This limit applies to any project with custom SMTP settings and is configurable | 0.33.0 |
| Any that sends SMS<sup>2</sup> | Client IP | 10 / hour | Configurable. | 0.33.0 |
| Any endpoint that an attacker may try to brute-force. This includes sign-in and verify endpoints<sup>3</sup> | Client IP | 10 / 5 minutes | Configurable | 0.33.0 |

View File

@@ -10,7 +10,7 @@
"license": "ISC",
"devDependencies": {
"@types/express": "^4.17.21",
"express": "^4.19.2",
"express": "^4.20.0",
"typescript": "^4.9.5"
},
"dependencies": {

View File

@@ -0,0 +1,58 @@
apiVersion: 1
contactPoints:
  - orgId: 1
    name: Nhost Managed Contacts
    receivers:
    {{ if .Contacts.Emails }}
      - uid: 1
        type: email
        settings:
          addresses: {{ join .Contacts.Emails "," }}
          singleEmail: false
          sendReminder: true
    {{ end }}
    {{- range $i, $c := .Contacts.Pagerduty }}
      - uid: {{ add 100 $i }}
        type: pagerduty
        settings:
          integrationKey: {{ $c.IntegrationKey }}
          severity: {{ $c.Severity }}
          class: {{ $c.Class }}
          component: {{ $c.Component }}
          group: {{ $c.Group }}
    {{- end }}
    {{- range $i, $c := .Contacts.Discord }}
      - uid: {{ add 200 $i }}
        type: discord
        settings:
          url: {{ $c.URL }}
          avatar_url: {{ $c.AvatarURL }}
          use_discord_username: true
    {{- end }}
    {{- range $i, $c := .Contacts.Slack }}
      - uid: {{ add 300 $i }}
        type: slack
        settings:
          recipient: {{ $c.Recipient }}
          token: {{ $c.Token }}
          username: {{ $c.Username }}
          icon_emoji: {{ $c.IconEmoji }}
          icon_url: {{ $c.IconURL }}
          mentionUsers: {{ join $c.MentionUsers "," }}
          mentionGroups: {{ join $c.MentionGroups "," }}
          mentionChannel: {{ $c.MentionChannel }}
          url: {{ $c.URL }}
          endpointUrl: {{ $c.EndpointURL }}
    {{- end }}
    {{- range $i, $c := .Contacts.Webhook }}
      - uid: {{ add 400 $i }}
        type: webhook
        settings:
          url: {{ $c.URL }}
          httpMethod: {{ $c.HTTPMethod }}
          username: {{ $c.Username }}
          password: {{ $c.Password }}
          authorization_scheme: {{ $c.AuthorizationScheme }}
          authorization_credentials: {{ $c.AuthorizationCredentials }}
          maxAlerts: '{{ $c.MaxAlerts }}'
    {{- end }}

View File

@@ -44,7 +44,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"fieldConfig": {
"defaults": {
@@ -91,7 +91,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"editorMode": "code",
"exemplar": false,
@@ -108,7 +108,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"fieldConfig": {
"defaults": {
@@ -155,7 +155,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"editorMode": "code",
"exemplar": false,
@@ -172,7 +172,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"fieldConfig": {
"defaults": {
@@ -219,7 +219,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"editorMode": "code",
"exemplar": false,
@@ -249,7 +249,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"description": "Number of invocations by method/function",
"fieldConfig": {
@@ -327,13 +327,13 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"editorMode": "code",
"expr": "sum by(method, route) (increase(functions_requests_total{method=~\"$method\",route=~\"$route\"}[$__rate_interval]))",
"format": "time_series",
"interval": "2m",
"legendFormat": "{{method}} {{route}}",
"legendFormat": "{{ print "{{ method }} - {{ route }}" }}",
"range": true,
"refId": "A"
}
@@ -344,7 +344,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"description": "Number of invocations by status response",
"fieldConfig": {
@@ -422,13 +422,13 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"editorMode": "code",
"expr": "sum by(status) (increase(functions_requests_total{method=~\"$method\",route=~\"$route\"}[$__rate_interval]))",
"format": "time_series",
"interval": "2m",
"legendFormat": "{{status}}",
"legendFormat": "{{ print "{{status}}" }}",
"range": true,
"refId": "A"
}
@@ -439,7 +439,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"description": "",
"fieldConfig": {
@@ -518,12 +518,12 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"editorMode": "code",
"expr": "sum by(method, route) (increase(functions_bytes_sent{method=~\"$method\", route=~\"$route\"}[$__rate_interval])) / sum by(method, route) (increase(functions_requests_total{method=~\"$method\", route=~\"$route\"}[$__rate_interval]))",
"interval": "2m",
"legendFormat": "{{ method }} - {{ route }}",
"legendFormat": "{{ print "{{ method }} - {{ route }}" }}",
"range": true,
"refId": "A"
}
@@ -534,7 +534,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"fieldConfig": {
"defaults": {
@@ -601,7 +601,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"editorMode": "code",
"exemplar": false,
@@ -609,7 +609,7 @@
"format": "table",
"instant": true,
"interval": "2m",
"legendFormat": "{{method}} {{route}}",
"legendFormat": "{{ print "{{ method }} - {{ route }}" }}",
"range": false,
"refId": "A"
}
@@ -634,7 +634,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"description": "Time the slowest response took",
"fieldConfig": {
@@ -712,12 +712,12 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"editorMode": "code",
"expr": "histogram_quantile(1, rate(functions_duration_seconds_bucket[$__rate_interval]))",
"interval": "2m",
"legendFormat": "{{ method }} - {{ route }}",
"legendFormat": "{{ print "{{ method }} - {{ route }}" }}",
"range": true,
"refId": "A"
}
@@ -728,7 +728,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"description": "The 95th percentile of response times refers to the value below which 95% of response times fall. In other words, it is the point at which only 5% of response times are higher",
"fieldConfig": {
@@ -806,12 +806,12 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"editorMode": "code",
"expr": "histogram_quantile(0.95, rate(functions_duration_seconds_bucket[$__rate_interval]))",
"interval": "2m",
"legendFormat": "{{ method }} - {{ route }}",
"legendFormat": "{{ print "{{ method }} - {{ route }}" }}",
"range": true,
"refId": "A"
}
@@ -822,7 +822,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"description": "The 75th percentile of response times refers to the value below which 75% of response times fall. In other words, it is the point at which 25% of response times are higher",
"fieldConfig": {
@@ -900,12 +900,12 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"editorMode": "code",
"expr": "histogram_quantile(0.75, rate(functions_duration_seconds_bucket[$__rate_interval]))",
"interval": "2m",
"legendFormat": "{{ method }} - {{ route }}",
"legendFormat": "{{ print "{{ method }} - {{ route }}" }}",
"range": true,
"refId": "A"
}
@@ -916,7 +916,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"description": "",
"fieldConfig": {
@@ -994,12 +994,12 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"editorMode": "builder",
"expr": "sum by(method, route) (increase(functions_duration_seconds_sum{method=~\"$method\", route=~\"$route\"}[$__rate_interval])) / sum by(method, route) (increase(functions_duration_seconds_count{method=~\"$method\", route=~\"$route\"}[$__rate_interval]))",
"interval": "2m",
"legendFormat": "{{ method }} - {{ route }}",
"legendFormat": "{{ print "{{ method }} - {{ route }}" }}",
"range": true,
"refId": "A"
}
@@ -1023,7 +1023,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"description": "Number of invocations that failed divided by the total number of invocations",
"fieldConfig": {
@@ -1101,13 +1101,13 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"editorMode": "code",
"expr": "sum by(method, route) (increase(functions_requests_total{method=~\"$method\",route=~\"$route\",status=~\"^[4-5].*\"}[$__rate_interval])) / sum by(method, route) (increase(functions_requests_total[$__rate_interval]))",
"format": "time_series",
"interval": "2m",
"legendFormat": "{{method}} {{ route }}",
"legendFormat": "{{ print "{{ method }} - {{ route }}" }}",
"range": true,
"refId": "A"
}
@@ -1118,7 +1118,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"fieldConfig": {
"defaults": {
@@ -1184,7 +1184,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"editorMode": "code",
"exemplar": false,
@@ -1192,7 +1192,7 @@
"format": "table",
"instant": true,
"interval": "2m",
"legendFormat": "{{method}} {{route}}",
"legendFormat": "{{ print "{{ method }} - {{ route }}" }}",
"range": false,
"refId": "A"
}
@@ -1205,7 +1205,9 @@
"refresh": false,
"schemaVersion": 37,
"style": "dark",
"tags": [],
"tags": [
"nhost"
],
"templating": {
"list": [
{
@@ -1217,7 +1219,7 @@
},
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"definition": "label_values(functions_requests_total, method)",
"hide": 0,
@@ -1244,7 +1246,7 @@
},
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"definition": "label_values(functions_requests_total, route)",
"hide": 0,


@@ -31,7 +31,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"description": "",
"gridPos": {
@@ -67,7 +67,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"fieldConfig": {
"defaults": {
@@ -144,12 +144,12 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"editorMode": "code",
"expr": "sum by(container,pod) (irate(container_cpu_usage_seconds_total{container=~\"hasura|hasura-graphi\"}[$__rate_interval])) * 1000",
"interval": "2m",
"legendFormat": "{{pod}}::{{container}}",
"legendFormat": "{{ print "{{pod}}::{{container}}" }}",
"range": true,
"refId": "A"
}
@@ -160,7 +160,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"fieldConfig": {
"defaults": {
@@ -238,12 +238,12 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"editorMode": "code",
"expr": "sum by(container,pod) (container_memory_usage_bytes{container=~\"hasura|hasura-graphi\"})",
"interval": "2m",
"legendFormat": "{{pod}}::{{container}}",
"legendFormat": "{{ print "{{pod}}::{{container}}" }}",
"range": true,
"refId": "A"
}
@@ -265,7 +265,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"fieldConfig": {
"defaults": {
@@ -342,7 +342,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"editorMode": "code",
"expr": "sum(rate(graphql_requests_total{service=\"hasura-service\"}[$__rate_interval]))",
@@ -354,13 +354,13 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"editorMode": "code",
"expr": "rate(graphql_requests_total{service=\"hasura-service\"}[$__rate_interval])",
"hide": false,
"interval": "2m",
"legendFormat": "{{operation}}::{{name}}::{{field}}",
"legendFormat": "{{ print "{{operation}}::{{name}}::{{field}}" }}",
"range": true,
"refId": "B"
}
@@ -371,7 +371,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"fieldConfig": {
"defaults": {
@@ -448,7 +448,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"editorMode": "code",
"expr": "graphql_websocket_connections_started_total{service=\"hasura-service\"} - graphql_websocket_connections_completed_total{service=\"hasura-service\"}",
@@ -460,7 +460,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"editorMode": "code",
"expr": "rate(graphql_websocket_connections_started_total{service=\"hasura-service\"}[$__rate_interval])",
@@ -473,7 +473,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"editorMode": "code",
"expr": "rate(graphql_websocket_connections_completed_total{service=\"hasura-service\"}[$__rate_interval])",
@@ -489,7 +489,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"fieldConfig": {
"defaults": {
@@ -567,12 +567,12 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"editorMode": "code",
"expr": "histogram_quantile(0.95, sum(rate(graphql_requests_duration_miliseconds_bucket{service=\"hasura-service\"}[$__rate_interval])) by (le,operation,name,field))",
"interval": "2m",
"legendFormat": "{{operation}}::{{ name }}::{{field}}",
"legendFormat": "{{ print "{{operation}}::{{ name }}::{{field}}" }}",
"range": true,
"refId": "A"
}
@@ -583,7 +583,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"fieldConfig": {
"defaults": {
@@ -660,12 +660,12 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"editorMode": "code",
"expr": "rate(graphql_requests_total{service=\"hasura-service\", result=\"failure\"}[$__rate_interval])",
"interval": "2m",
"legendFormat": "{{operation}}::{{ name }}::{{field}}",
"legendFormat": "{{ print "{{operation}}::{{ name }}::{{field}}" }}",
"range": true,
"refId": "A"
}
@@ -676,7 +676,9 @@
],
"schemaVersion": 37,
"style": "dark",
"tags": [],
"tags": [
"nhost"
],
"templating": {
"list": []
},


@@ -1,35 +1,4 @@
{
"__inputs": [
{
"name": "DS_PROMETHEUS",
"label": "Prometheus",
"description": "",
"type": "datasource",
"pluginId": "prometheus",
"pluginName": "Prometheus"
}
],
"__elements": {},
"__requires": [
{
"type": "grafana",
"id": "grafana",
"name": "Grafana",
"version": "9.2.0"
},
{
"type": "datasource",
"id": "prometheus",
"name": "Prometheus",
"version": "1.0.0"
},
{
"type": "panel",
"id": "timeseries",
"name": "Time series",
"version": ""
}
],
"annotations": {
"list": [
{
@@ -75,7 +44,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"description": "Number of requests by method/function",
"fieldConfig": {
@@ -153,13 +122,13 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"editorMode": "code",
"expr": "sum by(method, ingress) (increase(nginx_ingress_controller_requests{method=~\"$method\",ingress=~\"$ingress\"}[$__rate_interval]))",
"format": "time_series",
"interval": "2m",
"legendFormat": "{{method}} {{ingress}}",
"legendFormat": "{{ print "{{method}} {{ingress}}" }}",
"range": true,
"refId": "A"
}
@@ -170,7 +139,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"description": "Number of requests by status response",
"fieldConfig": {
@@ -248,13 +217,13 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"editorMode": "code",
"expr": "sum by(status) (increase(nginx_ingress_controller_requests{method=~\"$method\",ingress=~\"$ingress\"}[$__rate_interval]))",
"format": "time_series",
"interval": "2m",
"legendFormat": "{{status}}",
"legendFormat": "{{ print "{{status}}" }}",
"range": true,
"refId": "A"
}
@@ -265,7 +234,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"description": "",
"fieldConfig": {
@@ -344,12 +313,12 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"editorMode": "code",
"expr": "sum by(ingress, method) (increase(nginx_ingress_controller_response_size_sum{ingress=~\"$ingress\",method=~\"$method\"}[$__rate_interval])) / sum by(ingress, method) (increase(nginx_ingress_controller_requests{ingress=~\"$ingress\",method=~\"$method\"}[$__rate_interval]))",
"interval": "2m",
"legendFormat": "{{ method }} - {{ ingress }}",
"legendFormat": "{{ print "{{method}} {{ingress}}" }}",
"range": true,
"refId": "A"
}
@@ -360,7 +329,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"fieldConfig": {
"defaults": {
@@ -427,7 +396,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"editorMode": "code",
"exemplar": false,
@@ -435,7 +404,7 @@
"format": "table",
"instant": true,
"interval": "2m",
"legendFormat": "{{ingress}} {{method}}",
"legendFormat": "{{ print "{{method}} {{ingress}}" }}",
"range": false,
"refId": "A"
}
@@ -460,7 +429,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"description": "",
"fieldConfig": {
@@ -539,12 +508,12 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"editorMode": "code",
"expr": "sum by(method, ingress) (increase(nginx_ingress_controller_response_duration_seconds_sum{method=~\"$method\", ingress=~\"$ingress\"}[$__rate_interval])) / sum by(method, ingress) (increase(nginx_ingress_controller_response_duration_seconds_count{method=~\"$method\", ingress=~\"$ingress\"}[$__rate_interval]))",
"interval": "2m",
"legendFormat": "{{ ingress }} - {{ method }}",
"legendFormat": "{{ print "{{method}} {{ingress}}" }}",
"range": true,
"refId": "A"
}
@@ -568,7 +537,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"description": "Number of requests that failed divided by the total number of requests",
"fieldConfig": {
@@ -647,13 +616,13 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"editorMode": "code",
"expr": "sum by(ingress,method) (increase(nginx_ingress_controller_requests{ingress=~\"$ingress\",method=~\"$method\",status=~\"^[4-5].*\"}[$__rate_interval])) / sum by(ingress, method) (increase(nginx_ingress_controller_requests[$__rate_interval]))",
"format": "time_series",
"interval": "2m",
"legendFormat": "{{method}} {{ ingress }}",
"legendFormat": "{{ print "{{method}} {{ingress}}" }}",
"range": true,
"refId": "A"
}
@@ -664,7 +633,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"fieldConfig": {
"defaults": {
@@ -731,7 +700,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
"uid": "nhost"
},
"editorMode": "code",
"exemplar": false,
@@ -739,7 +708,7 @@
"format": "table",
"instant": true,
"interval": "2m",
"legendFormat": "{{ingress}} {{method}}",
"legendFormat": "{{ print "{{method}} {{ingress}}" }}",
"range": false,
"refId": "A"
}
@@ -751,7 +720,9 @@
],
"schemaVersion": 37,
"style": "dark",
"tags": [],
"tags": [
"nhost"
],
"templating": {
"list": [
{
@@ -802,4 +773,4 @@
"uid": "WOWEHb7Sz",
"version": 16,
"weekStart": ""
}
}


@@ -1,41 +1,4 @@
{
"__inputs": [
{
"name": "DS_PROMETHEUS",
"label": "Prometheus",
"description": "",
"type": "datasource",
"pluginId": "prometheus",
"pluginName": "Prometheus"
}
],
"__elements": {},
"__requires": [
{
"type": "grafana",
"id": "grafana",
"name": "Grafana",
"version": "9.2.0"
},
{
"type": "datasource",
"id": "prometheus",
"name": "Prometheus",
"version": "1.0.0"
},
{
"type": "panel",
"id": "text",
"name": "Text",
"version": ""
},
{
"type": "panel",
"id": "timeseries",
"name": "Time series",
"version": ""
}
],
"annotations": {
"list": [
{
@@ -62,7 +25,6 @@
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 0,
"id": null,
"links": [],
"liveNow": false,
"panels": [
@@ -82,7 +44,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"gridPos": {
"h": 11,
@@ -100,13 +62,13 @@
"content": "This dashboard shows the overall resources used by every service in your project.\n\nThe metrics service is currently in **beta** so things may change.\n\nKeep in mind that while you may change settings, edit the dashboard or even create new ones, these changes are not persisted. If you want different settings or even your own dashboards, please contact us as we are looking for use cases to build the feature.\n\nDocumentation about our platform:\n\n- [Compute Resources](https://docs.nhost.io/platform/compute)\n- [Service Replicas](https://docs.nhost.io/platform/service-replicas)",
"mode": "markdown"
},
"pluginVersion": "9.2.0",
"pluginVersion": "11.2.0",
"type": "text"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"gridPos": {
"h": 11,
@@ -124,7 +86,7 @@
"content": "#### Pods\n\nEach service is comprised of at least one \"pod\" and, in the case of [replicas](https://docs.nhost.io/platform/service-replicas), you should see as many pods as replicas configured. Each pod is identified by the service name + some unique identifier, for instance, `hasura-auth-7995bfd767-mvthp`.\n\nPods can come and go for various reasons:\n\n1. When there is a configuration change. When this happens a new pod is created with the new configuration. After the new pod is ready the old one is decommissioned. This means changes in configuration are hitless. The exception is postgres: when postgres configuration changes we need to bring it down cleanly before we start a new one, so there is a short downtime (1-2 min) while this occurs.\n2. When the process crashes due to an unexpected error. In this case the platform should detect the event and create a new pod immediately.\n3. When the process exceeds its allotted memory the pod is terminated and a new one is created.\n\n#### Throttling\n\nAs pro projects have shared CPUs, services can throttle when they attempt to use more resources than they have available. Throttling metrics are hard to grasp but it is important to understand they can have a big impact on response times. Throttling happens in intervals of time (100ms) so it is important to look at both the throttling time and throttling % metrics. If the % is low, throttling time might not be very impactful, but the higher the percentage gets the higher the impact it can have.\n\nTo avoid throttling consider using [dedicated compute resources](https://docs.nhost.io/platform/compute#dedicated-compute)",
"mode": "markdown"
},
"pluginVersion": "9.2.0",
"pluginVersion": "11.2.0",
"type": "text"
},
{
@@ -143,7 +105,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"description": "CPU utilization is calculated by calculating the average usage between two datapoints. At maximum granularity this is a 1 minute average, but when selecting longer periods of time granularity can decrease.\n\nGiven that the graph shows average usage it might be difficult to detect very sudden spikes.",
"fieldConfig": {
@@ -152,11 +114,13 @@
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"barWidthFactor": 0.6,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
@@ -165,6 +129,7 @@
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
@@ -222,7 +187,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"editorMode": "code",
"expr": "sum by(pod) (irate(container_cpu_usage_seconds_total{container!~\"POD|\"}[$__rate_interval])) * 1000",
@@ -238,7 +203,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"fieldConfig": {
"defaults": {
@@ -246,11 +211,13 @@
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"barWidthFactor": 0.6,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
@@ -259,6 +226,7 @@
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
@@ -316,7 +284,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"editorMode": "code",
"expr": "sum by (pod) (container_memory_usage_bytes{container!~\"POD|\"})",
@@ -331,7 +299,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"description": "When a service lacks resources the process is throttled until resources are available again.\n\nThis graph shows for how long pods are being throttled. This can add latency to requests.",
"fieldConfig": {
@@ -340,11 +308,13 @@
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"barWidthFactor": 0.6,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
@@ -353,6 +323,7 @@
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
@@ -411,7 +382,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"editorMode": "code",
"expr": "sum by(pod) (rate(container_cpu_cfs_throttled_seconds_total{container!~\"POD|\"}[$__rate_interval]))",
@@ -428,7 +399,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"description": "When a service lacks resources the process is throttled until resources are available again.\n\nThis graph shows how often the process is being throttled. As throttling happens in intervals of 100ms here you can see how many of those intervals required throttling.",
"fieldConfig": {
@@ -437,11 +408,13 @@
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"barWidthFactor": 0.6,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
@@ -450,6 +423,7 @@
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
@@ -508,7 +482,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"editorMode": "code",
"expr": "sum by(pod) (rate(container_cpu_cfs_throttled_periods_total{container!~\"POD|\"}[$__rate_interval]))/sum by(pod) (rate(container_cpu_cfs_periods_total{container!~\"POD|\"}[$__rate_interval]))",
@@ -525,7 +499,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"fieldConfig": {
"defaults": {
@@ -567,8 +541,7 @@
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
"color": "green"
},
{
"color": "red",
@@ -603,7 +576,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"editorMode": "code",
"expr": "sum by(pod) (rate(container_network_transmit_bytes_total[$__rate_interval]))",
@@ -615,7 +588,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"editorMode": "code",
"expr": "- sum by(pod) (rate(container_network_receive_bytes_total[$__rate_interval]))",
@@ -632,7 +605,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"description": "This graph shows when a service was restarted. There are two main reasons why a service may be restarted:\n\n- OOMKilled - This means the service tried to use more memory than it has available and had to be restarted. For more information on resources you can check the [documentation](https://docs.nhost.io/platform/compute).\n- Error - This can show for mainly two reasons; when new configuration needs to be applied the service is terminated and due to limitations this shows as \"Error\" but it is, in fact, part of normal operations. This can also show if your service is misconfigured and/or can't start correctly for some reason. If this error doesn't show constantly it is safe to ignore this error.",
"fieldConfig": {
@@ -676,8 +649,7 @@
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
"color": "green"
},
{
"color": "red",
@@ -712,13 +684,13 @@
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"editorMode": "code",
"expr": "sum by(container, reason) (increase(pod_terminated_total[$__rate_interval]))",
"hide": false,
"interval": "2m",
"legendFormat": "{{container}}-{{reason}}",
"legendFormat": "{{ print "{{container}}-{{reason}}" }}",
"range": true,
"refId": "A"
}
@@ -729,7 +701,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"fieldConfig": {
"defaults": {
@@ -771,8 +743,7 @@
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
"color": "green"
},
{
"color": "red",
@@ -807,7 +778,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"editorMode": "code",
"expr": "sum by(ingress) (irate(nginx_ingress_controller_response_size_sum[$__rate_interval]))",
@@ -819,7 +790,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"editorMode": "code",
"expr": "sum(irate(fastly_prom_exporter_bytes_sent[$__rate_interval]))",
@@ -836,7 +807,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"fieldConfig": {
"defaults": {
@@ -878,8 +849,7 @@
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
"color": "green"
},
{
"color": "red",
@@ -913,7 +883,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"editorMode": "builder",
"expr": "sum by(ingress) (irate(nginx_ingress_controller_requests[$__interval]))",
@@ -925,7 +895,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"editorMode": "builder",
"expr": "sum(irate(fastly_prom_exporter_requests_total[$__interval]))",
@@ -955,7 +925,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"description": "CPU utilization is calculated by calculating the average usage between two datapoints. At maximum granularity this is a 1 minute average, but when selecting longer periods of time granularity can decrease.\n\nGiven that the graph shows average usage it might be difficult to detect very sudden spikes.\n\nThe allotted line indicates how many CPU cycles are dedicated for the service. As free and pro projects have only shared CPU this line should show only a symbolic number. For projects with [dedicated compute resources](https://docs.nhost.io/platform/compute) this line should match the amount of resources configured.",
"fieldConfig": {
@@ -998,8 +968,7 @@
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
"color": "green"
},
{
"color": "red",
@@ -1033,24 +1002,24 @@
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"editorMode": "code",
"expr": "sum by(pod) (irate(container_cpu_usage_seconds_total{container=\"postgres\"}[$__rate_interval])) * 1000",
"interval": "2m",
"legendFormat": "{{pod}}-used",
"legendFormat": "{{ print "{{pod}}-used" }}",
"range": true,
"refId": "A"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"editorMode": "code",
"expr": "sum by(pod) (container_spec_cpu_shares{container=\"postgres\"}) / 1.024",
"hide": false,
"legendFormat": "{{pod}}-allotted",
"legendFormat": "{{ print "{{pod}}-allotted" }}",
"range": true,
"refId": "B"
}
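The `/ 1.024` in the allotted-CPU query above converts CFS shares into millicores — assuming the standard Kubernetes mapping of 1024 shares per CPU core, so that the allotted line plots in the same unit as the usage line (`* 1000`). A minimal sketch of the arithmetic (the helper name is ours, not from the dashboards):

```go
package main

import "fmt"

// sharesToMillicores mirrors the dashboard expression
// container_spec_cpu_shares / 1.024: with 1024 CFS shares per core,
// dividing shares by 1.024 yields millicores.
func sharesToMillicores(shares float64) float64 {
	return shares / 1.024
}

func main() {
	// 1024 shares correspond to one full core, i.e. ~1000 millicores.
	fmt.Printf("%.0fm\n", sharesToMillicores(1024))
}
```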
@@ -1061,7 +1030,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"description": "This graph shows memory utilization for the service. The allotted line shows the amount of memory a service is allowed to consume. As resources are shared there is the possibility that the actual memory available is slightly lower. If a service exceeds the amount of memory it can use, it is restarted automatically.",
"fieldConfig": {
@@ -1104,8 +1073,7 @@
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
"color": "green"
},
{
"color": "red",
@@ -1140,24 +1108,24 @@
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"editorMode": "code",
"expr": "sum by(pod) (container_memory_usage_bytes{container=\"postgres\"})",
"hide": false,
"legendFormat": "{{pod}}-used",
"legendFormat": "{{ print "{{pod}}-used" }}",
"range": true,
"refId": "A"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"editorMode": "code",
"expr": "sum by(pod) (container_spec_memory_limit_bytes{container=\"postgres\"})",
"hide": false,
"legendFormat": "{{pod}}-allotted",
"legendFormat": "{{ print "{{pod}}-allotted" }}",
"range": true,
"refId": "B"
}
@@ -1168,7 +1136,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"description": "This shows the amount of data utilized by the postgres volume. This number may differ from the database size reported by postgres depending on the features configured. For instance, when archiving is enabled postgres needs to write on disk a lot of supporting files which might lead to big increase in disk usage.\n\nWhen postgres runs out of disk space it fails to start so it is important to ensure you don't fill the volume. If you need to increase your disk capacity don't hesitate to let us know.",
"fieldConfig": {
@@ -1211,8 +1179,7 @@
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
"color": "green"
},
{
"color": "red",
@@ -1247,7 +1214,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"editorMode": "code",
"expr": "kubelet_volume_stats_used_bytes{persistentvolumeclaim=\"postgres-pv-claim\"}",
@@ -1258,7 +1225,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"editorMode": "code",
"expr": "kubelet_volume_stats_capacity_bytes{persistentvolumeclaim=\"postgres-pv-claim\"}",
@@ -1274,7 +1241,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"fieldConfig": {
"defaults": {
@@ -1316,8 +1283,7 @@
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
"color": "green"
},
{
"color": "red",
@@ -1352,7 +1318,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"editorMode": "code",
"expr": "sum by(pod) (rate(container_fs_reads_bytes_total{container=\"postgres\"}[$__rate_interval]))",
@@ -1364,7 +1330,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"editorMode": "code",
"expr": "sum by(pod) (rate(container_fs_writes_bytes_total{container=\"postgres\"}[$__rate_interval]))",
@@ -1394,7 +1360,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"description": "CPU utilization is calculated by calculating the average usage between two datapoints. At maximum granularity this is a 1 minute average, but when selecting longer periods of time granularity can decrease.\n\nGiven that the graph shows average usage it might be difficult to detect very sudden spikes.\n\nThe allotted line indicates how many CPU cycles are dedicated for the service. As free and pro projects have only shared CPU this line should show only a symbolic number. For projects with [dedicated compute resources](https://docs.nhost.io/platform/compute) this line should match the amount of resources configured.",
"fieldConfig": {
@@ -1437,8 +1403,7 @@
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
"color": "green"
},
{
"color": "red",
@@ -1472,24 +1437,24 @@
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"editorMode": "code",
"expr": "sum by(pod) (irate(container_cpu_usage_seconds_total{container=\"hasura\"}[$__rate_interval])) * 1000",
"interval": "2m",
"legendFormat": "{{pod}}-used",
"legendFormat": "{{ print "{{pod}}-used" }}",
"range": true,
"refId": "A"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"editorMode": "code",
"expr": "sum by(pod) (container_spec_cpu_shares{container=\"hasura\"}) / 1.024",
"hide": false,
"legendFormat": "{{pod}}-allotted",
"legendFormat": "{{ print "{{pod}}-allotted" }}",
"range": true,
"refId": "B"
}
@@ -1500,7 +1465,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"description": "This graph shows memory utilization for the service. The allotted line shows the amount of memory a service is allowed to consume. As resources are shared there is the possibility that the actual memory available is slightly lower. If a service exceeds the amount of memory it can use, it is restarted automatically.",
"fieldConfig": {
@@ -1543,8 +1508,7 @@
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
"color": "green"
},
{
"color": "red",
@@ -1579,24 +1543,24 @@
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"editorMode": "code",
"expr": "sum by(pod) (container_memory_usage_bytes{container=\"hasura\"})",
"hide": false,
"legendFormat": "{{pod}}-used",
"legendFormat": "{{ print "{{pod}}-used" }}",
"range": true,
"refId": "A"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"editorMode": "code",
"expr": "sum by(pod) (container_spec_memory_limit_bytes{container=\"hasura\"})",
"hide": false,
"legendFormat": "{{pod}}-allotted",
"legendFormat": "{{ print "{{pod}}-allotted" }}",
"range": true,
"refId": "B"
}
@@ -1620,7 +1584,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"description": "CPU utilization is calculated by calculating the average usage between two datapoints. At maximum granularity this is a 1 minute average, but when selecting longer periods of time granularity can decrease.\n\nGiven that the graph shows average usage it might be difficult to detect very sudden spikes.\n\nThe allotted line indicates how many CPU cycles are dedicated for the service. As free and pro projects have only shared CPU this line should show only a symbolic number. For projects with [dedicated compute resources](https://docs.nhost.io/platform/compute) this line should match the amount of resources configured.",
"fieldConfig": {
@@ -1663,8 +1627,7 @@
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
"color": "green"
},
{
"color": "red",
@@ -1698,24 +1661,24 @@
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"editorMode": "code",
"expr": "sum by(pod) (irate(container_cpu_usage_seconds_total{container=\"hasura-auth\"}[$__rate_interval])) * 1000",
"interval": "2m",
"legendFormat": "{{pod}}-used",
"legendFormat": "{{ print "{{pod}}-used" }}",
"range": true,
"refId": "A"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"editorMode": "code",
"expr": "sum by(pod) (container_spec_cpu_shares{container=\"hasura-auth\"}) / 1.024",
"hide": false,
"legendFormat": "{{pod}}-allotted",
"legendFormat": "{{ print "{{pod}}-allotted" }}",
"range": true,
"refId": "B"
}
@@ -1726,7 +1689,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"description": "This graph shows memory utilization for the service. The allotted line shows the amount of memory a service is allowed to consume. As resources are shared there is the possibility that the actual memory available is slightly lower. If a service exceeds the amount of memory it can use, it is restarted automatically.",
"fieldConfig": {
@@ -1769,8 +1732,7 @@
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
"color": "green"
},
{
"color": "red",
@@ -1805,24 +1767,24 @@
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"editorMode": "code",
"expr": "sum by(pod) (container_memory_usage_bytes{container=\"hasura-auth\"})",
"hide": false,
"legendFormat": "{{pod}}-used",
"legendFormat": "{{ print "{{pod}}-used" }}",
"range": true,
"refId": "A"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"editorMode": "code",
"expr": "sum by(pod) (container_spec_memory_limit_bytes{container=\"hasura-auth\"})",
"hide": false,
"legendFormat": "{{pod}}-allotted",
"legendFormat": "{{ print "{{pod}}-alloted" }}",
"range": true,
"refId": "B"
}
@@ -1846,7 +1808,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"description": "CPU utilization is calculated by calculating the average usage between two datapoints. At maximum granularity this is a 1 minute average, but when selecting longer periods of time granularity can decrease.\n\nGiven that the graph shows average usage it might be difficult to detect very sudden spikes.\n\nThe allotted line indicates how many CPU cycles are dedicated for the service. As free and pro projects have only shared CPU this line should show only a symbolic number. For projects with [dedicated compute resources](https://docs.nhost.io/platform/compute) this line should match the amount of resources configured.",
"fieldConfig": {
@@ -1889,8 +1851,7 @@
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
"color": "green"
},
{
"color": "red",
@@ -1924,24 +1885,24 @@
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"editorMode": "code",
"expr": "sum by(pod) (irate(container_cpu_usage_seconds_total{container=\"hasura-storage\"}[$__rate_interval])) * 1000",
"interval": "2m",
"legendFormat": "{{pod}}-used",
"legendFormat": "{{ print "{{pod}}-used" }}",
"range": true,
"refId": "A"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"editorMode": "code",
"expr": "sum by(pod) (container_spec_cpu_shares{container=\"hasura-storage\"}) / 1.024",
"hide": false,
"legendFormat": "{{pod}}-allotted",
"legendFormat": "{{ print "{{pod}}-alloted" }}",
"range": true,
"refId": "B"
}
@@ -1952,7 +1913,7 @@
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"description": "This graph shows memory utilization for the service. The allotted line shows what's the amount of memory a service is allowed to consume. As resources are shared there is the possibility that the actual memory available is slightly lower. If a service exceeds the amount of memory it can use, it is restarted automatically.",
"fieldConfig": {
@@ -1995,8 +1956,7 @@
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
"color": "green"
},
{
"color": "red",
@@ -2031,24 +1991,24 @@
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"editorMode": "code",
"expr": "sum by(pod) (container_memory_usage_bytes{container=\"hasura-storage\"})",
"hide": false,
"legendFormat": "{{pod}}-used",
"legendFormat": "{{ print "{{pod}}-used" }}",
"range": true,
"refId": "A"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
"uid": "nhost"
},
"editorMode": "code",
"expr": "sum by(pod) (container_spec_memory_limit_bytes{container=\"hasura-storage\"})",
"hide": false,
"legendFormat": "{{pod}}-allotted",
"legendFormat": "{{ print "{{pod}}-alloted" }}",
"range": true,
"refId": "B"
}
@@ -2057,31 +2017,12 @@
"type": "timeseries"
}
],
"schemaVersion": 37,
"style": "dark",
"schemaVersion": 39,
"tags": [
"nhost"
],
"templating": {
"list": [
{
"current": {
"selected": false,
"text": "Prometheus",
"value": "Prometheus"
},
"hide": 2,
"includeAll": false,
"multi": false,
"name": "DS_PROMETHEUS",
"options": [],
"query": "prometheus",
"refresh": 1,
"regex": "Prometheus",
"skipUrlSync": false,
"type": "datasource"
}
]
"list": []
},
"time": {
"from": "now-6h",


@@ -0,0 +1,10 @@
apiVersion: 1
providers:
- disableDeletion: false
editable: false
folder: "Nhost - {{ .Subdomain }} ({{ .ProjectName }})"
name: default
options:
path: /var/lib/grafana/dashboards/default
orgId: 1
type: file


@@ -0,0 +1,17 @@
apiVersion: 1
datasources:
- access: proxy
isDefault: true
name: Nhost
type: prometheus
url: http://amp-signer.nhost-services:8080
uid: nhost
jsonData:
customQueryParameters: app_id=${APP_ID}
httpHeaderName1: 'Authorization'
manageAlerts: false
cacheLevel: 'High'
disableRecordingRules: true
timeInterval: '60s'
secureJsonData:
httpHeaderValue1: 'Bearer ${TOKEN}'
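
Two details in this datasource are worth noting: `customQueryParameters` makes Grafana append `app_id` to every Prometheus request, and the `Authorization` header carries the service-account token. As a rough sketch, a proxied query ends up looking like the URL below (the app id is a hypothetical placeholder, and `/api/v1/query` is the standard Prometheus HTTP API query endpoint):

```shell
# Hypothetical illustration of the request Grafana proxies to the datasource.
base="http://amp-signer.nhost-services:8080/api/v1/query"
app_id="00000000-0000-0000-0000-000000000000"  # placeholder, not a real project id

url="${base}?query=up&app_id=${app_id}"
echo "GET $url"
echo "Authorization: Bearer <service-account token>"
```

The `app_id` scoping is what lets a shared Prometheus-compatible backend serve per-project metrics without each project seeing the others' data.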

observability/grafana.ini Normal file

@@ -0,0 +1,23 @@
[analytics]
check_for_updates = false
[grafana_net]
url = https://grafana.net
[log]
mode = console
[paths]
data = /var/lib/grafana/
logs = /var/log/grafana
plugins = /var/lib/grafana/plugins
provisioning = /var/lib/grafana/provisioning
[server]
domain = ''
root_url = '{{ .RootURL }}'
{{ if .SMTP }}
[smtp]
enabled=true
host={{ .SMTP.Host }}:{{ .SMTP.Port }}
user={{ .SMTP.User }}
password={{ .SMTP.Password }}
from_address={{ .SMTP.Sender }}
{{ end }}


@@ -0,0 +1,7 @@
apiVersion: 1
policies:
- orgId: 1
receiver: Nhost Managed Contacts
group_by:
- grafana_folder
- alertname


@@ -0,0 +1,369 @@
apiVersion: 1
groups:
- orgId: 1
name: core
folder: "Nhost - {{ .Subdomain }} ({{ .ProjectName }})"
interval: 5m
rules:
- uid: nhosthighcpuusage
title: High CPU usage
condition: B
data:
- refId: A
relativeTimeRange:
from: 600
to: 0
datasourceUid: nhost
model:
editorMode: code
expr: sum by(pod) (irate(container_cpu_usage_seconds_total{container!~"grafana|POD|"}[$__rate_interval])) / (sum by(pod) (container_spec_cpu_quota{container!~"grafana|POD|"}) / sum by(pod) (container_spec_cpu_period{container!~"POD|"})) * 100
instant: true
intervalMs: 1000
legendFormat: __auto
maxDataPoints: 43200
range: false
refId: A
- refId: B
relativeTimeRange:
from: 600
to: 0
datasourceUid: __expr__
model:
conditions:
- evaluator:
params:
- 75
type: gt
operator:
type: and
query:
params:
- C
reducer:
params: []
type: last
type: query
datasource:
type: __expr__
uid: __expr__
expression: A
intervalMs: 1000
maxDataPoints: 43200
refId: B
type: threshold
noDataState: NoData
execErrState: Error
for: 5m
annotations:
runbook_url: https://docs.nhost.io/platform/compute-resources
Project Subdomain: {{ .Subdomain }}
Project Name: {{ .ProjectName }}
description: |
High CPU usage can be caused by a number of factors, including but not limited to:
- High traffic
- Inefficient code/queries
- Inadequate resources
To resolve this issue, consider the following:
- Optimize your code/queries
- Increase the number of replicas
- Increase the CPU resources allocated to your service
High CPU usage can lead to service instability, increased latency and downtime.
For more information, see the [Nhost documentation](https://docs.nhost.io/platform/compute-resources)
summary: |
The service replica {{ print "{{ index $labels \"pod\" }}" }} is experiencing, or has experienced, high CPU usage. Current usage is at {{ print "{{ index $values \"A\" }}" }}%.
labels: {}
isPaused: false
- uid: nhostlowdiskspace
title: Low disk space
condition: B
data:
- refId: A
relativeTimeRange:
from: 600
to: 0
datasourceUid: nhost
model:
editorMode: code
expr: sum by(persistentvolumeclaim) (kubelet_volume_stats_used_bytes) / sum by(persistentvolumeclaim) (kubelet_volume_stats_capacity_bytes) * 100
instant: true
intervalMs: 1000
legendFormat: __auto
maxDataPoints: 43200
range: false
refId: A
- refId: B
relativeTimeRange:
from: 600
to: 0
datasourceUid: __expr__
model:
conditions:
- evaluator:
params:
- 75
type: gt
operator:
type: and
query:
params:
- C
reducer:
params: []
type: last
type: query
datasource:
type: __expr__
uid: __expr__
expression: A
intervalMs: 1000
maxDataPoints: 43200
refId: B
type: threshold
noDataState: NoData
execErrState: Error
for: 5m
annotations:
runbook_url: https://docs.nhost.io/guides/database/configuring-postgres
Subdomain: {{ .Subdomain }}
Project Name: {{ .ProjectName }}
description: |
An increase in disk space usage can be caused by a number of factors, including but not limited to:
- Large amounts of data
- Changes to WAL settings
To resolve this issue, consider the following:
- If you recently changed your WAL settings, consider reverting to the previous settings
- Optimize your database tables
- Remove data that is no longer needed
- Increase the disk space allocated to your database
Running out of disk space can lead to service downtime and potential data loss.
For more information, see the [Nhost documentation](https://docs.nhost.io/guides/database/configuring-postgres)
summary: |
The persistent volume claim {{ print "{{ index $labels \"persistentvolumeclaim\" }}" }} current usage is at {{ print "{{ index $values \"A\" }}" }}%.
labels: {}
isPaused: false
- uid: nhostlowmemory
title: Low free memory
condition: B
data:
- refId: A
relativeTimeRange:
from: 600
to: 0
datasourceUid: nhost
model:
editorMode: code
expr: sum by(pod) (container_memory_usage_bytes{container!~"grafana|"}) / sum by(pod) (container_spec_memory_limit_bytes{container!~"grafana|"}) * 100
instant: true
intervalMs: 1000
legendFormat: __auto
maxDataPoints: 43200
range: false
refId: A
- refId: B
relativeTimeRange:
from: 600
to: 0
datasourceUid: __expr__
model:
conditions:
- evaluator:
params:
- 75
type: gt
operator:
type: and
query:
params:
- C
reducer:
params: []
type: last
type: query
datasource:
type: __expr__
uid: __expr__
expression: A
intervalMs: 1000
maxDataPoints: 43200
refId: B
type: threshold
noDataState: NoData
execErrState: Error
for: 5m
annotations:
runbook_url: https://docs.nhost.io/platform/compute-resources
Subdomain: {{ .Subdomain }}
Project Name: {{ .ProjectName }}
description: |
Low memory can be caused by a number of factors, including but not limited to:
- High traffic
- Inefficient code/queries
- Inadequate resources
To resolve this issue, consider the following:
- Optimize your code/queries
- Increase the memory resources allocated to your service
Running out of memory can lead to service instability, increased latency and downtime.
For more information, see the [Nhost documentation](https://docs.nhost.io/platform/compute-resources)
summary: |
The service replica {{ print "{{ index $labels \"pod\" }}" }} is experiencing, or has experienced, low memory. Current usage is at {{ print "{{ index $values \"A\" }}" }}%.
labels: {}
isPaused: false
- uid: nhostoom
title: Service restarted due to lack of memory
condition: B
data:
- refId: A
relativeTimeRange:
from: 600
to: 0
datasourceUid: nhost
model:
editorMode: code
expr: sum by(pod) (increase(pod_terminated_total{reason="OOMKilled", pod!="grafana"}[$__rate_interval]))
instant: true
intervalMs: 1000
legendFormat: __auto
maxDataPoints: 43200
range: false
refId: A
- refId: B
relativeTimeRange:
from: 600
to: 0
datasourceUid: __expr__
model:
conditions:
- evaluator:
params:
- 0
type: gt
operator:
type: and
query:
params:
- C
reducer:
params: []
type: last
type: query
datasource:
type: __expr__
uid: __expr__
expression: A
intervalMs: 1000
maxDataPoints: 43200
refId: B
type: threshold
noDataState: OK
execErrState: Error
for: 0s
annotations:
summary: |
The service replica {{ print "{{ index $labels \"pod\" }}" }} has been restarted due to lack of memory.
description: |
When a service runs out of memory and is unable to allocate more, it is terminated by the
OOM Killer. This is primarily caused by trying to allocate more memory than is permitted,
which in turn can be caused by:
- High traffic
- Inefficient code/queries
- Inadequate resources
To resolve this issue, consider the following:
- Optimize your code/queries
- Increase the memory resources allocated to your service
This can lead to service instability, increased latency and downtime.
For more information, see the [Nhost documentation](https://docs.nhost.io/platform/compute-resources)
runbook_url: https://docs.nhost.io/platform/compute-resources
Subdomain: {{ .Subdomain }}
Project Name: {{ .ProjectName }}
labels: {}
isPaused: false
- uid: nhosthigherrorrate
title: High request error rate
condition: B
data:
- refId: A
relativeTimeRange:
from: 600
to: 0
datasourceUid: nhost
model:
editorMode: code
expr: sum by(ingress,method) (increase(nginx_ingress_controller_requests{ingress!="grafana",status=~"^[4-5].*"}[$__rate_interval])) / sum by(ingress, method) (increase(nginx_ingress_controller_requests[$__rate_interval])) * 100
instant: true
intervalMs: 1000
legendFormat: __auto
maxDataPoints: 43200
range: false
refId: A
- refId: B
relativeTimeRange:
from: 600
to: 0
datasourceUid: __expr__
model:
conditions:
- evaluator:
params:
- 25
type: gt
operator:
type: and
query:
params:
- C
reducer:
params: []
type: last
type: query
datasource:
type: __expr__
uid: __expr__
expression: A
intervalMs: 1000
maxDataPoints: 43200
refId: B
type: threshold
noDataState: OK
execErrState: Error
for: 5m
annotations:
Subdomain: {{ .Subdomain }}
Project Name: {{ .ProjectName }}
summary: |
The service {{ print "{{ index $labels \"ingress\" }}" }} is experiencing, or has experienced, a high error rate. Current error rate is at {{ print "{{ index $values \"A\" }}" }}%.
description: |
A high error rate can be caused by a number of factors, including but not limited to:
- High traffic
- Inefficient code/queries
- Inadequate resources
- Network issues
- Code errors
- Permission issues
To resolve this issue, consider the following:
- Observe the service logs for more information
A high error rate means there is something fundamentally wrong with the service or your application. It can lead to service instability, increased latency and downtime.
For more information, see the [Nhost documentation](https://docs.nhost.io/platform/compute-resources)
labels: {}
isPaused: false
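
The threshold conditions in these rules all reduce to simple ratios. As a back-of-the-envelope check with made-up numbers (none of these values come from a live project), the CPU rule divides the observed usage rate by quota/period to get a percentage, and the error-rate rule divides failed requests by total requests:

```shell
# Illustrative values only, not taken from a real cluster.

# High CPU usage: observed rate of 0.9 CPU-seconds/second against one full CPU.
usage_rate=0.9
cpu_quota=100000   # container_spec_cpu_quota (microseconds per period)
cpu_period=100000  # container_spec_cpu_period (microseconds)
cpu_pct=$(awk -v u="$usage_rate" -v q="$cpu_quota" -v p="$cpu_period" \
  'BEGIN { printf "%.0f", u / (q / p) * 100 }')
echo "CPU usage: ${cpu_pct}%"                 # 90%, above the 75% threshold
[ "$cpu_pct" -gt 75 ] && echo "High CPU usage would fire"

# High request error rate: 30 of 100 responses returned a 4xx/5xx status.
err_pct=$(awk 'BEGIN { printf "%.0f", 30 / 100 * 100 }')
echo "error rate: ${err_pct}%"                # 30%, above the 25% threshold
[ "$err_pct" -gt 25 ] && echo "High request error rate would fire"
```

Each rule then requires the condition to hold for the configured `for:` duration (5m for most rules, 0s for OOM kills, which alert immediately) before the contact point is notified.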


@@ -0,0 +1,11 @@
#!/usr/bin/env sh
set -euf
mkdir -p /var/lib/grafana/provisioning/datasources
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
APP_ID=$(sed "s/nhost-//g" /var/run/secrets/kubernetes.io/serviceaccount/namespace)
sed "s/\${TOKEN}/$TOKEN/g; s/\${APP_ID}/$APP_ID/g" \
< /datasources.yaml.tmpl \
> /var/lib/grafana/provisioning/datasources/datasources.yaml
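
The substitutions the script performs can be reproduced with stand-in values (the token and namespace below are dummies; in the cluster they come from the mounted service-account secret):

```shell
# Dummy stand-ins for the in-cluster secret files (not real values).
TOKEN="dummy-token"
NAMESPACE="nhost-my-app"

# The script derives APP_ID by stripping the "nhost-" prefix from the namespace...
APP_ID=$(printf '%s' "$NAMESPACE" | sed "s/nhost-//g")

# ...and substitutes both placeholders into the datasource template.
printf 'customQueryParameters: app_id=${APP_ID}\nhttpHeaderValue1: Bearer ${TOKEN}\n' |
  sed "s/\${TOKEN}/$TOKEN/g; s/\${APP_ID}/$APP_ID/g"
```

Because the token is read at container start, the rendered `datasources.yaml` never has to be stored with credentials baked in.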


@@ -154,7 +154,9 @@
"nth-check": "^2.0.1",
"react": "18.2.0",
"react-dom": "18.2.0",
"@graphiql/react": "^0.22.3"
"@graphiql/react": "^0.22.3",
"send": "^0.19.0",
"dset": "^3.1.4"
}
}
}

pnpm-lock.yaml generated

@@ -60,6 +60,8 @@ overrides:
react: 18.2.0
react-dom: 18.2.0
'@graphiql/react': ^0.22.3
send: ^0.19.0
dset: ^3.1.4
importers:
@@ -852,8 +854,8 @@ importers:
specifier: ^4.17.21
version: 4.17.21
express:
specifier: ^4.19.2
version: 4.19.2
specifier: ^4.20.0
version: 4.20.0
typescript:
specifier: ^4.9.5
version: 4.9.5
@@ -1040,7 +1042,7 @@ importers:
devDependencies:
'@nhost/nhost-js':
specifier: ^3.1.5
version: 3.1.8(graphql@16.8.1)
version: 3.1.9(graphql@16.8.1)
'@playwright/test':
specifier: ^1.41.0
version: 1.41.0
@@ -2626,7 +2628,7 @@ packages:
'@babel/traverse': 7.25.4
'@babel/types': 7.25.4
convert-source-map: 1.9.0
debug: 4.3.6
debug: 4.3.7
gensync: 1.0.0-beta.2
json5: 2.2.3
lodash: 4.17.21
@@ -2744,7 +2746,7 @@ packages:
'@babel/helper-module-imports': 7.24.7
'@babel/helper-plugin-utils': 7.24.8
'@babel/traverse': 7.25.4
debug: 4.3.6
debug: 4.3.7
lodash.debounce: 4.0.8
resolve: 1.22.8
semver: 7.6.3
@@ -2760,7 +2762,7 @@ packages:
'@babel/core': 7.25.2
'@babel/helper-compilation-targets': 7.25.2
'@babel/helper-plugin-utils': 7.24.8
debug: 4.3.6
debug: 4.3.7
lodash.debounce: 4.0.8
resolve: 1.22.8
transitivePeerDependencies:
@@ -6914,7 +6916,7 @@ packages:
dependencies:
'@graphql-typed-document-node/core': 3.2.0(graphql@16.8.1)
cross-inspect: 1.0.1
dset: 3.1.3
dset: 3.1.4
graphql: 16.8.1
tslib: 2.7.0
dev: true
@@ -6978,7 +6980,7 @@ packages:
'@graphql-typed-document-node/core': 3.2.0(graphql@16.8.1)
'@graphql-yoga/subscription': 2.2.3
'@whatwg-node/fetch': 0.3.2
dset: 3.1.3
dset: 3.1.4
graphql: 16.8.1
tslib: 2.7.0
transitivePeerDependencies:
@@ -7144,7 +7146,7 @@ packages:
'@antfu/install-pkg': 0.1.1
'@antfu/utils': 0.7.10
'@iconify/types': 1.1.0
debug: 4.3.6
debug: 4.3.7
kolorist: 1.8.0
local-pkg: 0.4.3
transitivePeerDependencies:
@@ -8344,7 +8346,7 @@ packages:
'@octokit/rest': 19.0.13
chalk: 5.3.0
chokidar: 3.6.0
express: 4.19.2
express: 4.20.0
fs-extra: 11.2.0
got: 13.0.0
gray-matter: 4.0.3
@@ -8468,7 +8470,7 @@ packages:
'@open-draft/until': 1.0.3
'@types/debug': 4.1.12
'@xmldom/xmldom': 0.8.10
debug: 4.3.6
debug: 4.3.7
headers-polyfill: 3.2.5
outvariant: 1.4.3
strict-event-emitter: 0.2.8
@@ -8838,8 +8840,8 @@ packages:
- encoding
dev: true
/@nhost/hasura-auth-js@2.5.5:
resolution: {integrity: sha512-+7IfhWwUHtq+ZNnTYYDWHpvAbGzSH9yvOrtILZeMxuA9rrkpNPVghR9uiFg8D2qoTpyTOszmCP0wJyEyO8pXSQ==}
/@nhost/hasura-auth-js@2.5.6:
resolution: {integrity: sha512-ZW2gBmHdfkyGcDRvR9sKzcYEVCq2Df6muVg/JlOkizeS+a6r39gwMF3cTqSZzYoURmf750h4gc92F7/IexrYOg==}
dependencies:
'@simplewebauthn/browser': 9.0.1
fetch-ponyfill: 7.1.0
@@ -8861,13 +8863,13 @@ packages:
- encoding
dev: true
/@nhost/nhost-js@3.1.8(graphql@16.8.1):
resolution: {integrity: sha512-E09byZVyuUdaRMKjk+Xdrhoz3RdV/IYhIMN/i7pIzArqiE2Qx2RIE8BMQmDFEyuemiCmUg0sXdU6l60qwV8ueA==}
/@nhost/nhost-js@3.1.9(graphql@16.8.1):
resolution: {integrity: sha512-5JQ5aEZyKD2cXX3NrK2b2PRJ9xndGLWoI0HSFE9fhWIXSHDGTgVc7LqkJe2zvN3gSwmg32Ch02Yu5tgpn6HI7A==}
peerDependencies:
graphql: '>=16.8.1'
dependencies:
'@nhost/graphql-js': 0.3.0(graphql@16.8.1)
'@nhost/hasura-auth-js': 2.5.5
'@nhost/hasura-auth-js': 2.5.6
'@nhost/hasura-storage-js': 2.5.1
graphql: 16.8.1
isomorphic-unfetch: 3.1.0
@@ -9282,7 +9284,7 @@ packages:
engines: {node: '>=18'}
hasBin: true
dependencies:
debug: 4.3.6
debug: 4.3.7
extract-zip: 2.0.1
progress: 2.0.3
proxy-agent: 6.4.0
@@ -10021,7 +10023,7 @@ packages:
/@react-native-community/cli-debugger-ui@12.3.6:
resolution: {integrity: sha512-SjUKKsx5FmcK9G6Pb6UBFT0s9JexVStK5WInmANw75Hm7YokVvHEgtprQDz2Uvy5znX5g2ujzrkIU//T15KQzA==}
dependencies:
serve-static: 1.15.0
serve-static: 1.16.0
transitivePeerDependencies:
- supports-color
dev: false
@@ -10113,7 +10115,7 @@ packages:
errorhandler: 1.5.1
nocache: 3.0.4
pretty-format: 26.6.2
serve-static: 1.15.0
serve-static: 1.16.0
ws: 7.5.10
transitivePeerDependencies:
- bufferutil
@@ -10317,7 +10319,7 @@ packages:
debug: 2.6.9
node-fetch: 2.7.0(encoding@0.1.13)
open: 7.4.2
serve-static: 1.15.0
serve-static: 1.16.0
temp-dir: 2.0.0
ws: 6.2.3
transitivePeerDependencies:
@@ -11581,7 +11583,7 @@ packages:
babel-plugin-polyfill-corejs3: 0.1.7(@babel/core@7.25.2)
chalk: 4.1.2
core-js: 3.38.1
express: 4.19.2
express: 4.20.0
file-system-cache: 1.1.0
find-up: 5.0.0
fork-ts-checker-webpack-plugin: 6.5.3(eslint@8.57.0)(typescript@4.9.5)(webpack@5.94.0)
@@ -11741,7 +11743,7 @@ packages:
chalk: 4.1.2
core-js: 3.38.1
css-loader: 5.2.7(webpack@5.94.0)
express: 4.19.2
express: 4.20.0
find-up: 5.0.0
fs-extra: 9.1.0
html-webpack-plugin: 5.6.0(webpack@5.94.0)
@@ -12104,7 +12106,7 @@ packages:
vite: '>=4.3.9'
dependencies:
'@sveltejs/vite-plugin-svelte': 2.5.3(svelte@4.2.19)(vite@5.4.2)
debug: 4.3.6
debug: 4.3.7
svelte: 4.2.19
vite: 5.4.2(@types/node@16.18.106)
transitivePeerDependencies:
@@ -12119,7 +12121,7 @@ packages:
vite: '>=4.3.9'
dependencies:
'@sveltejs/vite-plugin-svelte-inspector': 1.0.4(@sveltejs/vite-plugin-svelte@2.5.3)(svelte@4.2.19)(vite@5.4.2)
debug: 4.3.6
debug: 4.3.7
deepmerge: 4.3.1
kleur: 4.1.5
magic-string: 0.30.11
@@ -13552,7 +13554,7 @@ packages:
dependencies:
'@typescript-eslint/typescript-estree': 8.3.0(typescript@5.5.4)
'@typescript-eslint/utils': 8.3.0(eslint@9.9.1)(typescript@5.5.4)
debug: 4.3.6
debug: 4.3.7
ts-api-utils: 1.3.0(typescript@5.5.4)
typescript: 5.5.4
transitivePeerDependencies:
@@ -13669,7 +13671,7 @@ packages:
dependencies:
'@typescript-eslint/types': 8.3.0
'@typescript-eslint/visitor-keys': 8.3.0
debug: 4.3.6
debug: 4.3.7
fast-glob: 3.3.2
is-glob: 4.0.3
minimatch: 9.0.5
@@ -14739,7 +14741,7 @@ packages:
resolution: {integrity: sha512-RZNwNclF7+MS/8bDg70amg32dyeZGZxiDuQmZxKLAlQjr3jGyLx+4Kkk58UO7D2QdgFIQCovuSuZESne6RG6XQ==}
engines: {node: '>= 6.0.0'}
dependencies:
debug: 4.3.6
debug: 4.3.7
transitivePeerDependencies:
- supports-color
@@ -14747,7 +14749,7 @@ packages:
resolution: {integrity: sha512-H0TSyFNDMomMNJQBn8wFV5YC/2eJ+VXECwOadZJT554xP6cODZHPX3H9QMQECxvrgiSOP1pHjy1sMWQVYJOUOA==}
engines: {node: '>= 14'}
dependencies:
debug: 4.3.6
debug: 4.3.7
transitivePeerDependencies:
- supports-color
@@ -15643,8 +15645,8 @@ packages:
dev: true
optional: true
/bare-fs@2.3.1:
resolution: {integrity: sha512-W/Hfxc/6VehXlsgFtbB5B4xFcsCl+pAh30cYhoFyXErf6oGrwjh8SwiPAdHgpmWonKuYpZgGywN0SXt7dgsADA==}
/bare-fs@2.3.4:
resolution: {integrity: sha512-7YyxitZEq0ey5loOF5gdo1fZQFF7290GziT+VbAJ+JbYTJYaPZwuEz2r/Nq23sm4fjyTgUf2uJI2gkT3xAuSYA==}
requiresBuild: true
dependencies:
bare-events: 2.4.2
@@ -15751,8 +15753,8 @@ packages:
resolution: {integrity: sha512-XpNj6GDQzdfW+r2Wnn7xiSAd7TM3jzkxGXBGTtWKuSXv1xUV+azxAm8jdWZN06QTQk+2N2XB9jRDkvbmQmcRtg==}
dev: false
/body-parser@1.20.2:
resolution: {integrity: sha512-ml9pReCu3M61kGlqoTm2umSXTlRTuGTx0bfYj+uIUKKYycG5NtSbeetV3faSU6R7ajOPw0g/J1PvK4qNy7s5bA==}
/body-parser@1.20.3:
resolution: {integrity: sha512-7rAxByjUMqQ3/bHJy7D6OGXvx/MMc4IqBn/X0fcM1QUcAItpZrBEYhWGem+tzXH90c+G01ypMcYJBO9Y30203g==}
engines: {node: '>= 0.8', npm: 1.2.8000 || >= 1.4.16}
dependencies:
bytes: 3.1.2
@@ -15763,7 +15765,7 @@ packages:
http-errors: 2.0.0
iconv-lite: 0.4.24
on-finished: 2.4.1
qs: 6.11.0
qs: 6.13.0
raw-body: 2.5.2
type-is: 1.6.18
unpipe: 1.0.0
@@ -17256,6 +17258,17 @@ packages:
dependencies:
ms: 2.1.2
/debug@4.3.7:
resolution: {integrity: sha512-Er2nc/H7RrMXZBFCEim6TCmMk02Z8vLC2Rbi1KEBggpo0fS6l0S1nnapwmIi3yW/+GOJap1Krg4w0Hg80oCqgQ==}
engines: {node: '>=6.0'}
peerDependencies:
supports-color: '*'
peerDependenciesMeta:
supports-color:
optional: true
dependencies:
ms: 2.1.3
/decamelize@1.2.0:
resolution: {integrity: sha512-z2S+W9X73hAUUki+N+9Za2lBlun89zigOyGrsax+KUQ6wKW4ZoWpEYBkGhQjwAjjDCkWxhY0VKEhk8wzY7F5cA==}
engines: {node: '>=0.10.0'}
@@ -17482,7 +17495,7 @@ packages:
hasBin: true
dependencies:
address: 1.2.2
debug: 4.3.6
debug: 4.3.7
transitivePeerDependencies:
- supports-color
dev: true
@@ -17687,8 +17700,8 @@ packages:
engines: {node: '>=10'}
dev: true
/dset@3.1.3:
resolution: {integrity: sha512-20TuZZHCEZ2O71q9/+8BwKwZ0QtD9D8ObhrihJPr+vLLYlSuAU3/zL4cSlgbfeoGHTjCSJBa7NGcrF9/Bx/WJQ==}
/dset@3.1.4:
resolution: {integrity: sha512-2QF/g9/zTaPDc3BjNcVTGoBbXBgYfMTTceLaYcFJ/W9kggFUkhxD/hMEeuLKbugyef9SqAx8cpgwlIP/jinUTA==}
engines: {node: '>=4'}
/duplexer2@0.1.4:
@@ -17781,6 +17794,10 @@ packages:
resolution: {integrity: sha512-TPJXq8JqFaVYm2CWmPvnP2Iyo4ZSM7/QKcSmuMLDObfpH5fi7RUGmd/rTDf+rut/saiDiQEeVTNgAmJEdAOx0w==}
engines: {node: '>= 0.8'}
/encodeurl@2.0.0:
resolution: {integrity: sha512-Q0n9HRi4m6JuGIV1eFlmvJB7ZEVxu93IrMyiMsGC0lrMJMWzRgx6WGquyfQgZVb31vhGgXnfmPNNXmxnOkRBrg==}
engines: {node: '>= 0.8'}
/encoding@0.1.13:
resolution: {integrity: sha512-ETBauow1T35Y/WZMkio9jiM0Z5xjHHmJ4XmjZOq1l/dXz3lr2sRn87nJy20RupqSh1F2m3HHPSp8ShIPQJrJ3A==}
dependencies:
@@ -17807,7 +17824,7 @@ packages:
base64id: 2.0.0
cookie: 0.4.2
cors: 2.8.5
debug: 4.3.6
debug: 4.3.7
engine.io-parser: 5.2.3
ws: 8.17.1
transitivePeerDependencies:
@@ -18039,7 +18056,7 @@ packages:
peerDependencies:
esbuild: '>=0.12 <1'
dependencies:
debug: 4.3.6
debug: 4.3.7
esbuild: 0.18.20
transitivePeerDependencies:
- supports-color
@@ -19071,7 +19088,7 @@ packages:
peerDependencies:
eslint: '>=6.0.0'
dependencies:
debug: 4.3.6
debug: 4.3.7
eslint: 8.57.0
lodash: 4.17.21
natural-compare: 1.4.0
@@ -19530,36 +19547,36 @@ packages:
/exponential-backoff@3.1.1:
resolution: {integrity: sha512-dX7e/LHVJ6W3DE1MHWi9S1EYzDESENfLrYohG2G++ovZrYOkm4Knwa0mc1cn84xJOR4KEU0WSchhLbd0UklbHw==}
/express@4.19.2:
resolution: {integrity: sha512-5T6nhjsT+EOMzuck8JjBHARTHfMht0POzlA60WV2pMD3gyXw2LZnZ+ueGdNxG+0calOJcWKbpFcuzLZ91YWq9Q==}
/express@4.20.0:
resolution: {integrity: sha512-pLdae7I6QqShF5PnNTCVn4hI91Dx0Grkn2+IAsMTgMIKuQVte2dN9PeGSSAME2FR8anOhVA62QDIUaWVfEXVLw==}
engines: {node: '>= 0.10.0'}
dependencies:
accepts: 1.3.8
array-flatten: 1.1.1
body-parser: 1.20.2
body-parser: 1.20.3
content-disposition: 0.5.4
content-type: 1.0.5
cookie: 0.6.0
cookie-signature: 1.0.6
debug: 2.6.9
depd: 2.0.0
encodeurl: 1.0.2
encodeurl: 2.0.0
escape-html: 1.0.3
etag: 1.8.1
finalhandler: 1.2.0
fresh: 0.5.2
http-errors: 2.0.0
merge-descriptors: 1.0.1
merge-descriptors: 1.0.3
methods: 1.1.2
on-finished: 2.4.1
parseurl: 1.3.3
path-to-regexp: 0.1.7
path-to-regexp: 0.1.10
proxy-addr: 2.0.7
qs: 6.11.0
range-parser: 1.2.1
safe-buffer: 5.2.1
send: 0.18.0
serve-static: 1.15.0
send: 0.19.0
serve-static: 1.16.0
setprototypeof: 1.2.0
statuses: 2.0.1
type-is: 1.6.18
@@ -19609,7 +19626,7 @@ packages:
engines: {node: '>= 10.17.0'}
hasBin: true
dependencies:
debug: 4.3.6
debug: 4.3.7
get-stream: 5.2.0
yauzl: 2.10.0
optionalDependencies:
@@ -20339,7 +20356,7 @@ packages:
dependencies:
basic-ftp: 5.0.5
data-uri-to-buffer: 6.0.2
debug: 4.3.6
debug: 4.3.7
fs-extra: 11.2.0
transitivePeerDependencies:
- supports-color
@@ -20742,7 +20759,7 @@ packages:
'@graphql-yoga/subscription': 3.1.0
'@whatwg-node/fetch': 0.8.8
'@whatwg-node/server': 0.7.7
dset: 3.1.3
dset: 3.1.4
graphql: 16.8.1
lru-cache: 7.18.3
tslib: 2.7.0
@@ -21386,7 +21403,7 @@ packages:
dependencies:
'@tootallnate/once': 1.1.2
agent-base: 6.0.2
debug: 4.3.6
debug: 4.3.7
transitivePeerDependencies:
- supports-color
@@ -21396,7 +21413,7 @@ packages:
dependencies:
'@tootallnate/once': 2.0.0
agent-base: 6.0.2
debug: 4.3.6
debug: 4.3.7
transitivePeerDependencies:
- supports-color
@@ -21405,7 +21422,7 @@ packages:
engines: {node: '>= 14'}
dependencies:
agent-base: 7.1.1
debug: 4.3.6
debug: 4.3.7
transitivePeerDependencies:
- supports-color
dev: true
@@ -21453,7 +21470,7 @@ packages:
engines: {node: '>= 6'}
dependencies:
agent-base: 6.0.2
debug: 4.3.6
debug: 4.3.7
transitivePeerDependencies:
- supports-color
@@ -21462,7 +21479,7 @@ packages:
engines: {node: '>= 14'}
dependencies:
agent-base: 7.1.1
debug: 4.3.6
debug: 4.3.7
transitivePeerDependencies:
- supports-color
@@ -22253,7 +22270,7 @@ packages:
resolution: {integrity: sha512-n3s8EwkdFIJCG3BPKBYvskgXGoy88ARzvegkitk60NxRdwltLOTaH7CUiMRXvwYorl0Q712iEjcWB+fK/MrWVw==}
engines: {node: '>=10'}
dependencies:
debug: 4.3.6
debug: 4.3.7
istanbul-lib-coverage: 3.2.2
source-map: 0.6.1
transitivePeerDependencies:
@@ -24739,8 +24756,8 @@ packages:
engines: {node: '>= 0.10.0'}
dev: true
/merge-descriptors@1.0.1:
resolution: {integrity: sha512-cCi6g3/Zr1iqQi6ySbseM1Xvooa98N0w31jzUYrXPX2xqObmFGHJ0tQ5u74H3mVh7wLouTseZyYIq39g8cNp1w==}
/merge-descriptors@1.0.3:
resolution: {integrity: sha512-gaNvAS7TZ897/rVaZ0nMtAyxNyi/pdbjbAwUpFQpN70GqnVfOiXpeUUMKRBmzXaSQ8DdTX4/0ms62r2K+hE6mQ==}
/merge-options@3.0.4:
resolution: {integrity: sha512-2Sug1+knBjkaMsMgf1ctR1Ujx+Ayku4EdJN4Z+C2+JzoeF7A3OZ9KM2GY0CpQS51NR61LTurMJrRKPhSs3ZRTQ==}
@@ -25552,7 +25569,7 @@ packages:
/micromark@2.11.4:
resolution: {integrity: sha512-+WoovN/ppKolQOFIAajxi7Lu9kInbPxFuTBVEavFcL8eAfVstoc5MocPmqBeAdBOJV00uaVjegzH4+MA0DN/uA==}
dependencies:
debug: 4.3.6
debug: 4.3.7
parse-entities: 2.0.0
transitivePeerDependencies:
- supports-color
@@ -25562,7 +25579,7 @@ packages:
resolution: {integrity: sha512-uD66tJj54JLYq0De10AhWycZWGQNUvDI55xPgk2sQM5kn1JYlhbCMTtEeT27+vAhW2FBQxLlOmS3pmA7/2z4aA==}
dependencies:
'@types/debug': 4.1.12
debug: 4.3.6
debug: 4.3.7
decode-named-character-reference: 1.0.2
micromark-core-commonmark: 1.1.0
micromark-factory-space: 1.1.0
@@ -25586,7 +25603,7 @@ packages:
resolution: {integrity: sha512-o/sd0nMof8kYff+TqcDx3VSrgBTcZpSvYcAHIfHhv5VAuNmisCxjhx6YmxS8PFEpb9z5WKWKPdzf0jM23ro3RQ==}
dependencies:
'@types/debug': 4.1.12
debug: 4.3.6
debug: 4.3.7
decode-named-character-reference: 1.0.2
devlop: 1.1.0
micromark-core-commonmark: 2.0.1
@@ -26613,7 +26630,7 @@ packages:
dependencies:
'@tootallnate/quickjs-emscripten': 0.23.0
agent-base: 7.1.1
debug: 4.3.6
debug: 4.3.7
get-uri: 6.0.3
http-proxy-agent: 7.0.2
https-proxy-agent: 7.0.5
@@ -26791,8 +26808,8 @@ packages:
lru-cache: 10.4.3
minipass: 7.1.2
/path-to-regexp@0.1.7:
resolution: {integrity: sha512-5DFkuoqlv1uYQKxy8omFBeJPQcdoE07Kv2sferDCrAq1ohOU+MSDswDIbnx3YAM60qIOnYa53wBhXW0EbMonrQ==}
/path-to-regexp@0.1.10:
resolution: {integrity: sha512-7lf7qcQidTku0Gu3YDPc8DJ1q7OOucfa/BSsIwjuh56VU7katFvuM8hULfkwB3Fns/rsVF7PwPKVw1sl5KQS9w==}
/path-to-regexp@6.2.2:
resolution: {integrity: sha512-GQX3SSMokngb36+whdpRXE+3f9V8UzyAorlYvOGx87ufGHehNTn5lCxrKtLyZ4Yl/wEKnNnr98ZzOwwDZV5ogw==}
@@ -28190,7 +28207,7 @@ packages:
engines: {node: '>= 14'}
dependencies:
agent-base: 7.1.1
debug: 4.3.6
debug: 4.3.7
http-proxy-agent: 7.0.2
https-proxy-agent: 7.0.5
lru-cache: 7.18.3
@@ -28249,7 +28266,7 @@ packages:
dependencies:
'@puppeteer/browsers': 2.3.0
chromium-bidi: 0.6.3(devtools-protocol@0.0.1312386)
debug: 4.3.6
debug: 4.3.7
devtools-protocol: 0.0.1312386
ws: 8.18.0
transitivePeerDependencies:
@@ -30127,8 +30144,8 @@ packages:
engines: {node: '>=10'}
hasBin: true
/send@0.18.0:
resolution: {integrity: sha512-qqWzuOjSFOuqPjFe4NOsMLafToQQwBSOEpS+FwEt3A2V3vKubTquT3vmLTQpFgMXp8AlFWFuP1qKaJZOtPpVXg==}
/send@0.19.0:
resolution: {integrity: sha512-dW41u5VfLXu8SJh5bwRmyYUbAoSB3c9uQh6L8h/KtsFREPWpbX1lrljJo186Jc4nmci/sGUZ9a0a0J2zgfq2hw==}
engines: {node: '>= 0.8.0'}
dependencies:
debug: 2.6.9
@@ -30185,14 +30202,14 @@ packages:
- supports-color
dev: false
/serve-static@1.15.0:
resolution: {integrity: sha512-XGuRDNjXUijsUL0vl6nSD7cwURuzEgglbOaFuZM9g3kwDXOWVTck0jLzjPzGD+TazWbboZYu52/9/XPdUgne9g==}
/serve-static@1.16.0:
resolution: {integrity: sha512-pDLK8zwl2eKaYrs8mrPZBJua4hMplRWJ1tIFksVC3FtBEBnl8dxgeHtsaMS8DhS9i4fLObaon6ABoc4/hQGdPA==}
engines: {node: '>= 0.8.0'}
dependencies:
encodeurl: 1.0.2
escape-html: 1.0.3
parseurl: 1.3.3
send: 0.18.0
send: 0.19.0
transitivePeerDependencies:
- supports-color
@@ -30433,7 +30450,7 @@ packages:
/socket.io-adapter@2.5.5:
resolution: {integrity: sha512-eLDQas5dzPgOWCk9GuuJC2lBqItuhKI4uxGgo9aIV7MYbk2h9Q6uULEh8WBzThoI7l+qU9Ast9fVUmkqPP9wYg==}
dependencies:
debug: 4.3.6
debug: 4.3.7
ws: 8.17.1
transitivePeerDependencies:
- bufferutil
@@ -30446,7 +30463,7 @@ packages:
engines: {node: '>=10.0.0'}
dependencies:
'@socket.io/component-emitter': 3.1.2
debug: 4.3.6
debug: 4.3.7
transitivePeerDependencies:
- supports-color
dev: true
@@ -30458,7 +30475,7 @@ packages:
accepts: 1.3.8
base64id: 2.0.0
cors: 2.8.5
debug: 4.3.6
debug: 4.3.7
engine.io: 6.5.5
socket.io-adapter: 2.5.5
socket.io-parser: 4.2.4
@@ -30481,7 +30498,7 @@ packages:
engines: {node: '>= 14'}
dependencies:
agent-base: 7.1.1
debug: 4.3.6
debug: 4.3.7
socks: 2.8.3
transitivePeerDependencies:
- supports-color
@@ -30615,7 +30632,7 @@ packages:
/spdy-transport@3.0.0:
resolution: {integrity: sha512-hsLVFE5SjA6TCisWeJXFKniGGOpBgMLmerfO2aCyCU5s7nJ/rpAepqmFifv/GCbSbueEeAJJnmSQ2rKC/g8Fcw==}
dependencies:
debug: 4.3.6
debug: 4.3.7
detect-node: 2.1.0
hpack.js: 2.1.6
obuf: 1.1.2
@@ -30629,7 +30646,7 @@ packages:
resolution: {integrity: sha512-r46gZQZQV+Kl9oItvl1JZZqJKGr+oEkB08A6BzkiR7593/7IbtuncXHd2YoYeTsG4157ZssMu9KYvUHLcjcDoA==}
engines: {node: '>=6.0.0'}
dependencies:
debug: 4.3.6
debug: 4.3.7
handle-thing: 2.0.1
http-deceiver: 1.2.7
select-hose: 2.0.0
@@ -31514,7 +31531,7 @@ packages:
pump: 3.0.0
tar-stream: 3.1.7
optionalDependencies:
bare-fs: 2.3.1
bare-fs: 2.3.4
bare-path: 2.1.3
dev: true
@@ -33599,7 +33616,7 @@ packages:
peerDependencies:
eslint: '>=6.0.0'
dependencies:
debug: 4.3.6
debug: 4.3.7
eslint: 8.57.0
eslint-scope: 7.2.2
eslint-visitor-keys: 3.4.3
@@ -33617,7 +33634,7 @@ packages:
peerDependencies:
eslint: '>=6.0.0'
dependencies:
debug: 4.3.6
debug: 4.3.7
eslint: 8.57.0
eslint-scope: 7.2.2
eslint-visitor-keys: 3.4.3
@@ -33942,7 +33959,7 @@ packages:
compression: 1.7.4
connect-history-api-fallback: 2.0.0
default-gateway: 6.0.3
express: 4.19.2
express: 4.20.0
graceful-fs: 4.2.11
html-entities: 2.5.2
http-proxy-middleware: 2.0.6(@types/express@4.17.21)