chore(docs): lint all sections with linter v2 (#31110)
lint(docs): heading capitalization
.github/workflows/docs-lint-v2.yml (vendored)
@@ -45,6 +45,6 @@ jobs:
      run: |
        set -o pipefail
        git diff --name-only origin/$BASE_REF HEAD \
-          | { grep -E "^apps/docs/content/guides/(getting-started|ai|api|auth|database|deployment|functions)/" || test $? = 1; } \
+          | { grep -E "^apps/docs/content/guides/" || test $? = 1; } \
          | xargs -r supa-mdx-lint --format rdf \
          | reviewdog -f=rdjsonl -reporter=github-pr-review
@@ -10,7 +10,7 @@ Attempting to create a second Job with the same name (and case) will overwrite t

</Admonition>

-## Schedule a Job
+## Schedule a job

<Tabs
  scrollable
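For reference, the scheduling call behind this section is pg_cron's `cron.schedule()`. A minimal sketch, with an illustrative job name, schedule, and command (not taken from this commit):

```sql
-- Schedule a job that runs every day at 03:00 UTC.
-- Name, schedule, and command are illustrative examples.
select cron.schedule(
  'nightly-vacuum',     -- unique job name; reusing the name overwrites the job
  '0 3 * * *',          -- standard cron syntax: minute hour day month weekday
  $$ vacuum analyze $$  -- the SQL command to run
);
```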
@@ -84,7 +84,7 @@ You can input seconds for your Job schedule interval as long as you're on Postgr

</Admonition>

-## Edit a Job
+## Edit a job

<Tabs
  scrollable
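As the next hunk's context line notes, a job can also be modified with `cron.schedule()`: re-running it with an existing job name overwrites that job rather than creating a second one. A hedged sketch (values illustrative):

```sql
-- Re-using an existing job name updates the job in place.
select cron.schedule('nightly-vacuum', '0 4 * * *', $$ vacuum analyze $$);
```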
@@ -134,7 +134,7 @@ It is also possible to modify a job by using the `cron.schedule()` function by i

</TabPanel>
</Tabs>

-## Activate/Deactivate a Job
+## Activate/Deactivate a job

<Tabs
  scrollable
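Activation and deactivation use the `cron.alter_job()` call shown in the next hunk's context. A minimal sketch (the job id is illustrative):

```sql
-- Pause job 1 without deleting it; set active := true to resume it.
select cron.alter_job(job_id := 1, active := false);
```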
@@ -175,7 +175,7 @@ select cron.alter_job(

</TabPanel>
</Tabs>

-## Unschedule a Job
+## Unschedule a job

<Tabs
  scrollable
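Removal is done with `cron.unschedule()`; a minimal sketch (job name illustrative):

```sql
-- Permanently remove a job by name (a job id also works).
select cron.unschedule('nightly-vacuum');
```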
@@ -213,7 +213,7 @@ Unscheduling a Job will permanently delete the Job from `cron.job` table but its

</TabPanel>
</Tabs>

-## Inspecting Job Runs
+## Inspecting job runs

<Tabs
  scrollable
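Run history lives in pg_cron's `cron.job_run_details` table; a sketch of a typical inspection query:

```sql
-- Most recent runs first; status is 'succeeded' or 'failed'.
select jobid, status, return_message, start_time, end_time
from cron.job_run_details
order by start_time desc
limit 10;
```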
@@ -328,7 +328,7 @@ This requires the [`pg_net` extension](/docs/guides/database/extensions/pg_net)

</Admonition>

-## Caution: Scheduling System Maintenance
+## Caution: Scheduling system maintenance

Be extremely careful when setting up Jobs for system maintenance tasks as they can have unintended consequences.
@@ -13,6 +13,8 @@ Using OAuth2.0 you can retrieve an access and refresh token that grant your appl

2. In the upper-right section of the page, click **Add application**.
3. Fill in the required details and click **Confirm**.

+{/* supa-mdx-lint-disable-next-line Rule001HeadingCase */}
## Show a "Connect Supabase" button

In your user interface, add a "Connect Supabase" button to kick off the OAuth flow. Follow the design guidelines outlined in our [brand assets](/brand-assets).
@@ -157,7 +159,7 @@ When creating a new project, you can either ask the user to provide a database p

You can configure the user's [custom SMTP settings](https://supabase.com/docs/guides/auth/auth-smtp) using the [`/config/auth` endpoint](https://api.supabase.com/api/v1#/projects%20config/updateV1AuthConfig).

-### Handling Dynamic Redirect URLs
+### Handling dynamic redirect URLs

To handle multiple, dynamically generated redirect URLs within the same OAuth app, you can leverage the `state` query parameter. When starting the OAuth process, include the desired, encoded redirect URL in the `state` parameter.
Once authorization is complete, we send the `state` value back to your app. You can then verify its integrity, decode the redirect URL it contains, and redirect the user to it.
@@ -20,7 +20,7 @@ Vercel Marketplace is currently in Public Alpha. If you encounter any issues or

## Quickstart

-### Via Template
+### Via template

<div className="bg-surface-100 py-4 px-5 border rounded-md not-prose">
  <h5 className="text-foreground">Deploy a Next.js app with Supabase Vercel Storage now</h5>
@@ -34,7 +34,7 @@ Vercel Marketplace is currently in Public Alpha. If you encounter any issues or

Details coming soon.

-### Connecting to Supabase Project
+### Connecting to Supabase project

Supabase Projects created via Vercel Marketplace are automatically synchronized with connected Vercel projects. This synchronization includes setting essential environment variables, such as:
@@ -97,7 +97,7 @@ Develop locally while running the Supabase stack on your machine.

4. View your local Supabase instance at [http://localhost:54323](http://localhost:54323).

-## Local Development
+## Local development

Local development with Supabase allows you to work on your projects in a self-contained environment on your local machine. Working locally has several advantages:
@@ -129,7 +129,7 @@ regexp_contains(event_message, 'hello world\.')

regexp_contains(event_message, 'started host|authenticated')
```

-### and/or/not statements in SQL:
+### `and`/`or`/`not` statements in SQL:

`and`, `or`, and `not` are all native terms in SQL and can be used in conjunction with regular expressions to filter results.
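A sketch of the combination this heading describes, assuming the `edge_logs` source used elsewhere in the Logs Explorer docs (source name and patterns are illustrative):

```sql
-- Match login/signup events while excluding anything mentioning errors.
select timestamp, event_message
from edge_logs
where regexp_contains(event_message, 'login|signup')
  and not regexp_contains(event_message, 'error')
```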
@@ -6,7 +6,7 @@ description: 'Learn what high Swap usage could mean for your Supabase instance a

Learn what high Swap usage means, what can cause it, and how to solve it.

-## What is Swap for?
+## What is swap for?

<Admonition type="tip">

@@ -20,7 +20,7 @@ Swap is a portion of your instance's disk that is reserved for the operating sys

Swap can be used even if your instance has plenty of RAM. If this is the case, do not worry. Your instance might try to "preemptively swap" by swapping background processes to make space for your traffic in RAM.

-### When is high Swap concerning?
+### When is high swap concerning?

High Swap is concerning if your instance is using all of the available RAM (i.e. consistently using more than 75%).

@@ -30,7 +30,7 @@ High Swap usage can affect your database performance. For example, you might see

- **Degraded performance due to swapping regularly between RAM and disk.**
- **Higher Disk I/O due to swapping regularly.**

-## Monitor your Swap
+## Monitor your swap

You can check your Swap usage directly on the Supabase Platform. Navigate to the [**Database** page](https://supabase.com/dashboard/project/_/reports/database) of the **Reports** section.

@@ -45,7 +45,7 @@ Some useful metrics to monitor are (this is not an exhaustive list):

- `node_memory_MemTotal_bytes` and `node_memory_MemFree_bytes` - The total RAM and available RAM.
- `node_vmstat_pswpin` and `node_vmstat_pswpout` - The number of pages that have been swapped in or out (spikes in these metrics mean that your instance is swapping).

-## Common reasons for high Swap usage
+## Common reasons for high swap usage

Everything you do with your Supabase project requires compute. Hence, there can be many reasons for high Swap usage. Here are some common ones:

@@ -55,7 +55,7 @@ Everything you do with your Supabase project requires compute. Hence, there can

- **Workload style:** The usage pattern of your Supabase project might be more read heavy, or involve large amounts of data.
- **Extensions:** You might be using extensions that perform intensive operations on large datasets. This increases resource usage.

-## Solving high Swap usage
+## Solving high swap usage

If you find that your RAM and Swap usage are high, you have three options:
@@ -51,6 +51,6 @@ The request is not completed within the configured time limit.

The timeout limit is set to prevent long-running queries which can cause performance issues, increase latency, and potentially even crash the project.

-#### 546 Edge Functions Resource Limit
+#### 546 Edge Functions resource limit

Applies only to Edge Functions. Function execution was stopped due to a resource limit (`WORKER_LIMIT`). Edge Function logs should indicate which [resource limit](/guides/functions/limits) was exceeded.
@@ -8,7 +8,7 @@ Log drains will send all logs of the Supabase stack to one or more desired desti

You can read about the initial announcement [here](https://supabase.com/blog/log-drains) and vote for your preferred drains in [this discussion](https://github.com/orgs/supabase/discussions/28324?sort=top).

-# Supported Destinations
+# Supported destinations

The following table lists the supported destinations and the required setup configuration:

@@ -19,7 +19,7 @@ The following table lists the supported destinations and the required setup conf

HTTP requests are batched with a max of 250 logs or 1 second intervals, whichever happens first. Logs are compressed via Gzip if the destination supports it.

-## Generic HTTP Endpoint
+## Generic HTTP endpoint

Logs are sent as a POST request with a JSON body. Both HTTP/1 and HTTP/2 protocols are supported.
Custom headers can optionally be configured for all requests.
@@ -132,7 +132,7 @@ Deno.serve(async (req) => {

</Accordion>

-## Datadog Logs
+## Datadog logs

Logs sent to Datadog have the name of the log source set on the `service` field of the event and the source set to `Supabase`. Logs are gzipped before they are sent to Datadog.
@@ -38,7 +38,7 @@ Each Supabase organization must have at least one owner. If your organization ha

Otherwise, you'll need to invite a user as **Owner**, and they need to accept the invitation, or promote an existing organization member to **Owner** before you can leave the organization.

-### Organization Scoped Roles vs Project Scoped Roles
+### Organization scoped roles vs project scoped roles

<Admonition type="note">
@@ -180,7 +180,7 @@ Restoring to a new project is an excellent way to manage environments more effec

### Logical backups

-#### search_path issues
+#### `search_path` issues

During the `pg_restore` process, the `search_path` is set to an empty string for predictability and security. Using unqualified references to functions or relations can cause restorations from logical backups to fail, as the database will not be able to locate the function or relation being referenced. This can happen even if the database runs without issues during normal operation, when the `search_path` is usually set to include several schemas. Therefore, you should always use schema-qualified names within your SQL code.
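To make the `search_path` caveat concrete, a hedged before/after sketch (table, column, and function names are illustrative):

```sql
-- Fragile inside pg_restore, where search_path is empty:
create function get_active_users() returns setof users
language sql as $$ select * from users where active $$;

-- Robust: schema-qualify every reference.
create function public.get_active_users() returns setof public.users
language sql as $$ select * from public.users where active $$;
```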
@@ -94,7 +94,7 @@ Smaller compute instances like Nano, Micro, and Small have baseline performance

If you need consistent disk performance, choose the 4XL or larger compute instance. If you're unsure of how much throughput or IOPS your application requires, you can load test your project and inspect these [metrics in the Dashboard](https://supabase.com/dashboard/project/_/reports). If the `Disk IO % consumed` stat is more than 1%, it indicates that your workload has exceeded the baseline IO throughput during the day. If this metric goes to 100%, the workload has used up all available disk IO budget. Projects that use any disk IO budget are good candidates for upgrading to a larger compute instance with higher throughput.

-### Provisioned Disk throughput and IOPS
+### Provisioned disk throughput and IOPS

The default disk type is gp3, which comes with a baseline throughput of 125 MiB/s and a default IOPS of 3,000. You can provision additional IOPS and throughput from the [Database Settings](/dashboard/project/_/settings/database) page, but keep in mind that the effective IOPS and throughput will be limited by the compute instance size.
@@ -130,7 +130,7 @@ Compute instance size changes will not change your selected disk type or disk si

- General Purpose (gp3) disks come with a baseline of 3,000 IOPS and 125 MiB/s. You can provision an additional 500 IOPS for every GB of disk size and an additional 0.25 MiB/s throughput per provisioned IOPS.
- High Performance (io2) disks can be provisioned with 1,000 IOPS per GB of disk size.

-## Limits and Constraints
+## Limits and constraints

### Postgres replication slots, WAL senders, and connections
@@ -139,7 +139,7 @@ Once you have reclaimed space, you can run the following to disable [read-only](

set default_transaction_read_only = 'off';
```

-### Disk Size Distribution
+### Disk size distribution

You can check the distribution of your disk size on your [project's compute and disk page](/dashboard/_/settings/compute-and-disk).
@@ -12,7 +12,7 @@ Organizations must have a signed BAA with Supabase and have the Health Insurance

</Admonition>

-## Configuring a HIPAA Project
+## Configuring a HIPAA project

When the HIPAA add-on is enabled on an organization, projects within the organization can be configured as _High Compliance_. This configuration can be found in the [General Project Settings page](https://supabase.com/dashboard/project/_/settings) of the dashboard.
Once enabled, additional security checks will be run against the project to ensure the deployed configuration is compliant. These checks are performed on a continual basis and security warnings will appear in the [Security Advisor](https://supabase.com/dashboard/project/_/advisors/security) if a non-compliant setting is detected.
@@ -69,7 +69,7 @@ hash_config {

- `filename.json`: (optional) output filename (defaults to `./users.json`)
- `batchSize`: (optional) number of users to fetch in each batch (defaults to 100)

-### Import JSON users file to Supabase Auth (Postgres: auth.users) [#import-json-users-file]
+### Import JSON users file to Supabase Auth (Postgres: `auth.users`) [#import-json-users-file]

`node import_users.js <path_to_json_file> [<batch_size>]`
@@ -19,7 +19,7 @@ Example:

postgresql://neondb_owner:xxxxxxxxxxxxxxx-random-word-yyyyyyyy.us-west-2.aws.neon.tech/neondb?sslmode=require
```

-## Set your OLD_DB_URL environment variable
+## Set your `OLD_DB_URL` environment variable

Set the **OLD_DB_URL** environment variable at the command line using your Neon database credentials from the clipboard.
@@ -38,7 +38,7 @@ export OLD_DB_URL="postgresql://neondb_owner:xxxxxxxxxxxxxxx-random-word-yyyyyyy

1. Under **Connection string**, select **URI**, make sure **Display connection pooler** is checked, and **Mode: Session** is set.
1. Click the **Copy** button to the right of your connection string to copy it to the clipboard.

-## Set your NEW_DB_URL environment variable
+## Set your `NEW_DB_URL` environment variable

Set the **NEW_DB_URL** environment variable at the command line using your Supabase connection string. You will need to replace `[YOUR-PASSWORD]` with your actual database password.
@@ -26,7 +26,7 @@ Copy this part to your clipboard:

"postgres://default:xxxxxxxxxxxx@yy-yyyyy-yyyyyy-yyyyyyy.us-west-2.aws.neon.tech:5432/verceldb?sslmode=require"
```

-## Set your OLD_DB_URL environment variable
+## Set your `OLD_DB_URL` environment variable

Set the **OLD_DB_URL** environment variable at the command line using your Vercel Postgres Database credentials.
@@ -45,7 +45,7 @@ export OLD_DB_URL="postgres://default:xxxxxxxxxxxx@yy-yyyyy-yyyyyy-yyyyyyy.us-we

1. Under **Connection string**, select **URI**, make sure **Display connection pooler** is checked, and **Mode: Session** is set.
1. Click the **Copy** button to the right of your connection string to copy it to the clipboard.

-## Set your NEW_DB_URL environment variable
+## Set your `NEW_DB_URL` environment variable

Set the **NEW_DB_URL** environment variable at the command line using your Supabase connection string. You will need to replace `[YOUR-PASSWORD]` with your actual database password.
@@ -90,7 +90,7 @@ We only count compute hours for instances that are active. Paused projects do no

| 12XL | $3.836 | ~$2800 |
| 16XL | $5.12 | ~$3730 |

-### Compute Credits
+### Compute credits

Paid plans come with $10 of Compute Credits to cover one Micro instance or parts of any other [Compute Add-On](/docs/guides/platform/compute-add-ons).
@@ -153,7 +153,7 @@ You can see a breakdown of the different types of egress on your [organization u

  />
</div>

-## Disk Size
+## Disk size

We differentiate between database space usage and disk size. Database Space is the actual amount of space used by all your database objects, whereas disk size is the size of the underlying provisioned disk. Each database has a provisioned disk.
@@ -178,7 +178,7 @@ Supabase provides two "Free projects". Each project can run a `Nano` instance fo

We count your total limit of 2 free projects across all organizations you're either an Owner or Administrator of. You could have two Free Plan organizations with one project each, or one Free Plan organization with two projects. Paused projects do not count towards your free project limit.

-## Billing Examples
+## Billing examples

Here are some examples of how billing affects you.
@@ -60,6 +60,8 @@ A Read Replica is deployed by using a physical backup as a starting point, and a

Along with the progress of the deployment, the dashboard displays rough estimates for each component.

+{/* supa-mdx-lint-disable-next-line Rule001HeadingCase */}
### What does it mean when "Init failed" is observed?

The status `Init failed` indicates that the Read Replica has failed to deploy. Some possible reasons why a Read Replica may have failed to deploy:
@@ -25,7 +25,7 @@ Click the _Add_ button then _Enterprise application_.

![Azure AD: Choose new Enterprise application](/docs/img/sso-azure-step-02.png)

-## Step 2: Choose Create your own application [#create-application]
+## Step 2: Choose to create your own application [#create-application]

You'll be using the custom enterprise application setup for Supabase.
@@ -40,7 +40,7 @@ don't find in the gallery (Non-gallery)_.

![Azure AD: Create your own application, name it Supabase](/docs/img/sso-azure-step-03.png)

-## Step 4: Choose the Set up single sign-on option [#set-up-single-sign-on]
+## Step 4: Set up single sign-on [#set-up-single-sign-on]

Before you get to assigning users and groups, which would allow accounts in Azure AD to access Supabase, you need to configure the SAML details that allow Supabase to accept sign-in requests from Azure AD.
@@ -21,7 +21,7 @@ Supabase supports single sign-on (SSO) using Google Workspace (formerly known as

![Google Workspace console: Web and mobile apps option under Apps](/docs/img/sso-gsuite-step-01.png)

-## Step 2: Choose Add custom SAML app [#add-custom-saml-app]
+## Step 2: Choose to add custom SAML app [#add-custom-saml-app]

From the _Add app_ button in the toolbar choose _Add custom SAML app_.
@@ -17,7 +17,7 @@ Looking for docs on how to add Single Sign-On support in your Supabase project?

Supabase supports single sign-on (SSO) using Okta.

-## Step 1: Choose Create App Integration in the Applications dashboard [#create-app-integration]
+## Step 1: Choose to create an app integration in the applications dashboard [#create-app-integration]

Navigate to the Applications dashboard of the Okta admin console. Click _Create App Integration_.
@@ -29,13 +29,13 @@ Supabase supports the SAML 2.0 SSO protocol. Choose it from the _Create a new ap

![Okta dashboard: Create new app integration popup](/docs/img/sso-okta-step-02.png)

-## Step 3: Fill out General Settings [#add-general-settings]
+## Step 3: Fill out general settings [#add-general-settings]

The information you enter here is for visibility into your Okta applications menu. You can choose any values you like. `Supabase` as a name works well for most use cases.

![Okta dashboard: Create SAML integration, first step](/docs/img/sso-okta-step-03.png)

-## Step 4: Fill out SAML Settings [#add-saml-settings]
+## Step 4: Fill out SAML settings [#add-saml-settings]

These settings let Supabase use SAML 2.0 properly with your Okta application. Make sure you enter this information exactly as shown in this table and screenshot.
@@ -51,7 +51,7 @@ These settings let Supabase use SAML 2.0 properly with your Okta application. Ma

![Okta dashboard: Create SAML integration, second step](/docs/img/sso-okta-step-04.png)

-## Step 5: Fill out Attribute Statements [#add-attribute-statements]
+## Step 5: Fill out attribute statements [#add-attribute-statements]

Attribute Statements allow Supabase to get information about your Okta users on each login.
@@ -70,7 +70,7 @@ Breaking changes are generally only present in major version upgrades of Postgre

If you are upgrading from a significantly older version, you will need to consider the release notes for any intermediary releases as well.

-#### Time Limits
+#### Time limits

Starting from 2024-06-24, when a project is paused, users then have a 90-day window to restore the project on the platform from within Supabase Studio.
@@ -100,7 +100,7 @@ If you upgrade to a paid plan while your project is paused, any expired one-clic

  src="/docs/img/guides/platform/paused-paid-tier.png"
/>

-#### Restoring a Downloaded Backup Locally
+#### Restoring a downloaded backup locally

If the 90-day project restore window has expired but you need to access data contained within your project using SQL, you can attempt to restore the project into a local Postgres instance. Supabase publishes tooling that can be used for that purpose. Be aware that this workflow does not produce a complete Supabase environment with REST/Auth/Storage. Instead, it creates a standalone Postgres instance that is maximally compatible with your project's backup file to assist with recovering your data.
@@ -157,7 +157,7 @@ When upgrading, the Supabase platform will "right-size" your disk based on the c

pg_upgrade does not support upgrading of databases containing reg\* data types referencing system OIDs.
If you have created any objects that depend on the following extensions, you will need to recreate them after the upgrade.

-#### pg_cron records
+#### `pg_cron` records

[pg_cron](https://github.com/citusdata/pg_cron#viewing-job-run-details) does not automatically clean up historical records. This can lead to extremely large `cron.job_run_details` tables if the records are not regularly pruned; you should clean unnecessary records from this table prior to an upgrade.
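A hedged sketch of the pre-upgrade pruning this paragraph recommends (the retention window is illustrative):

```sql
-- Delete run history older than 7 days to shrink cron.job_run_details.
delete from cron.job_run_details
where end_time < now() - interval '7 days';
```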
@@ -18,7 +18,7 @@ A pull-based Queue is a Message storage and delivery system where consumers acti

A Message in a Queue is a JSON object that is stored until a consumer explicitly processes and removes it, like a task waiting in a to-do list until someone checks and completes it.

-### Queue Types
+### Queue types

Supabase Queues offers three types of Queues:
@@ -141,7 +141,7 @@ The permissions required for each Queue API database function:

</Admonition>

-### Enqueuing and Dequeuing Messages
+### Enqueuing and dequeuing messages

Once your Queue has been created, you can begin enqueuing and dequeuing Messages.
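Supabase Queues is built on the `pgmq` extension, so the underlying calls look roughly like this sketch (queue name and payload are illustrative assumptions):

```sql
-- Enqueue a JSON message onto the 'tasks' queue.
select pgmq.send('tasks', '{"action": "resize_image", "id": 42}'::jsonb);

-- Read one message, hiding it from other consumers for 60 seconds.
select * from pgmq.read('tasks', 60, 1);
```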
@@ -294,7 +294,7 @@ changes = supabase.channel('schema-db-changes').on_postgres_changes(

The channel name can be any string except 'realtime'.

-### Listening to INSERT events
+### Listening to `INSERT` events

<Tabs
  scrollable
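One SQL-side prerequisite for the `INSERT`/`UPDATE`/`DELETE` listeners in these sections: the table must be part of the `supabase_realtime` publication (table name illustrative):

```sql
-- Broadcast changes on public.todos to Realtime subscribers.
alter publication supabase_realtime add table public.todos;
```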
@@ -387,7 +387,7 @@ changes = supabase.channel('schema-db-changes').on_postgres_changes(

The channel name can be any string except 'realtime'.

-### Listening to UPDATE events
+### Listening to `UPDATE` events

<Tabs
  scrollable
@@ -480,7 +480,7 @@ changes = supabase.channel('schema-db-changes').on_postgres_changes(

The channel name can be any string except 'realtime'.

-### Listening to DELETE events
+### Listening to `DELETE` events

<Tabs
  scrollable
@@ -5,8 +5,6 @@ subtitle: "Learn how to configure and deploy Supabase with Docker."
tocVideo: "FqiQKRKsfZE"
---
-
-
Docker is the easiest way to get started with self-hosted Supabase. It should only take you a few minutes to get up and running. This guide assumes you are running the command from the machine you intend to host from.

## Contents

@@ -16,7 +14,6 @@ Docker is the easiest way to get started with self-hosted Supabase. It should on

4. [Updating your services](#updating-your-services)
5. [Securing your services](#securing-your-services)
-
## Before you begin

You need the following installed in your system: [Git](https://git-scm.com/downloads) and Docker ([Windows](https://docs.docker.com/desktop/install/windows-install/), [MacOS](https://docs.docker.com/desktop/install/mac-install/), or [Linux](https://docs.docker.com/desktop/install/linux-install/)).
@@ -75,16 +72,12 @@ docker compose up -d

</TabPanel>
</Tabs>
-
<Admonition>

If you are using rootless docker, edit `.env` and set `DOCKER_SOCKET_LOCATION` to your docker socket location. For example: `/run/user/1000/docker.sock`. Otherwise, you will see an error like `container supabase-vector exited (0)`.
-
</Admonition>
-
-
After all the services have started you can see them running in the background:

```sh

@@ -93,16 +86,13 @@ docker compose ps

All of the services should have a status `running (healthy)`. If you see a status like `created` but not `running`, try starting that service manually with `docker compose start <service-name>`.
-
<Admonition type="danger">

Your app is now running with default credentials.
Please [secure your services](#securing-your-services) as soon as possible using the instructions below.
-
</Admonition>
-
### Accessing Supabase Studio

You can access Supabase Studio through the API gateway on port `8000`. For example: `http://<your-ip>:8000`, or [localhost:8000](http://localhost:8000) if you are running Docker locally.
@@ -167,7 +157,6 @@ You'll want to update the Studio (Dashboard) frequently to get the latest featur

While we provided you with some example secrets for getting started, you should NEVER deploy your Supabase setup using the defaults we have provided. Please follow all of the steps in this section to ensure you have a secure setup, and then [restart all services](#restarting-all-services) to pick up the changes.
-
### Generate API keys

We need to generate secure keys for accessing your services. We'll use the `JWT Secret` to generate `anon` and `service` API keys using the form below.

@@ -175,7 +164,6 @@ We need to generate secure keys for accessing your services. We'll use the `JWT

2. **Store Securely**: Save the secret in a secure location on your local machine. Don't share this secret publicly or commit it to version control.
3. **Generate a JWT**: Use the form below to generate a new `JWT` using your secret.
-
<JwtGenerator />

### Update API keys
@@ -422,8 +410,7 @@ docker rm supabase-analytics

## Demo

-A minimal setup working on Ubuntu, hosted on Digital Ocean.
+A minimal setup working on Ubuntu, hosted on DigitalOcean.

<div className="video-container">
  <iframe

@@ -434,10 +421,7 @@ A minimal setup working on Ubuntu, hosted on Digital Ocean.

  ></iframe>
</div>

-### Demo using Digital Ocean
-1. A Digital Ocean Droplet with 1 GB memory and 25 GB solid-state drive (SSD) is sufficient to start
+### Demo using DigitalOcean
+1. A DigitalOcean Droplet with 1 GB memory and 25 GB solid-state drive (SSD) is sufficient to start
2. To access the Dashboard, use the ipv4 IP address of your Droplet.
3. If you're unable to access Dashboard, run `docker compose ps` to see if the Studio service is running and healthy.
@@ -6,7 +6,7 @@ subtitle: 'Learn about the Storage error codes and how to resolve them'

sidebar_label: 'Debugging'
---

-## Storage Error Codes
+## Storage error codes

<Admonition type="note">
We are transitioning to a new error code system. For backwards compatibility you'll still be able
@@ -60,7 +60,7 @@ Here is the full list of error codes and their descriptions:

| MissingPart | A part of the entity is missing. | 400 | Ensure all parts of the entity are included in the request before completing the operation. |
| SlowDown | The request rate is too high and has been throttled. | 503 | Reduce the request rate or implement exponential backoff and retry mechanisms to handle throttling. |

-## Legacy Error Codes
+## Legacy error codes

As we are transitioning to a new error code system, you might still see the following error format:
@@ -10,7 +10,7 @@ In this guide, you will learn how to create and use custom roles with Storage to

Supabase Storage uses the same role-based access control system as any other Supabase service using RLS (Row Level Security).

-## Create a Custom Role
+## Create a custom role

Let's create a custom role `manager` to provide full read access to a specific bucket. For a more advanced setup, see the [RBAC Guide](/docs/guides/auth/custom-claims-and-role-based-access-control-rbac#create-auth-hook-to-apply-user-role).
@@ -6,13 +6,13 @@ subtitle: 'Bandwidth & Storage Egress'

sidebar_label: 'Bandwidth & Storage Egress'
---

-## Bandwidth & Storage Egress
+## Bandwidth & Storage egress

Free Plan Organizations in Supabase have a limit of 5 GB of bandwidth. This limit is calculated by the sum of all the data transferred from the Supabase servers to the client. This includes all the data transferred from the database, storage, and functions.

-### Checking Storage Egress Requests in Log Explorer:
+### Checking Storage egress requests in Logs Explorer:

-We have a template query that you can use to get the number of requests for each object in [Log Explorer](/dashboard/project/_/logs/explorer/templates).
+We have a template query that you can use to get the number of requests for each object in [Logs Explorer](/dashboard/project/_/logs/explorer/templates).

```sql
select
@@ -45,7 +45,7 @@ Example of the output:

]
```

-### Calculating Egress:
+### Calculating egress:

If you already know the size of those files, you can calculate the egress by multiplying the number of requests by the size of the file.
You can also get the size of the file with the following cURL:
@@ -67,6 +67,6 @@ Total Egress = 395.76MB

You can see that these values can get quite large, so it's important to keep track of the egress and optimize the files.

-### Optimizing Egress:
+### Optimizing egress:

If you are on the Pro Plan, you can use the [Supabase Image Transformations](/docs/guides/storage/image-transformations) to optimize the images and reduce the egress.
@@ -4,6 +4,7 @@
# Can also specify a regex that is compatible with the [Rust regex crate](https://docs.rs/regex/latest/regex/).
may_uppercase = [
  "[A-Z0-9]{2,5}s?",
  "Add-ons?",
  "Amazon RDS",
  "APIs",
  "Analytics",
@@ -30,7 +31,10 @@ may_uppercase = [
  "Cloudflare Workers?",
  "Code Exchange",
  "Colab",
  "Compute",
  "Compute Hours",
  "Content Delivery Network",
  "Cron",
  "Cron Jobs?",
  "Data API",
  "DataDog",
@@ -45,10 +49,12 @@ may_uppercase = [
  "Docker",
  "Drizzle",
  "Edge Functions?",
  "Enterprise",
  "Enterprise Plan",
  "Expo",
  "Facebook",
  "Facebook Developers?",
  "Fair Use Policy",
  "Figma",
  "Figma Developers?",
  "Firebase",
@@ -57,12 +63,14 @@ may_uppercase = [
  "Flutter",
  "Functions?",
  "Free Plan",
  "Frequently Asked Questions",
  "Git",
  "GitHub",
  "GitHub Actions",
  "GitLab",
  "GoTrue",
  "Google",
  "Google Workspace",
  "Grafana",
  "GraphQL",
  "Heroku",
@@ -98,6 +106,7 @@ may_uppercase = [
  "Llamafile",
  "Logs Explorer",
  "Magic Link",
  "Management API",
  "Mixpeek",
  "Mixpeek Embed",
  "MySQL",
@@ -126,10 +135,12 @@ may_uppercase = [
  "Presence",
  "Prometheus",
  "Python",
  "Queues?",
  "Query Performance",
  "React",
  "React Email",
  "React Native",
  "Read Replicas?",
  "Reciprocal Ranked Fusion",
  "Redis",
  "RedwoodJS",
@@ -146,7 +157,9 @@ may_uppercase = [
  "Single Sign-On",
  "Slack",
  "Slack Developers?",
  "Social Login",
  "SolidJS",
  "Spend Cap",
  "Spotify",
  "Spotify Developers?",
  "Sqitch",