chore(docs): updated README.md and CONTRIBUTING.md (#3587)

Author: David Barroso
Date: 2025-10-13 09:04:28 +02:00
Committed by: GitHub
Parent: 5b809f6ac9
Commit: 0c8e5ac55f
34 changed files with 584 additions and 1118 deletions


@@ -56,7 +56,7 @@ jobs:
PATH: nixops
GIT_REF: ${{ github.sha }}
VERSION: 0.0.0-dev # we use a fixed version here to avoid unnecessary rebuilds
DOCKER: false
DOCKER: true
secrets:
AWS_ACCOUNT_ID: ${{ secrets.AWS_PRODUCTION_CORE_ACCOUNT_ID }}
NIX_CACHE_PUB_KEY: ${{ secrets.NIX_CACHE_PUB_KEY }}


@@ -0,0 +1,35 @@
---
name: "nixops: release"
on:
push:
branches:
- main
paths:
- 'flake.lock'
- 'nixops/project.nix'
jobs:
build_artifacts:
uses: ./.github/workflows/wf_build_artifacts.yaml
with:
NAME: nixops
PATH: nixops
GIT_REF: ${{ inputs.GIT_REF }}
VERSION: latest
DOCKER: true
secrets:
AWS_ACCOUNT_ID: ${{ secrets.AWS_PRODUCTION_CORE_ACCOUNT_ID }}
NIX_CACHE_PUB_KEY: ${{ secrets.NIX_CACHE_PUB_KEY }}
NIX_CACHE_PRIV_KEY: ${{ secrets.NIX_CACHE_PRIV_KEY }}
push-docker:
uses: ./.github/workflows/wf_docker_push_image.yaml
needs:
- build_artifacts
with:
NAME: nixops
PATH: nixops
VERSION: latest
secrets:
DOCKER_USERNAME: ${{ secrets.DOCKER_USERNAME }}
DOCKER_PASSWORD: ${{ secrets.DOCKER_PASSWORD }}


@@ -24,28 +24,20 @@ If you find an Issue that addresses the problem you're having, please add your r
### Pull Requests
Please have a look at our [developers guide](https://github.com/nhost/nhost/blob/main/DEVELOPERS.md) to start coding!
PRs to our libraries are always welcome and can be a quick way to get your fix or improvement slated for the next release. In general, PRs should:
- Only fix/add the functionality in question **OR** address widespread whitespace/style issues, not both.
- Add unit or integration tests for fixed or changed functionality (if a test suite exists).
- Address a single concern in the fewest changed lines possible.
- Include documentation in the repo or on our [docs site](https://docs.nhost.io).
- Be accompanied by a complete Pull Request template (loaded automatically when a PR is created).
For changes that address core functionality or require breaking changes (e.g., a major release), it's best to open an Issue to discuss your proposal first. This is not required but can save time creating and reviewing changes.
In general, we follow the ["fork-and-pull" Git workflow](https://github.com/susam/gitpr).
## Monorepo Structure
This repository is a monorepo that contains multiple packages and applications. The structure is as follows:
- `cli` - The Nhost CLI
- `dashboard` - The Nhost Dashboard
- `docs` - Documentation
- `examples` - Various example projects
- `packages/nhost-js` - The Nhost JavaScript/TypeScript SDK
- `services/auth` - Nhost Authentication service
- `services/storage` - Nhost Storage service
- `tools/codegen` - Internal code generation tool to build the SDK
- `tools/mintlify-openapi` - Internal tool to generate reference documentation for Mintlify from an OpenAPI spec.
1. Fork the repository to your own Github account
2. Clone the project to your machine
3. Create a branch locally with a succinct but descriptive name. All changes should be part of a branch and submitted as a pull request - your branches should be prefixed with one of:
- `bug/` for bug fixes
- `feat/` for features
- `chore/` for configuration changes
- `docs/` for documentation changes
4. Commit changes to the branch
5. Follow any formatting and testing guidelines specific to this repo
6. Push changes to your fork
7. Open a PR in our repository and follow the PR template to review the changes efficiently.
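The steps above can be rehearsed locally before touching GitHub; a minimal sketch, where the repository names and the branch name are illustrative:

```shell
# Local dry run of the fork-and-pull flow (names are illustrative)
git init --quiet --bare upstream.git            # stands in for your fork on GitHub
git clone --quiet upstream.git work
cd work
git checkout -q -b docs/fix-typo                # prefixed branch name, as above
git -c user.name=dev -c user.email=dev@example.com \
    commit --allow-empty -qm "docs: fix typo in README"
git push -q origin docs/fix-typo                # then open a PR from this branch
```

Pushing the prefixed branch to your fork is what makes it selectable as the source branch when opening the PR.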
For details about those projects and how to contribute, please refer to their respective `README.md` and `CONTRIBUTING.md` files.


@@ -1,100 +0,0 @@
# Developer Guide
## Requirements
### Node.js v20 or later
### [pnpm](https://pnpm.io/) package manager
The easiest way to install `pnpm` if it's not installed on your machine yet is to use `npm`:
```sh
$ npm install -g pnpm
```
### [Nhost CLI](https://docs.nhost.io/platform/cli/local-development)
- The CLI is primarily used for running the E2E tests
- Please refer to the [installation guide](https://docs.nhost.io/platform/cli/local-development) if you have not installed it yet
## File Structure
The repository is organized as a monorepo, with the following structure (only relevant folders are shown):
```
assets/ # Assets used in the README
config/ # Configuration files for the monorepo
dashboard/ # Dashboard
docs/ # Documentation website
examples/ # Example projects
packages/ # Core packages
integrations/ # These are packages that rely on the core packages
```
## Get started
### Installation
First, clone this repository:
```sh
git clone https://github.com/nhost/nhost
```
Then, install the dependencies with `pnpm`:
```sh
$ cd nhost
$ pnpm install
```
### Development
Although package references are correctly updated on the fly for TypeScript, example projects and the dashboard won't see the changes because they depend on the build output. To fix this, you can run packages in development mode.
Running packages in development mode from the root folder is as simple as:
```sh
$ pnpm dev
```
Our packages are linked together using [PNPM's workspace](https://pnpm.io/workspaces) feature. Next.js and Vite automatically detect changes in the dependencies and rebuild everything, so the changes will be reflected in the examples and the dashboard.
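The linking described above is driven by `pnpm-workspace.yaml` at the repository root; a minimal sketch of what such a file can look like (the globs are assumed from the folder structure above, not copied from the repo):

```yaml
packages:
  - 'packages/*'
  - 'integrations/*'
  - 'dashboard'
  - 'examples/*'
```

Any folder matched by these globs can then depend on another workspace package via the `workspace:` protocol instead of a published version.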
**Note:** Next.js or Vite may throw an error when you run `pnpm dev`. Restarting the process should fix it.
### Use Examples
Examples are a great way to test your changes in practice. Make sure you have `pnpm dev` running in your terminal and then run an example.
Let's follow the instructions to run the [react-apollo example](https://github.com/nhost/nhost/blob/main/examples/react-apollo/README.md).
## Edit Documentation
The easiest way to contribute to our documentation is to go to the `docs` folder and follow the [instructions to start local development](https://github.com/nhost/nhost/blob/main/docs/README.md):
```sh
$ cd docs
# not necessary if you've already done this step somewhere in the repository
$ pnpm install
$ pnpm start
```
## Run Test Suites
### Unit Tests
You can run the unit tests with the following command from the repository root:
```sh
$ pnpm test
```
### E2E Tests
Each package that defines end-to-end tests embeds its own Nhost configuration, which is applied automatically when the tests run. As a result, make sure the Nhost CLI is not already running before you run the tests.
You can run the e2e tests with the following command from the repository root:
```sh
$ pnpm e2e
```

Makefile (new file)

@@ -0,0 +1,16 @@
.PHONY: envrc-install
envrc-install: ## Copy envrc.sample to all project folders
@for f in $$(find . -name "project.nix"); do \
echo "Copying envrc.sample to $$(dirname $$f)/.envrc"; \
cp ./envrc.sample $$(dirname $$f)/.envrc; \
done
.PHONY: nixops-container-env
nixops-container-env: ## Enter a NixOS container environment
docker run \
-it \
-v /var/run/docker.sock:/var/run/docker.sock \
-v ./:/build \
-w /build \
nixops:0.0.0-dev \
bash


@@ -12,7 +12,7 @@
<span>&nbsp;&nbsp;•&nbsp;&nbsp;</span>
<a href="https://nhost.io/blog">Blog</a>
<span>&nbsp;&nbsp;•&nbsp;&nbsp;</span>
<a href="https://twitter.com/nhost">Twitter</a>
<a href="https://x.com/nhost">X</a>
<span>&nbsp;&nbsp;•&nbsp;&nbsp;</span>
<a href="https://nhost.io/discord">Discord</a>
<span>&nbsp;&nbsp;•&nbsp;&nbsp;</span>
@@ -36,7 +36,7 @@ Nhost consists of open source software:
- Authentication: [Auth](https://github.com/nhost/nhost/tree/main/services/auth)
- Storage: [Storage](https://github.com/nhost/nhost/tree/main/services/storage)
- Serverless Functions: Node.js (JavaScript and TypeScript)
- [Nhost CLI](https://docs.nhost.io/platform/cli/local-development) for local development
- [Nhost CLI](https://github.com/nhost/nhost/tree/main/cli) for local development
## Architecture of Nhost
@@ -107,7 +107,6 @@ Nhost is frontend agnostic, which means Nhost works with all frontend frameworks
# Resources
- Start developing locally with the [Nhost CLI](https://docs.nhost.io/platform/cli/local-development)
## Nhost Clients
- [JavaScript/TypeScript](https://docs.nhost.io/reference/javascript/nhost-js/main)


@@ -54,6 +54,11 @@ get-version: ## Return version
@echo $(VERSION)
.PHONY: develop
develop: ## Start a nix develop shell
nix develop .\#$(NAME)
.PHONY: _check-pre
_check-pre: ## Pre-checks before running nix flake check
@@ -105,6 +110,11 @@ build-docker-image: ## Build docker container for native architecture
skopeo copy --insecure-policy dir:./result docker-daemon:$(NAME):$(VERSION)
.PHONY: build-docker-image-import-bare
build-docker-image-import-bare: ## Import the already-built image in ./result into the docker daemon
skopeo copy --insecure-policy dir:./result docker-daemon:$(NAME):$(VERSION)
.PHONY: dev-env-up
dev-env-up: _dev-env-build _dev-env-up ## Starts development environment

cli/CONTRIBUTING.md (new file)

@@ -0,0 +1,84 @@
# Developer Guide
## Requirements
We use nix to manage the development environment, the build process, and test runs.
### With Nix (Recommended)
Run `nix develop .#cli` to get a complete development environment.
### Without Nix
Check `project.nix` (`checkDeps`, `buildInputs`, `nativeBuildInputs`) for the dependencies to install manually. Alternatively, you can run `make nixops-container-env` in the root of the repository to enter a Docker container with nix and all dependencies pre-installed (note: it is a large image).
## Development Workflow
### Running Tests
**With Nix:**
```bash
make dev-env-up
make check
```
**Without Nix:**
```bash
# Start development environment
make dev-env-up
# Lint Go code
golangci-lint run ./...
# Run tests
go test -v ./...
```
### Formatting
Format code before committing:
```bash
golines -w --base-formatter=gofumpt .
```
## Building
### Local Build
Build the project (output in `./result`):
```bash
make build
```
### Docker Image
Build and import Docker image with skopeo:
```bash
make build-docker-image
```
If you run the command above inside the dockerized nixops-container-env and you get an error like:
```
FATA[0000] writing blob: io: read/write on closed pipe
```
then you need to run the following command outside of the container (needs skopeo installed on the host):
```bash
cd cli
make build-docker-image-import-bare
```
### Multi-Platform Builds
Build for multiple platforms (Darwin/Linux, ARM64/AMD64):
```bash
make build-multiplatform
```
This produces binaries for:
- darwin/arm64
- darwin/amd64
- linux/arm64
- linux/amd64


@@ -147,6 +147,22 @@
"/products/auth/idtokens"
]
},
{
"group": "Workflows",
"icon": "diagram-project",
"pages": [
"/products/auth/workflows/email-password",
"/products/auth/workflows/oauth-providers",
"/products/auth/workflows/passwordless-email",
"/products/auth/workflows/passwordless-sms",
"/products/auth/workflows/webauthn",
"/products/auth/workflows/anonymous-users",
"/products/auth/workflows/change-email",
"/products/auth/workflows/change-password",
"/products/auth/workflows/reset-password",
"/products/auth/workflows/refresh-token"
]
},
{
"group": "Security",
"icon": "shield",


@@ -1,5 +1,3 @@
# Anonymous Users
## Sign-in anonymously
```mermaid


@@ -1,5 +1,3 @@
# Change email
```mermaid
sequenceDiagram
autonumber


@@ -1,5 +1,3 @@
# Change password
```mermaid
sequenceDiagram
autonumber


@@ -1,5 +1,3 @@
# Sign up and sign in users with email and password
## Sign up
```mermaid


@@ -1,5 +1,3 @@
# OAuth social providers
```mermaid
sequenceDiagram
autonumber


@@ -1,5 +1,3 @@
# Passwordless with emails (magic links)
```mermaid
sequenceDiagram
autonumber


@@ -1,5 +1,3 @@
# Passwordless with SMS
```mermaid
sequenceDiagram
autonumber
@@ -23,4 +21,4 @@ sequenceDiagram
## Test phone numbers
The environment variable `AUTH_SMS_TEST_PHONE_NUMBERS` can be set to a comma-separated list of test phone numbers. When sign-in is invoked, the SMS message with the verification code will be available in the logs. This way you can also test your SMS templates.
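For example, in the service environment (the phone numbers are illustrative placeholders):

```shell
# Sign-ins for these numbers surface the verification code in the service logs
export AUTH_SMS_TEST_PHONE_NUMBERS="+15555550100,+15555550101"
```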


@@ -1,5 +1,3 @@
# Refresh tokens
```mermaid
sequenceDiagram
autonumber


@@ -1,5 +1,3 @@
# Reset password
```mermaid
sequenceDiagram
autonumber


@@ -1,5 +1,3 @@
# Security Keys with WebAuthn
Auth implements the WebAuthn protocol to sign in with security keys, also referred to as authenticators in the WebAuthn protocol.
A user first needs to sign up with another method, for instance email+password, passwordless email, or OAuth, and then add their security key to their account.

envrc.sample (new file)

@@ -0,0 +1,6 @@
watch_file ../../flake.nix ../../nix/*.nix project.nix ./.envrc.custom
use flake .\#$(basename $PWD)
if [[ -f .envrc.custom ]]; then
. ./.envrc.custom
fi
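The `.envrc.custom` hook above lets a developer layer machine-local settings on top of the tracked environment; a hypothetical example (both variables are made up for illustration):

```sh
# .envrc.custom — sourced by .envrc when present; keep it out of version control
export AWS_PROFILE=nhost-dev
export NHOST_LOG_LEVEL=debug
```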


@@ -61,7 +61,7 @@
};
nixopsf = import ./nixops/project.nix {
inherit self pkgs nix-filter nixops-lib;
inherit self pkgs nix2containerPkgs nix-filter nixops-lib;
};
storagef = import ./services/storage/project.nix {
@@ -185,6 +185,7 @@
mintlify-openapi = mintlify-openapif.package;
nhost-js = nhost-jsf.package;
nixops = nixopsf.package;
nixops-docker-image = nixopsf.dockerImage;
storage = storagef.package;
storage-docker-image = storagef.dockerImage;
clamav-docker-image = storagef.clamav-docker-image;


@@ -1,7 +1,8 @@
{ self, pkgs, nix-filter, nixops-lib }:
{ self, pkgs, nix2containerPkgs, nix-filter, nixops-lib }:
let
name = "nixops";
version = "0.0.0-dev";
created = "1970-01-01T00:00:00Z";
submodule = "${name}";
src = nix-filter.lib.filter {
@@ -34,18 +35,65 @@ let
gqlgenc
oapi-codegen
nhost-cli
gofumpt
golines
skopeo
postgresql_14_18-client
postgresql_15_13-client
postgresql_16_9-client
postgresql_17_5-client
postgresql_14_18
postgresql_15_13
postgresql_16_9
postgresql_17_5
sqlc
vacuum-go
bun
clang
pkg-config
];
nativeBuildInputs = [ ];
user = "user";
group = "user";
uid = "1000";
gid = "1000";
l = pkgs.lib // builtins;
mkUser = pkgs.runCommand "mkUser" { } ''
mkdir -p $out/etc/pam.d
echo "${user}:x:${uid}:${gid}::" > $out/etc/passwd
echo "${user}:!x:::::::" > $out/etc/shadow
echo "${group}:x:${gid}:" > $out/etc/group
echo "${group}:x::" > $out/etc/gshadow
cat > $out/etc/pam.d/other <<EOF
account sufficient pam_unix.so
auth sufficient pam_rootok.so
password requisite pam_unix.so nullok sha512
session required pam_unix.so
EOF
touch $out/etc/login.defs
mkdir -p $out/home/${user}
'';
tmpFolder = (pkgs.writeTextFile {
name = "tmp-file";
text = ''
dummy file to generate tmpdir
'';
destination = "/tmp/tmp-file";
});
nixConfig = pkgs.writeTextFile {
name = "nix-config";
text = ''
sandbox = false
sandbox-fallback = false
experimental-features = nix-command flakes
trusted-users = root ${user}
'';
destination = "/etc/nix/nix.conf";
};
in
{
check = nixops-lib.nix.check { inherit src; };
@@ -66,5 +114,78 @@ in
cp -r ${src} $out/
'';
};
}
dockerImage = pkgs.runCommand "image-as-dir" { } ''
${(nix2containerPkgs.nix2container.buildImage {
inherit name created;
tag = version;
maxLayers = 100;
initializeNixDatabase = true;
nixUid = l.toInt uid;
nixGid = l.toInt gid;
copyToRoot = [
(pkgs.buildEnv {
name = "image";
paths = [
(pkgs.buildEnv {
name = "root";
paths = with pkgs; [
coreutils
nix
bash
gnugrep
gnumake
];
pathsToLink = "/bin";
})
];
})
nixConfig
tmpFolder
mkUser
];
perms = [
{
path = mkUser;
regex = "/home/${user}";
mode = "0744";
uid = l.toInt uid;
gid = l.toInt gid;
uname = user;
gname = group;
}
{
path = tmpFolder;
regex = "/tmp";
mode = "0777";
uid = l.toInt uid;
gid = l.toInt gid;
uname = user;
gname = group;
}
];
config = {
User = "user";
WorkingDir = "/home/user";
Env = [
"NIX_PAGER=cat"
"USER=nobody"
"HOME=/home/user"
"TMPDIR=/tmp"
"SSL_CERT_FILE=${pkgs.cacert}/etc/ssl/certs/ca-bundle.crt"
];
};
layers = [
(nix2containerPkgs.nix2container.buildLayer {
deps = buildInputs;
})
];
}).copyTo}/bin/copy-to dir:$out
'';
}


@@ -0,0 +1,77 @@
# Developer Guide
## Requirements
We use nix to manage the development environment, the build process, and test runs.
### With Nix (Recommended)
Run `nix develop .#nhost-js` to get a complete development environment.
### Without Nix
Check `project.nix` (`checkDeps`, `buildInputs`, `nativeBuildInputs`) for the dependencies to install manually. Alternatively, you can run `make nixops-container-env` in the root of the repository to enter a Docker container with nix and all dependencies pre-installed (note: it is a large image).
## Development Workflow
### Running Tests
**With Nix:**
```bash
make dev-env-up
make check
```
**Without Nix:**
```bash
make dev-env-up
pnpm install
pnpm test
```
### Formatting
Format code before committing:
```bash
pnpm format
```
### Code Generation
Generate TypeScript clients from OpenAPI specs:
```bash
pnpm generate
```
This runs `./gen.sh` which generates code from:
- `services/auth/docs/openapi.yaml` - Auth service API
- `services/storage/controller/openapi.yaml` - Storage service API
## Building
### Build for Distribution
```bash
pnpm build
```
This produces:
- TypeScript type definitions
- ESM bundles (`.es.js`)
- CommonJS bundles (`.cjs.js`)
- UMD bundles for browser usage
Output is placed in the `dist/` directory.
## Development Notes
### Code Generation
The code generation script (`gen.sh`) reads OpenAPI specifications from the auth and storage services and generates TypeScript clients. Always regenerate after API changes.
### Dependencies
This package has minimal runtime dependencies to keep bundle size small. Only `tslib` is included as a production dependency.
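In `package.json` terms, that means the production dependency section stays this small (the version range is illustrative):

```json
{
  "dependencies": {
    "tslib": "^2.0.0"
  }
}
```

Everything else (build tooling, test runners, generators) belongs under `devDependencies` and never ships to consumers.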


@@ -0,0 +1,84 @@
# Developer Guide
## Requirements
We use nix to manage the development environment, the build process, and test runs.
### With Nix (Recommended)
Run `nix develop .#auth` to get a complete development environment.
### Without Nix
Check `project.nix` (`checkDeps`, `buildInputs`, `nativeBuildInputs`) for the dependencies to install manually. Alternatively, you can run `make nixops-container-env` in the root of the repository to enter a Docker container with nix and all dependencies pre-installed (note: it is a large image).
## Development Workflow
### Running Tests
**With Nix:**
```bash
make dev-env-up
make check
```
**Without Nix:**
```bash
# Start development environment
make dev-env-up
# Lint OpenAPI spec
vacuum lint \
-dqb -n info \
--ruleset vacuum.yaml \
docs/openapi.yaml
# Generate code
go generate ./...
# Lint Go code
golangci-lint run ./...
# Run tests
go test -v ./...
# Run e2e tests
bun install
bun test
```
### Formatting
Format code before committing:
```bash
golines -w --base-formatter=gofumpt .
```
## Building
### Local Build
Build the project (output in `./result`):
```bash
make build
```
### Docker Image
Build and import Docker image with skopeo:
```bash
make build-docker-image
```
If you run the command above inside the dockerized nixops-container-env and you get an error like:
```
FATA[0000] writing blob: io: read/write on closed pipe
```
then you need to run the following command outside of the container (needs skopeo installed on the host):
```bash
cd services/auth
make build-docker-image-import-bare
```


@@ -11,69 +11,16 @@
## Sign in methods
- [**Email and Password**](./docs/workflows/email-password.md) - simple email and password method.
- [**Email**](./docs/workflows/passwordless-email.md) - also called **passwordless email** or **magic link**.
- [**SMS**](./docs/workflows/passwordless-sms.md) - also called **passwordless SMS**.
- [**Anonymous**](./docs/workflows/anonymous-users.md) - sign in users without any method. Anonymous users can be
- **Email and Password** - simple email and password method.
- **Email** - also called **passwordless email** or **magic link**.
- **SMS** - also called **passwordless SMS**.
- **Anonymous** - sign in users without any method. Anonymous users can be
converted to _regular_ users.
- [**OAuth providers**](./docs/workflows/oauth-providers.md): Facebook, Google, GitHub, Twitter, Apple, Azure AD, LinkedIn, Windows Live, Spotify, Strava, GitLab, BitBucket, Discord, WorkOS.
- [**Security keys with WebAuthn**](./docs/workflows/webauthn.md)
- **OAuth providers**: Facebook, Google, GitHub, Twitter, Apple, Azure AD, LinkedIn, Windows Live, Spotify, Strava, GitLab, BitBucket, Discord, WorkOS.
- **Security keys with WebAuthn**
- Others...
## Deploy Auth in Seconds
Use [Nhost](https://nhost.io) to start using Hasura Auth in seconds.
## Documentation
### Using Docker-compose
```sh
git clone https://github.com/nhost/nhost.git
cd nhost/services/auth/build/docker-compose
docker compose up
```
## Configuration
Read our [configuration guide](./docs/configuration.md) to customise the Hasura Auth settings.
## Workflows
- [Email and password](./docs/workflows/email-password.md)
- [Oauth social providers](./docs/workflows/oauth-providers.md)
- [Passwordless with emails (magic links)](./docs/workflows/passwordless-email.md)
- [Passwordless with SMS](./docs/workflows/passwordless-sms.md)
- [Anonymous users](./docs/workflows/anonymous-users.md)
- [Change email](./docs/workflows/change-email.md)
- [Change password](./docs/workflows/change-password.md)
- [Reset password](./docs/workflows/reset-password.md)
- [Refresh tokens](./docs/workflows/refresh-token.md)
- [Security keys with WebAuthn](./docs/workflows/webauthn.md)
## JWT Signing
The JWT tokens can be signed with either a symmetric key based on `HMAC-SHA` or with asymmetric keys based on `RSA`. To configure the JWT signing method, set the environment variable `HASURA_GRAPHQL_JWT_SECRET` which should follow the same format as [Hasura](https://hasura.io/docs/latest/graphql/core/auth/authentication/jwt.html#running-with-jwt) with a few considerations:
1. Only `HS` and `RS` algorithms are supported.
2. If using `RS` algorithm, the public key should be in PEM format.
3. If using `RS` algorithm, the private key should be in PKCS#8 format inside an extra field `signing_key`.
4. If using `RS` algorithm, an additional field `kid` can be added to specify the key id in the JWK Set.
When using asymmetric keys, you can get the JWK Set from the endpoint `.well-known/jwks.json`.
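As an illustration of the considerations above, an `RS` secret might look like the following sketch (key material elided, `kid` value made up; consult Hasura's JWT documentation for the authoritative format):

```json
{
  "type": "RS256",
  "key": "-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----",
  "signing_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----",
  "kid": "my-key-id"
}
```

The `key` (public, PEM) is what Hasura uses to verify tokens; the extra `signing_key` (private, PKCS#8) is what Auth uses to sign them.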
## Recipes
- Extending Hasura's permissions with [Custom JWT claims](./docs/recipes/custom-hasura-claims.md)
- [Extending the user schema](./docs/recipes/extending-user-schema.md)
## Reference
- CLI options and configuration available in the [CLI documentation](./docs/cli.md).
- The service comes with an [OpenAPI definition](./docs/openapi.yaml) which you can also see [online](https://editor.swagger.io/?url=https://raw.githubusercontent.com/nhost/hasura-auth/main/docs/openapi.yaml).
- [Database Schema](./docs/schema.md)
## Show your support
Give a ⭐️ if this project helped you!
## 📝 License
This project is MIT licensed.
- [Official Documentation](https://docs.nhost.io/products/auth/overview).
- [OpenAPI schema](https://docs.nhost.io/reference/auth/get--well-known-jwks-json)


@@ -1,490 +0,0 @@
# NAME
auth - Nhost Auth API server
# SYNOPSIS
auth
```
[--access-tokens-expires-in]=[value]
[--allow-redirect-urls]=[value]
[--allowed-email-domains]=[value]
[--allowed-emails]=[value]
[--allowed-locales]=[value]
[--api-prefix]=[value]
[--apple-audience]=[value]
[--apple-client-id]=[value]
[--apple-enabled]
[--apple-key-id]=[value]
[--apple-private-key]=[value]
[--apple-scope]=[value]
[--apple-team-id]=[value]
[--azuread-client-id]=[value]
[--azuread-client-secret]=[value]
[--azuread-enabled]
[--azuread-scope]=[value]
[--azuread-tenant]=[value]
[--bitbucket-client-id]=[value]
[--bitbucket-client-secret]=[value]
[--bitbucket-enabled]
[--bitbucket-scope]=[value]
[--block-email-domains]=[value]
[--block-emails]=[value]
[--client-url]=[value]
[--conceal-errors]
[--custom-claims-defaults]=[value]
[--custom-claims]=[value]
[--debug]
[--default-allowed-roles]=[value]
[--default-locale]=[value]
[--default-role]=[value]
[--disable-new-users]
[--disable-signup]
[--discord-client-id]=[value]
[--discord-client-secret]=[value]
[--discord-enabled]
[--discord-scope]=[value]
[--email-passwordless-enabled]
[--email-verification-required]
[--enable-anonymous-users]
[--enable-change-env]
[--entraid-client-id]=[value]
[--entraid-client-secret]=[value]
[--entraid-enabled]
[--entraid-scope]=[value]
[--entraid-tenant]=[value]
[--facebook-client-id]=[value]
[--facebook-client-secret]=[value]
[--facebook-enabled]
[--facebook-scope]=[value]
[--github-authorization-url]=[value]
[--github-client-id]=[value]
[--github-client-secret]=[value]
[--github-enabled]
[--github-scope]=[value]
[--github-token-url]=[value]
[--github-user-profile-url]=[value]
[--gitlab-client-id]=[value]
[--gitlab-client-secret]=[value]
[--gitlab-enabled]
[--gitlab-scope]=[value]
[--google-audience]=[value]
[--google-client-id]=[value]
[--google-client-secret]=[value]
[--google-enabled]
[--google-scope]=[value]
[--graphql-url]=[value]
[--gravatar-default]=[value]
[--gravatar-enabled]
[--gravatar-rating]=[value]
[--hasura-admin-secret]=[value]
[--hasura-graphql-jwt-secret]=[value]
[--help|-h]
[--linkedin-client-id]=[value]
[--linkedin-client-secret]=[value]
[--linkedin-enabled]
[--linkedin-scope]=[value]
[--log-format-text]
[--mfa-enabled]
[--mfa-totp-issuer]=[value]
[--otp-email-enabled]
[--password-hibp-enabled]
[--password-min-length]=[value]
[--port]=[value]
[--postgres-migrations]=[value]
[--postgres]=[value]
[--rate-limit-brute-force-burst]=[value]
[--rate-limit-brute-force-interval]=[value]
[--rate-limit-email-burst]=[value]
[--rate-limit-email-interval]=[value]
[--rate-limit-email-is-global]
[--rate-limit-enable]
[--rate-limit-global-burst]=[value]
[--rate-limit-global-interval]=[value]
[--rate-limit-memcache-prefix]=[value]
[--rate-limit-memcache-server]=[value]
[--rate-limit-signups-burst]=[value]
[--rate-limit-signups-interval]=[value]
[--rate-limit-sms-burst]=[value]
[--rate-limit-sms-interval]=[value]
[--refresh-token-expires-in]=[value]
[--require-elevated-claim]=[value]
[--server-url]=[value]
[--sms-modica-password]=[value]
[--sms-modica-username]=[value]
[--sms-passwordless-enabled]
[--sms-provider]=[value]
[--sms-twilio-account-sid]=[value]
[--sms-twilio-auth-token]=[value]
[--sms-twilio-messaging-service-id]=[value]
[--smtp-api-header]=[value]
[--smtp-auth-method]=[value]
[--smtp-host]=[value]
[--smtp-password]=[value]
[--smtp-port]=[value]
[--smtp-secure]
[--smtp-sender]=[value]
[--smtp-user]=[value]
[--spotify-client-id]=[value]
[--spotify-client-secret]=[value]
[--spotify-enabled]
[--spotify-scope]=[value]
[--strava-client-id]=[value]
[--strava-client-secret]=[value]
[--strava-enabled]
[--strava-scope]=[value]
[--templates-path]=[value]
[--turnstile-secret]=[value]
[--twitch-client-id]=[value]
[--twitch-client-secret]=[value]
[--twitch-enabled]
[--twitch-scope]=[value]
[--twitter-consumer-key]=[value]
[--twitter-consumer-secret]=[value]
[--twitter-enabled]
[--webauthn-attestation-timeout]=[value]
[--webauthn-enabled]
[--webauthn-rp-id]=[value]
[--webauthn-rp-name]=[value]
[--webauthn-rp-origins]=[value]
[--windowslive-client-id]=[value]
[--windowslive-client-secret]=[value]
[--windowslive-enabled]
[--windowslive-scope]=[value]
[--workos-client-id]=[value]
[--workos-client-secret]=[value]
[--workos-default-connection]=[value]
[--workos-default-domain]=[value]
[--workos-default-organization]=[value]
[--workos-enabled]
```
**Usage**:
```
auth [GLOBAL OPTIONS] [command [COMMAND OPTIONS]] [ARGUMENTS...]
```
# GLOBAL OPTIONS
**--access-tokens-expires-in**="": Access tokens expires in (seconds) (default: 900)
**--allow-redirect-urls**="": Allowed redirect URLs (default: [])
**--allowed-email-domains**="": Comma-separated list of email domains that can register (default: [])
**--allowed-emails**="": Comma-separated list of emails that can register (default: [])
**--allowed-locales**="": Allowed locales (default: [en])
**--api-prefix**="": prefix for all routes
**--apple-audience**="": Apple Audience. Used to verify the audience on JWT tokens provided by Apple. Needed for idtoken validation
**--apple-client-id**="": Apple OAuth client ID
**--apple-enabled**: Enable Apple OAuth provider
**--apple-key-id**="": Apple OAuth key ID
**--apple-private-key**="": Apple OAuth private key
**--apple-scope**="": Apple OAuth scope (default: [name email])
**--apple-team-id**="": Apple OAuth team ID
**--azuread-client-id**="": AzureAD OAuth client ID
**--azuread-client-secret**="": Azuread OAuth client secret
**--azuread-enabled**: Enable Azuread OAuth provider
**--azuread-scope**="": Azuread OAuth scope (default: [email profile openid offline_access])
**--azuread-tenant**="": Azuread Tenant (default: common)
**--bitbucket-client-id**="": Bitbucket OAuth client ID
**--bitbucket-client-secret**="": Bitbucket OAuth client secret
**--bitbucket-enabled**: Enable Bitbucket OAuth provider
**--bitbucket-scope**="": Bitbucket OAuth scope (default: [account])
**--block-email-domains**="": Comma-separated list of email domains that cannot register (default: [])
**--block-emails**="": Comma-separated list of emails that cannot register (default: [])
**--client-url**="": URL of your frontend application. Used to redirect users to the right page once actions based on emails or OAuth succeed
**--conceal-errors**: Conceal sensitive error messages to avoid leaking information about user accounts to attackers
**--custom-claims**="": Custom claims
**--custom-claims-defaults**="": Custom claims defaults
**--debug**: enable debug logging
**--default-allowed-roles**="": Comma-separated list of default allowed user roles (default: [me])
**--default-locale**="": Default locale (default: en)
**--default-role**="": Default user role for registered users (default: user)
**--disable-new-users**: If set, new users will be disabled after finishing registration and won't be able to sign in
**--disable-signup**: If set to true, all signup methods will throw an unauthorized error
**--discord-client-id**="": Discord OAuth client ID
**--discord-client-secret**="": Discord OAuth client secret
**--discord-enabled**: Enable Discord OAuth provider
**--discord-scope**="": Discord OAuth scope (default: [identify email])
**--email-passwordless-enabled**: Enables passwordless authentication by email. SMTP must be configured
**--email-verification-required**: Require email to be verified for email signin
**--enable-anonymous-users**: Enable anonymous users
**--enable-change-env**: Enable change env. Do not do this in production!
**--entraid-client-id**="": EntraID OAuth client ID
**--entraid-client-secret**="": EntraID OAuth client secret
**--entraid-enabled**: Enable EntraID OAuth provider
**--entraid-scope**="": EntraID OAuth scope (default: [email profile openid offline_access])
**--entraid-tenant**="": EntraID Tenant (default: common)
**--facebook-client-id**="": Facebook OAuth client ID
**--facebook-client-secret**="": Facebook OAuth client secret
**--facebook-enabled**: Enable Facebook OAuth provider
**--facebook-scope**="": Facebook OAuth scope (default: [email])
**--github-authorization-url**="": GitHub OAuth authorization URL (default: https://github.com/login/oauth/authorize)
**--github-client-id**="": GitHub OAuth client ID
**--github-client-secret**="": GitHub OAuth client secret
**--github-enabled**: Enable GitHub OAuth provider
**--github-scope**="": GitHub OAuth scope (default: [user:email])
**--github-token-url**="": GitHub OAuth token URL (default: https://github.com/login/oauth/access_token)
**--github-user-profile-url**="": GitHub OAuth user profile URL (default: https://api.github.com/user)
**--gitlab-client-id**="": Gitlab OAuth client ID
**--gitlab-client-secret**="": Gitlab OAuth client secret
**--gitlab-enabled**: Enable Gitlab OAuth provider
**--gitlab-scope**="": Gitlab OAuth scope (default: [read_user])
**--google-audience**="": Google Audience. Used to verify the audience on JWT tokens provided by Google. Needed for idtoken validation
**--google-client-id**="": Google OAuth client ID
**--google-client-secret**="": Google OAuth client secret
**--google-enabled**: Enable Google OAuth provider
**--google-scope**="": Google OAuth scope (default: [openid email profile])
**--graphql-url**="": Hasura GraphQL endpoint. Required for custom claims
**--gravatar-default**="": Gravatar default (default: blank)
**--gravatar-enabled**: Enable gravatar
**--gravatar-rating**="": Gravatar rating (default: g)
**--hasura-admin-secret**="": Hasura admin secret. Required for custom claims
**--hasura-graphql-jwt-secret**="": Key used for generating JWTs. Must be `HMAC-SHA`-based and the same as configured in Hasura. More info: https://hasura.io/docs/latest/graphql/core/auth/authentication/jwt.html#running-with-jwt
**--help, -h**: show help
**--linkedin-client-id**="": LinkedIn OAuth client ID
**--linkedin-client-secret**="": LinkedIn OAuth client secret
**--linkedin-enabled**: Enable LinkedIn OAuth provider
**--linkedin-scope**="": LinkedIn OAuth scope (default: [openid profile email])
**--log-format-text**: format logs in plain text
**--mfa-enabled**: Enable MFA
**--mfa-totp-issuer**="": Issuer for MFA TOTP (default: auth)
**--otp-email-enabled**: Enable OTP via email
**--password-hibp-enabled**: Check user's password against Pwned Passwords https://haveibeenpwned.com/Passwords
**--password-min-length**="": Minimum password length (default: 3)
**--port**="": Port to bind to (default: 4000)
**--postgres**="": PostgreSQL connection URI: https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING (default: postgres://postgres:postgres@localhost:5432/local?sslmode=disable)
**--postgres-migrations**="": PostgreSQL connection URI for running migrations: https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING. Required to inject the `auth` schema into the database. If not specified, the `postgres` connection will be used
**--rate-limit-brute-force-burst**="": Brute force rate limit burst (default: 10)
**--rate-limit-brute-force-interval**="": Brute force rate limit interval (default: 5m0s)
**--rate-limit-email-burst**="": Email rate limit burst (default: 10)
**--rate-limit-email-interval**="": Email rate limit interval (default: 1h0m0s)
**--rate-limit-email-is-global**: Email rate limit is global instead of per user
**--rate-limit-enable**: Enable rate limiting
**--rate-limit-global-burst**="": Global rate limit burst (default: 100)
**--rate-limit-global-interval**="": Global rate limit interval (default: 1m0s)
**--rate-limit-memcache-prefix**="": Prefix for rate limit keys in memcache
**--rate-limit-memcache-server**="": Store sliding window rate limit data in memcache
**--rate-limit-signups-burst**="": Signups rate limit burst (default: 10)
**--rate-limit-signups-interval**="": Signups rate limit interval (default: 5m0s)
**--rate-limit-sms-burst**="": SMS rate limit burst (default: 10)
**--rate-limit-sms-interval**="": SMS rate limit interval (default: 1h0m0s)
**--refresh-token-expires-in**="": Refresh token expires in (seconds) (default: 2592000)
**--require-elevated-claim**="": Require x-hasura-auth-elevated claim to perform certain actions: create PATs, change email and/or password, enable/disable MFA and add security keys. If set to `recommended` the claim check is only performed if the user has a security key attached. If set to `required` the only action that won't require the claim is setting a security key for the first time. (default: disabled)
**--server-url**="": Server URL where the Auth service is running. This value is used as a callback in email templates and for the OAuth authentication process
**--sms-modica-password**="": Modica password for SMS
**--sms-modica-username**="": Modica username for SMS
**--sms-passwordless-enabled**: Enable SMS passwordless authentication
**--sms-provider**="": SMS provider (twilio or modica) (default: twilio)
**--sms-twilio-account-sid**="": Twilio Account SID for SMS
**--sms-twilio-auth-token**="": Twilio Auth Token for SMS
**--sms-twilio-messaging-service-id**="": Twilio Messaging Service ID for SMS
**--smtp-api-header**="": SMTP API Header. Maps to header X-SMTPAPI
**--smtp-auth-method**="": SMTP Authentication method (default: PLAIN)
**--smtp-host**="": SMTP Host. If the host is 'postmark' then the Postmark API will be used; use AUTH_SMTP_PASS as the server token, as other SMTP options are ignored
**--smtp-password**="": SMTP password
**--smtp-port**="": SMTP port (default: 587)
**--smtp-secure**: Connect over TLS. Deprecated: It is recommended to use port 587 with STARTTLS instead of this option.
**--smtp-sender**="": SMTP sender
**--smtp-user**="": SMTP user
**--spotify-client-id**="": Spotify OAuth client ID
**--spotify-client-secret**="": Spotify OAuth client secret
**--spotify-enabled**: Enable Spotify OAuth provider
**--spotify-scope**="": Spotify OAuth scope (default: [user-read-email user-read-private])
**--strava-client-id**="": Strava OAuth client ID
**--strava-client-secret**="": Strava OAuth client secret
**--strava-enabled**: Enable Strava OAuth provider
**--strava-scope**="": Strava OAuth scope (default: [profile:read_all])
**--templates-path**="": Path to the email templates. Defaults to the included ones if the path isn't found (default: /app/email-templates)
**--turnstile-secret**="": Turnstile secret. If set, enables Cloudflare's Turnstile for signup methods. The header `X-Cf-Turnstile-Response` will have to be included in the request for verification
**--twitch-client-id**="": Twitch OAuth client ID
**--twitch-client-secret**="": Twitch OAuth client secret
**--twitch-enabled**: Enable Twitch OAuth provider
**--twitch-scope**="": Twitch OAuth scope (default: [user:read:email])
**--twitter-consumer-key**="": Twitter OAuth consumer key
**--twitter-consumer-secret**="": Twitter OAuth consumer secret
**--twitter-enabled**: Enable Twitter OAuth provider
**--webauthn-attestation-timeout**="": Timeout for the attestation process in milliseconds (default: 60000)
**--webauthn-enabled**: When enabled, passwordless WebAuthn authentication can be done via device-supported strong authenticators like fingerprint, Face ID, etc.
**--webauthn-rp-id**="": Relying party ID. If not set, `AUTH_CLIENT_URL` will be used as a default
**--webauthn-rp-name**="": Relying party name. Friendly name visible to the user indicating who is requesting the authentication. Probably your app's name
**--webauthn-rp-origins**="": Array of URLs where registration is permitted and should have occurred. `AUTH_CLIENT_URL` will be automatically added to the list of origins if it is set (default: [])
**--windowslive-client-id**="": Windowslive OAuth client ID
**--windowslive-client-secret**="": Windows Live OAuth client secret
**--windowslive-enabled**: Enable Windowslive OAuth provider
**--windowslive-scope**="": Windows Live OAuth scope (default: [wl.basic wl.emails])
**--workos-client-id**="": WorkOS OAuth client ID
**--workos-client-secret**="": WorkOS OAuth client secret
**--workos-default-connection**="": WorkOS OAuth default connection
**--workos-default-domain**="": WorkOS OAuth default domain
**--workos-default-organization**="": WorkOS OAuth default organization
**--workos-enabled**: Enable WorkOS OAuth provider
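As an illustrative sketch, a minimal invocation might combine the PostgreSQL, Hasura, and server settings above. The `auth serve` command shown here and the exact flag set a deployment needs are assumptions:

```bash
# Hypothetical minimal invocation; real deployments will need more flags
auth serve \
  --postgres 'postgres://postgres:postgres@localhost:5432/local?sslmode=disable' \
  --graphql-url 'http://localhost:8080/v1/graphql' \
  --hasura-admin-secret "$HASURA_ADMIN_SECRET" \
  --hasura-graphql-jwt-secret "$JWT_SECRET" \
  --server-url 'https://auth.example.com' \
  --port 4000
```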
# COMMANDS
## docs
Generate markdown documentation for the CLI
**--help, -h**: show help
**--output**="": Output file (default: stdout)
### help, h
Shows a list of commands or help for one command
## help, h
Shows a list of commands or help for one command

@@ -1,137 +0,0 @@
# Configuration Guide
## Email configuration
Hasura Auth automatically sends transactional emails to manage the following operations:
- Sign up
- Password reset
- Email change
- Passwordless with emails
### SMTP settings
```bash
AUTH_SMTP_HOST=smtp.example.com
AUTH_SMTP_PORT=1025
AUTH_SMTP_USER=user
AUTH_SMTP_PASS=password
AUTH_SMTP_SENDER=auth@example.com
```
See the [CLI documentation](./cli.md) for all available configuration options including SMTP settings.
### Email templates
You can create your own templates to customize the emails that will be sent to the users. You can have a look at the [official email templates](https://github.com/nhost/nhost/services/auth/tree/main/email-templates) to understand how they are structured.
#### Within Docker
When using Docker, you can mount your own email templates from the local file system. You can have a look at this [docker-compose example](https://github.com/nhost/nhost/services/auth/blob/16df3e84b6c9a4f888b2ff07bd85afc34f8ed051/docker-compose-example.yaml#L41) to see how to set it up.
---
## Redirections
Some authentication operations redirect users to the frontend application:
- After an OAuth provider completes or fails authentication, the user is redirected to the frontend
- Every email sent to the user (passwordless with email, password/email change, password reset) contains a link that redirects the user to the frontend
In order to achieve that, you need to set the `AUTH_CLIENT_URL` environment variable, for instance:
```bash
AUTH_CLIENT_URL=https://my-app.vercel.com
```
---
## Email + password authentication
### Email checks
You can specify a list of allowed emails or domains with `AUTH_ACCESS_CONTROL_ALLOWED_EMAILS` and `AUTH_ACCESS_CONTROL_ALLOWED_EMAIL_DOMAINS`.
As an example, the following environment variables will only allow `@nhost.io`, `@example.com` and `bob@smith.com` to register to the application:
```bash
AUTH_ACCESS_CONTROL_ALLOWED_EMAILS=bob@smith.com
AUTH_ACCESS_CONTROL_ALLOWED_EMAIL_DOMAINS=nhost.io,example.com
```
In the above example, users with the emails `bob@smith.com`, `emma@example.com`, and `john@nhost.io` would be able to register, whereas `mary@firebase.com` wouldn't.
Similarly, it is possible to provide a list of forbidden emails or domains with `AUTH_ACCESS_CONTROL_BLOCKED_EMAILS` and `AUTH_ACCESS_CONTROL_BLOCKED_EMAIL_DOMAINS`.
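For instance, a hypothetical block list (the addresses below are made up) could look like:

```bash
AUTH_ACCESS_CONTROL_BLOCKED_EMAILS=spammer@example.org
AUTH_ACCESS_CONTROL_BLOCKED_EMAIL_DOMAINS=tempmail.example
```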
### Password checks
Hasura Auth does not accept passwords shorter than three characters. This limit can be changed with the `AUTH_PASSWORD_MIN_LENGTH` environment variable.
It is also possible to only allow [passwords that have not been pwned](https://haveibeenpwned.com/) by setting `AUTH_PASSWORD_HIBP_ENABLED` to `true`.
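Combining both checks, a stricter password policy could look like:

```bash
AUTH_PASSWORD_MIN_LENGTH=12
AUTH_PASSWORD_HIBP_ENABLED=true
```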
### Time-based one-time password (TOTP) Multi-Factor authentication
It is possible to add a second step to email and password authentication. In order for users to be able to activate MFA TOTP, `AUTH_MFA_ENABLED` must be set to `true`.
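A minimal sketch enabling TOTP MFA; the `AUTH_MFA_TOTP_ISSUER` variable name is assumed to map to the `--mfa-totp-issuer` CLI flag, and the issuer value is illustrative:

```bash
AUTH_MFA_ENABLED=true
AUTH_MFA_TOTP_ISSUER='My App'
```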
<!-- TODO ## OAuth authentication -->
---
## Passwordless
### Passwordless with emails (magic links)
Hasura Auth supports email [passwordless authentication](https://en.wikipedia.org/wiki/Passwordless_authentication). It requires [SMTP](#email-configuration) to be configured properly.
Set `AUTH_EMAIL_PASSWORDLESS_ENABLED` to `true` to enable passwordless authentication.
### Passwordless with SMS
Hasura Auth supports SMS [passwordless authentication](https://en.wikipedia.org/wiki/Passwordless_authentication). It requires an SMS provider to be configured properly.
Set `AUTH_SMS_PASSWORDLESS_ENABLED` to `true` to enable SMS passwordless authentication.
#### SMS Provider Configuration
Configure the SMS provider using the `AUTH_SMS_PROVIDER` environment variable:
```bash
AUTH_SMS_PROVIDER=twilio # or modica
```
**Twilio Configuration:**
```bash
AUTH_SMS_TWILIO_ACCOUNT_SID=your_account_sid
AUTH_SMS_TWILIO_AUTH_TOKEN=your_auth_token
AUTH_SMS_TWILIO_MESSAGING_SERVICE_ID=your_messaging_service_id
```
**Modica Group Configuration:**
```bash
AUTH_SMS_MODICA_USERNAME=your_username
AUTH_SMS_MODICA_PASSWORD=your_password
```
### FIDO2 Webauthn
Hasura Auth supports [WebAuthn authentication](https://en.wikipedia.org/wiki/WebAuthn). Users can sign up and sign in with strong authenticators such as Face ID, Touch ID, fingerprint, Windows Hello, etc. on supported devices. **Passkeys are supported for cross-device sign in.**
**Each user can sign up only once using WebAuthn. Existing users can add subsequent WebAuthn authenticators (a new device or browser) via `/user/webauthn/add`, which requires a Bearer authentication token.**
Enabling and configuring WebAuthn can be done by setting these environment variables:
```bash
AUTH_SERVER_URL=https://nhost-auth.com
AUTH_WEBAUTHN_ENABLED=true
AUTH_WEBAUTHN_RP_NAME='My App'
AUTH_WEBAUTHN_RP_ORIGINS=https://my-app.vercel.com
```
By default, if `AUTH_CLIENT_URL` is set, it will be whitelisted as an allowed origin for such authentication. Additional URLs can be specified using `AUTH_WEBAUTHN_RP_ORIGINS`.
---
## Gravatar
Hasura Auth stores the avatar URL of users in `auth.users.avatar_url`. By default, it will look for the Gravatar linked to the email, and store it into this field.
It is possible to deactivate the use of Gravatar by setting the `AUTH_GRAVATAR_ENABLED` environment variable to `false`.
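For example, to disable Gravatar entirely, or keep it enabled and tune it (the `AUTH_GRAVATAR_DEFAULT` and `AUTH_GRAVATAR_RATING` names are assumed to map to the `--gravatar-default` and `--gravatar-rating` CLI flags):

```bash
AUTH_GRAVATAR_ENABLED=false
# or keep it enabled but tune it:
# AUTH_GRAVATAR_ENABLED=true
# AUTH_GRAVATAR_DEFAULT=identicon
# AUTH_GRAVATAR_RATING=g
```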

@@ -1,116 +0,0 @@
# Custom Hasura JWT claims
Hasura comes with a [powerful authorisation system](https://hasura.io/docs/latest/graphql/core/auth/authorization/index.html). Hasura Auth is already configured to add `x-hasura-user-id`, `x-hasura-allowed-roles`, and `x-hasura-user-isAnonymous` to the JSON Web Tokens it generates.
In Hasura Auth, it is possible to define custom claims to add to the JWT, so they can be used by Hasura to determine the permissions of the received GraphQL operation.
Each custom claim is defined by a pair of a key and a value:
- The key determines the name of the claim, prefixed by `x-hasura`. For instance, `organisation-id` will become `x-hasura-organisation-id`.
- The value is a representation of the path to look at to determine the value of the claim. For instance `profile.organisation.id` will look for the `user.profile` Hasura relationship, and the `profile.organisation` Hasura relationship. Array values are transformed into Postgres syntax so Hasura can interpret them. See the official Hasura documentation to understand the [session variables format](https://hasura.io/docs/latest/graphql/core/auth/authorization/roles-variables.html#format-of-session-variables).
```bash
AUTH_JWT_CUSTOM_CLAIMS={"organisation-id":"profile.organisation[].id", "project-ids":"profile.contributesTo[].project.id"}
```
This will automatically generate and fetch the following GraphQL query:
```graphql
{
  user(id: "<user-id>") {
    profile {
      organisation {
        id
      }
      contributesTo {
        project {
          id
        }
      }
    }
  }
}
```
Please note that the strings you pass as values in your custom claims will be evaluated starting from the user object itself, hence they need to be a valid path inside it **without** the `user` part. So, for example, if your user object has the following shape:
```js
user: {
  profile: {
    organizations: [
      {
        name: "org1"
      },
      {
        name: "org2"
      }
    ]
  }
}
```
This will not work:
```
// ❌ WRONG, the path `user.profile.organisation[].id` will not work
AUTH_JWT_CUSTOM_CLAIMS={"organisation-id":"user.profile.organisation[].id"}
```
This will work:
```
// ✅ CORRECT, the path `profile.organisation[].id` will work
AUTH_JWT_CUSTOM_CLAIMS={"organisation-id":"profile.organisation[].id"}
```
It will then use the same expressions, e.g. `profile.contributesTo[].project.id`, to evaluate the result with [JSONata](https://jsonata.org/), and possibly transform arrays into Hasura-readable PostgreSQL arrays. Finally, it adds the custom claims to the JWT in the `https://hasura.io/jwt/claims` namespace:
```json
{
  "https://hasura.io/jwt/claims": {
    "x-hasura-organisation-id": "8bdc4f57-7d64-4146-a663-6bcb05ea2ac1",
    "x-hasura-project-ids": "{\"3af1b33f-fd0f-425e-92e2-0db09c8b2e29\",\"979cb94c-d873-4d5b-8ee0-74527428f58f\"}",
    "x-hasura-allowed-roles": ["me", "user"],
    "x-hasura-default-role": "user",
    "x-hasura-user-id": "121bbea4-908e-4540-ac5d-52c7f6f93bec",
    "x-hasura-user-isAnonymous": "false"
  },
  "sub": "f8776768-4bbd-46f8-bae1-3c40da4a89ff",
  "iss": "hasura-auth",
  "iat": 1643040189,
  "exp": 1643041089
}
```
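The PostgreSQL array-literal format seen in `x-hasura-project-ids` above can be sketched in Go. This is an illustrative sketch of the idea, not the service's actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// toPostgresArray renders a slice of strings as a PostgreSQL array literal,
// e.g. {"a","b"}, the format used for array-valued Hasura session variables.
// Illustrative sketch only, not the service's actual implementation.
func toPostgresArray(values []string) string {
	quoted := make([]string, len(values))
	for i, v := range values {
		quoted[i] = `"` + v + `"`
	}
	return "{" + strings.Join(quoted, ",") + "}"
}

func main() {
	ids := []string{
		"3af1b33f-fd0f-425e-92e2-0db09c8b2e29",
		"979cb94c-d873-4d5b-8ee0-74527428f58f",
	}
	fmt.Println(toPostgresArray(ids))
}
```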
## Limitations on JSON columns
Custom claims are currently limited when it comes to JSON columns.
For instance, if you define a claim with the path `user.profile.json_column.my_field`, it will generate the following query under the hood:
```graphql
{
  user(id: "user-uuid") {
    profile {
      json_column {
        my_field
      }
    }
  }
}
```
This is incorrect as Hasura does not support browsing into JSON columns (because they are not typed with a schema). Hasura only expects the following query:
```graphql
{
  user(id: "user-uuid") {
    profile {
      json_column
    }
  }
}
```
Detecting JSON columns would require much more effort, as we would need to build the GraphQL query not only from the JMESPath/JSONata expression, but also from the GraphQL schema.
We have, however, hard-coded a check on the `users.metadata` JSON column, hence a claim using the path `user.metadata.my_field` will work.
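As an illustrative sketch (the claim name `my-field` is made up), such a claim could be configured as:

```bash
AUTH_JWT_CUSTOM_CLAIMS={"my-field":"metadata.my_field"}
```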

@@ -1,34 +0,0 @@
# Extending user schema
Adding columns to the user tables may be tempting. However, all the tables and columns have a specific purpose, and changing the structure of the `auth` schema will very likely break the functioning of Hasura Auth. It is therefore **highly recommended** not to modify the database schema of any tables in the `auth` schema.
Instead, we recommend adding extra user information in the following ways:
- to store information in the `auth.users.metadata` column
- to store information in a separate table located in the `public` PostgreSQL schema, and to point to `auth.users.id` through a foreign key.
## `metadata` user field
The `auth.users.metadata` field is a JSON column that can be used as an option on registration:
```json
{
  "email": "bob@bob.com",
  "password": "12345678",
  "options": {
    "metadata": {
      "first_name": "Bob"
    }
  }
}
```
## Additional user information in the `public` schema
As previously explained, altering the `auth` schema may seriously hamper the functioning of Hasura Auth. The `metadata` field in the `auth.users` table may cover some use cases, but in others we want to keep a certain level of structure in the data.
In that case, it is possible to create a dedicated table in the `public` schema, with a `user_id` foreign key column pointing to the `auth.users.id` column. It is then possible to add a Hasura object relationship that joins the two tables together.
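As a sketch of what that could look like in SQL (the table and column names below are illustrative, not part of Hasura Auth):

```sql
CREATE TABLE public.profiles (
  user_id uuid PRIMARY KEY
    REFERENCES auth.users (id) ON DELETE CASCADE,
  company text,
  job_title text
);
```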
<!-- TODO hooks on the metadata field -->
---

@@ -1,97 +0,0 @@
# Database Schema
Hasura Auth stores all its data in a dedicated `auth` PostgreSQL schema. When Hasura Auth starts, it checks if the `auth` schema exists, then automatically syncs the following tables and their corresponding Hasura metadata:
```mermaid
erDiagram
migrations {
integer id PK
varchar name
varchar hash
timestamp executed_at "CURRENT_TIMESTAMP"
}
users ||--o{ user_roles : roles
user_roles }o--|| roles: role
users }o--|| roles: role
users ||--o{ refresh_tokens: refreshTokens
users ||--o{ user_security_keys: security_key
users ||--o{ user_providers: provider
providers ||--o{ user_providers: user
provider_requests {
uuid id PK "gen_random_uuid()"
text redirect_url
}
refresh_tokens {
uuid refresh_token PK
uuid user_id FK
timestamptz created_at "now()"
timestamptz expires_at
}
providers {
text id PK
}
user_providers {
uuid id PK "gen_random_uuid()"
timestamptz created_at "now()"
timestamptz updated_at "now()"
uuid user_id FK
text access_token
text refresh_token
text provider_id FK
text provider_user_id
}
user_security_keys {
uuid id PK "gen_random_uuid()"
uuid user_id FK
text credential_id
bytea credential_public_key
bigint counter "0"
text transports "''"
text nickname
}
user_roles {
uuid id PK "gen_random_uuid()"
timestamptz created_at "now()"
uuid user_id FK
text role FK
}
users {
uuid id PK "gen_random_uuid()"
timestamptz created_at "now()"
timestamptz updated_at "now()"
timestamptz last_seen "nullable"
boolean disabled "false"
text display_name "''"
text avatar_url "''"
varchar locale
email email "nullable"
text phone_number "nullable"
text password_hash "nullable"
boolean email_verified "false"
boolean phone_number_verified "false"
email new_email "nullable"
text otp_method_last_used "nullable"
text otp_hash "nullable"
timestamptz otp_hash_expires_at "now()"
text default_role FK "user"
boolean is_anonymous "false"
text totp_secret "nullable"
text active_mfa_type "nullable"
text ticket "nullable"
timestamptz ticket_expires_at "now()"
jsonb metadata "nullable"
text webauthn_current_challenge
}
roles {
text roles PK
}
```

@@ -1,6 +0,0 @@
For the sake of readability, some elements are not present in the sequence diagrams:
- Detailed tasks
- Handling of errors
- Payload validation
- The notation of HTTP redirections is simplified: instead of going from the server to the client and then from the client to the redirection target, they are shown as going directly from the server to the redirection target

@@ -57,7 +57,6 @@ func markdownDocs() *cli.Command {
//go:generate oapi-codegen -config go/api/server.cfg.yaml docs/openapi.yaml
//go:generate oapi-codegen -config go/api/types.cfg.yaml docs/openapi.yaml
//go:generate go run main.go docs --output docs/cli.md
func main() {
serveCmd := cmd.CommandServe()
app := &cli.Command{ //nolint:exhaustruct

@@ -0,0 +1,95 @@
# Developer Guide
## Requirements
We use Nix to manage the development environment, the build process, and test runs.
### With Nix (Recommended)
Run `nix develop \#storage` to get a complete development environment.
### Without Nix
Check `project.nix` (`checkDeps`, `buildInputs`, `buildNativeInputs`) for manual dependency installation. Alternatively, you can run `make nixops-container-env` in the root of the repository to enter a Docker container with Nix and all dependencies pre-installed (note that it is a large image).
## Development Workflow
### Running Tests
**With Nix:**
```bash
make dev-env-up
make check
```
**Without Nix:**
```bash
# Start development environment
make dev-env-up
# Lint OpenAPI spec
vacuum lint \
-dqb -n info \
--ruleset vacuum.yaml \
--ignore-file vacuum-ignore.yaml \
controller/openapi.yaml
# Generate code
go generate ./...
# Lint Go code
golangci-lint run ./...
# Run tests
go test -v ./...
```
### Formatting
Format code before committing:
```bash
golines -w --base-formatter=gofumpt .
```
## Building
### Local Build
Build the project (output in `./result`):
```bash
make build
```
### Docker Images
Build and import Docker images with skopeo:
```bash
make build-docker-image # Storage service
```
If you run the command above inside the dockerized nixops-container-env and get an error like:
```
FATA[0000] writing blob: io: read/write on closed pipe
```
then you need to run the following command outside of the container (needs skopeo installed on the host):
```bash
cd cli
make build-docker-image-import-bare
```
## Special Notes
### Image Processing
This service uses **libvips** for image processing, which requires:
- Native dependencies: clang, pkg-config
- System libraries: libjpeg, libpng, libwebp, openjpeg, libheif, pango, etc.
These are automatically configured in the Nix environment. For a manual setup, ensure libvips and its dependencies are properly installed.
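As a rough sketch of a manual setup on a Debian-based system (the package names below are assumptions; check your distribution):

```bash
sudo apt-get install -y libvips-dev clang pkg-config \
  libjpeg-dev libpng-dev libwebp-dev libopenjp2-7-dev libheif-dev libpango1.0-dev
```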
### ClamAV Integration
The storage service integrates with ClamAV for virus scanning. A separate ClamAV Docker image is built and used in development environments.

@@ -73,23 +73,7 @@ sequenceDiagram
This feature can be enabled with the flag `--clamav-server string`, where `string` is the TCP address of the clamd service.
## OpenAPI
## Documentation
The service comes with an [OpenAPI definition](/controller/openapi.yaml) which you can also see [online](https://editor.swagger.io/?url=https://raw.githubusercontent.com/nhost/Storage/main/controller/openapi.yaml).
## Using the service
The easiest way to get started is by using [nhost](https://nhost.io)'s free tier, but if you want to self-host you can easily do it yourself as well.
### Self-hosting the service
Requirements:
1. [hasura](https://hasura.io) running, which in turn needs [postgres or any other supported database](https://hasura.io/docs/latest/graphql/core/databases/index/#supported-databases).
2. An S3-compatible service, for instance [AWS S3](https://aws.amazon.com/s3/), [minio](https://min.io), etc.
A fully working example using docker-compose can be found [here](/build/dev/docker/). Just remember to replace the image `Storage:dev` with a valid [docker image](https://hub.docker.com/r/nhost/storage/tags), for instance, `nhost/storage:0.1.5`.
## Contributing
If you need help or want to contribute it is recommended to read the [contributing](/CONTRIBUTING.md) information first. In addition, if you plan to contribute with code it is also encouraged to read the [development](/DEVELOPMENT.md) guide.
- [Official Documentation](https://docs.nhost.io/products/storage/overview).
- [OpenAPI schema](https://docs.nhost.io/reference/storage/post-files)