2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -6,7 +6,7 @@ to make the process as smooth as possible.
Before submitting a pull request, please review this document, which outlines what
conventions to follow when submitting changes. If you have any questions not covered
in this document, please reach out to us in the [ParadeDB Community Slack](https://join.slack.com/t/paradedbcommunity/shared_invite/zt-32abtyjg4-yoYoi~RPh9MSW8tDbl0BQw)
or via [email](support@paradedb.com).
or via [email](mailto:support@paradedb.com).

## Development Workflow

10 changes: 0 additions & 10 deletions README.md
@@ -36,16 +36,6 @@ The chart is also available on [Artifact Hub](https://artifacthub.io/packages/he

## Usage

### ParadeDB Bring-Your-Own-Cloud (BYOC)

The most reliable way to run ParadeDB in production is with ParadeDB BYOC, an end-to-end managed solution that runs in the customer’s cloud account. It deploys on managed Kubernetes services and uses the ParadeDB Helm Chart.

ParadeDB BYOC includes built-in integration with managed PostgreSQL services, such as AWS RDS and GCP CloudSQL, via logical replication. It also provides monitoring, logging and alerting through Prometheus and Grafana. The ParadeDB team manages the underlying infrastructure and lifecycle of the cluster.

You can read more about the optimal architecture for running ParadeDB in production [here](https://docs.paradedb.com/deploy/overview) and you can contact sales [here](mailto:sales@paradedb.com).

### Self-Hosted

First, install [Helm](https://helm.sh/docs/intro/install/). The following steps assume you have a Kubernetes cluster running v1.29+. If you are testing locally, we recommend using [Minikube](https://minikube.sigs.k8s.io/docs/start/).

#### Monitoring
10 changes: 0 additions & 10 deletions charts/paradedb/README.md
@@ -10,16 +10,6 @@ The chart is also available on [Artifact Hub](https://artifacthub.io/packages/he

## Usage

### ParadeDB Bring-Your-Own-Cloud (BYOC)

The most reliable way to run ParadeDB in production is with ParadeDB BYOC, an end-to-end managed solution that runs in the customer’s cloud account. It deploys on managed Kubernetes services and uses the ParadeDB Helm Chart.

ParadeDB BYOC includes built-in integration with managed PostgreSQL services, such as AWS RDS and GCP CloudSQL, via logical replication. It also provides monitoring, logging and alerting through Prometheus and Grafana. The ParadeDB team manages the underlying infrastructure and lifecycle of the cluster.

You can read more about the optimal architecture for running ParadeDB in production [here](https://docs.paradedb.com/deploy/overview) and you can contact sales [here](mailto:sales@paradedb.com).

### Self-Hosted

First, install [Helm](https://helm.sh/docs/intro/install/). The following steps assume you have a Kubernetes cluster running v1.29+. If you are testing locally, we recommend using [Minikube](https://minikube.sigs.k8s.io/docs/start/).

#### Monitoring
16 changes: 3 additions & 13 deletions charts/paradedb/README.md.gotmpl
@@ -10,16 +10,6 @@ The chart is also available on [Artifact Hub](https://artifacthub.io/packages/he

## Usage

### ParadeDB Bring-Your-Own-Cloud (BYOC)

The most reliable way to run ParadeDB in production is with ParadeDB BYOC, an end-to-end managed solution that runs in the customer’s cloud account. It deploys on managed Kubernetes services and uses the ParadeDB Helm Chart.

ParadeDB BYOC includes built-in integration with managed PostgreSQL services, such as AWS RDS and GCP CloudSQL, via logical replication. It also provides monitoring, logging and alerting through Prometheus and Grafana. The ParadeDB team manages the underlying infrastructure and lifecycle of the cluster.

You can read more about the optimal architecture for running ParadeDB in production [here](https://docs.paradedb.com/deploy/overview) and you can contact sales [here](mailto:sales@paradedb.com).

### Self-Hosted

First, install [Helm](https://helm.sh/docs/intro/install/). The following steps assume you have a Kubernetes cluster running v1.29+. If you are testing locally, we recommend using [Minikube](https://minikube.sigs.k8s.io/docs/start/).

#### Monitoring
@@ -151,9 +141,9 @@ backups:
backupOwnerReference: self
```

Each backup adapter takes its own set of parameters, listed in the [Configuration options](#Configuration-options) section
below. Refer to the table for the full list of parameters and place the configuration under the appropriate key: `backup.s3`,
`backup.azure`, or `backup.google`.
Each backup adapter takes its own set of parameters, listed in the [Configuration options](#Configuration-options) section.
Refer to the table for the full list of parameters and place the configuration under the appropriate key: `backups.s3`,
`backups.azure`, or `backups.google`.
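As an illustration, an S3 configuration might look like the following sketch. The bucket name and credential placeholders are hypothetical, and the exact key names should be verified against the configuration table:

```yaml
backups:
  enabled: true
  provider: s3
  s3:
    region: "us-east-1"
    bucket: "my-backup-bucket"        # hypothetical bucket name
    path: "/paradedb-backups"
    accessKey: "<ACCESS_KEY_ID>"      # placeholder credential
    secretKey: "<SECRET_ACCESS_KEY>"  # placeholder credential
  backupOwnerReference: self
```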

## Recovery

6 changes: 3 additions & 3 deletions charts/paradedb/docs/getting-started.md
@@ -66,9 +66,9 @@ Additionally you can specify the following parameters:
backupOwnerReference: self
```

Each backup adapter takes its own set of parameters, listed in the [Configuration options](../README.md#Configuration-options) section
below. Refer to the table for the full list of parameters and place the configuration under the appropriate key: `backup.s3`,
`backup.azure`, or `backup.google`.
Each backup adapter takes its own set of parameters, listed in the [Configuration options](../README.md#Configuration-options)
section. Refer to the table for the full list of parameters and place the configuration under the appropriate key: `backups.s3`,
`backups.azure`, or `backups.google`.

### Cluster configuration

5 changes: 1 addition & 4 deletions charts/paradedb/docs/runbooks/CNPGClusterHACritical.md
@@ -6,7 +6,7 @@ The `CNPGClusterHACritical` alert is triggered when the CloudNativePG cluster ha

This alert may occur during a regular failover or a planned automated version upgrade on two-instance clusters, as there is a brief period when only the primary remains active while a failover completes.

On single-instance clusters this alert will remain active at all times. If running with a single instance is intentional, consider silencing the alert.
On single-instance clusters, this alert will remain active at all times. If running with a single instance is intentional, consider silencing the alert.

## Impact

Expand Down Expand Up @@ -62,9 +62,6 @@ First, consult the [CloudNativePG Failure Modes](https://cloudnative-pg.io/docum

### Insufficient Storage

> [!NOTE]
> If using the ParadeDB BYOC module, refer to `docs/handbook/NotEnoughDiskSpace.md` included with the Terraform module.

If the above diagnosis commands indicate that an instance’s storage or WAL disk is full, increase the cluster storage size. Refer to the CloudNativePG documentation for more information on how to [Resize the CloudNativePG Cluster Storage](https://cloudnative-pg.io/documentation/current/troubleshooting/#storage-is-full).
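If the storage class supports online volume expansion, increasing the size in the Helm values and upgrading the release is typically sufficient. A minimal sketch, assuming the chart's `cluster.storage`/`cluster.walStorage` layout:

```yaml
cluster:
  storage:
    size: 512Gi   # increase from the previous value, e.g. 256Gi
  walStorage:
    size: 64Gi    # only if a separate WAL volume is configured
```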

### Unknown
3 changes: 0 additions & 3 deletions charts/paradedb/docs/runbooks/CNPGClusterHAWarning.md
@@ -62,9 +62,6 @@ First, consult the [CloudNativePG Failure Modes](https://cloudnative-pg.io/docum

### Insufficient Storage

> [!NOTE]
> If you are using ParadeDB BYOC, refer to `docs/handbook/NotEnoughDiskSpace.md` included with the Terraform module.

If the above diagnosis commands indicate that an instance’s storage or WAL disk is full, increase the cluster storage size. Refer to the CloudNativePG documentation for more information on how to [Resize the CloudNativePG Cluster Storage](https://cloudnative-pg.io/documentation/current/troubleshooting/#storage-is-full).

### Unknown
@@ -23,9 +23,8 @@ kubectl get cluster paradedb -o 'jsonpath={"Current Primary: "}{.status.currentP
> [!IMPORTANT]
> Changing the `max_connections` parameter requires a restart of the CloudNativePG cluster instances. This will cause a restart of a standby instance and a switchover of the primary instance, causing a brief service disruption.

- Increase the maximum number of connections by setting the max_connections PostgreSQL parameter:
- Increase the maximum number of connections by setting the `max_connections` PostgreSQL parameter:
- Helm: `cluster.postgresql.parameters.max_connections`
- ParadeDB BYOC Terraform: `paradedb.postgresql.parameters.max_connections`

- Use connection pooling by enabling PgBouncer to reduce the number of connections to the database. PgBouncer itself requires connections, so temporarily increase `max_connections` while enabling it to avoid service disruption.
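As a sketch, the corresponding Helm values might look like the following. The `pooler.enabled` key is an assumption about the chart's PgBouncer toggle, so confirm it against the chart's values reference:

```yaml
cluster:
  postgresql:
    parameters:
      max_connections: "200"  # raise from the default (typically 100)
pooler:
  enabled: true               # optional: PgBouncer connection pooling
```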

@@ -23,9 +23,8 @@ kubectl get cluster paradedb -o 'jsonpath={"Current Primary: "}{.status.currentP
> [!IMPORTANT]
> Changing the `max_connections` parameter requires a restart of the CloudNativePG cluster instances. This will cause a restart of a standby instance and a switchover of the primary instance, causing a brief service disruption.

- Increase the maximum number of connections by setting the max_connections PostgreSQL parameter:
- Increase the maximum number of connections by setting the `max_connections` PostgreSQL parameter:
- Helm: `cluster.postgresql.parameters.max_connections`
- ParadeDB BYOC Terraform: `paradedb.postgresql.parameters.max_connections`

- Use connection pooling by enabling PgBouncer to reduce the number of connections to the database. PgBouncer itself requires connections, so temporarily increase `max_connections` while enabling it to avoid service disruption.

@@ -34,7 +34,7 @@ Inspect the disk IO statistics using the [CloudNativePG Grafana Dashboard](https

Inspect the `Stat Activity` section of the [CloudNativePG Grafana Dashboard](https://grafana.com/grafana/dashboards/20417-cloudnativepg/).

- Suboptimal PostgreSQL configuration, e.g. too `few max_wal_senders`. Set this to at least the number of cluster instances (default 10 is usually sufficient).
- Suboptimal PostgreSQL configuration, e.g. too few `max_wal_senders`. Set this to at least the number of cluster instances (default 10 is usually sufficient).

Inspect the `PostgreSQL Parameters` section of the [CloudNativePG Grafana Dashboard](https://grafana.com/grafana/dashboards/20417-cloudnativepg/).

@@ -20,9 +20,6 @@ Check disk usage metrics in the [CloudNativePG Grafana Dashboard](https://grafan

## Mitigation

> [!NOTE]
> If using the ParadeDB BYOC Terraform module, refer to the `docs/handbook/NotEnoughDiskSpace.md` handbook for instructions on increasing disk space. This requires a switchover of the ParadeDB primary, causing a brief service disruption.

If the WAL (Write-Ahead Logging) volume is filling and you have continuous archiving enabled, verify that WAL archiving is functioning correctly. A buildup of WAL files in `pg_wal` indicates an issue. Monitor the `cnpg_collector_pg_wal_archive_status` metric and ensure the number of `ready` files is not steadily increasing.

For more details, see the [CloudNativePG documentation on resizing storage](https://cloudnative-pg.io/documentation/current/troubleshooting/#storage-is-full).
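As a sketch, an ad-hoc PromQL query for the archiving backlog might look like this; the `value="ready"` label is an assumption based on the CNPG collector's conventions, so check it against your metric labels:

```promql
# WAL files awaiting archiving; a steadily increasing value signals a problem
max by (pod) (cnpg_collector_pg_wal_archive_status{value="ready"})
```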
@@ -2,7 +2,7 @@

## Description

The `CNPGClusterLowDiskSpaceWarning` alert is triggered when disk usage on any CloudNativePG cluster volume exceeds 70%. It may occur on the following volumes:
The `CNPGClusterLowDiskSpaceWarning` alert is triggered when disk usage on any CloudNativePG cluster volume exceeds 80%. It may occur on the following volumes:

- The PVC hosting `PGDATA` (`storage` section)
- The PVC hosting WAL files (`walStorage` section)
@@ -20,9 +20,6 @@ Check disk usage metrics in the [CloudNativePG Grafana Dashboard](https://grafan

## Mitigation

> [!NOTE]
> If using the ParadeDB BYOC Terraform module, refer to the `docs/handbook/NotEnoughDiskSpace.md` handbook for instructions on increasing disk space. This requires a switchover of the ParadeDB primary, causing a brief service disruption.

If the WAL (Write-Ahead Logging) volume is filling and you have continuous archiving enabled, verify that WAL archiving is functioning correctly. A buildup of WAL files in `pg_wal` indicates an issue. Monitor the `cnpg_collector_pg_wal_archive_status` metric and ensure the number of `ready` files is not steadily increasing.

For more details, see the [CloudNativePG documentation on resizing storage](https://cloudnative-pg.io/documentation/current/troubleshooting/#storage-is-full).
2 changes: 1 addition & 1 deletion charts/paradedb/examples/custom-queries.yaml
@@ -15,7 +15,7 @@ cluster:
metrics:
- datname:
usage: "LABEL"
description: "Name of the database database"
description: "Name of the database"
- ratio:
usage: GAUGE
description: "Cache hit ratio"
@@ -86,7 +86,7 @@ spec:
primary: [status, currentPrimary]

- name: "primary_failing_since_time"
help: "The timestamp when the primary was detected to be unhealthy This field is reported when .spec.failoverDelay is populated or during online upgrades"
help: "The timestamp when the primary was detected to be unhealthy. This field is reported when .spec.failoverDelay is populated or during online upgrades"
each:
type: Gauge
gauge:
@@ -7,8 +7,8 @@ annotations:
ParadeDB CloudNativePG Cluster "{{ .namespace }}/{{ .cluster }}" is running extremely low on disk space. Check attached PVCs! Current disk space usage is {{ .value }}% of the total capacity.
runbook_url: https://github.com/paradedb/charts/blob/main/charts/paradedb/docs/runbooks/{{ $alert }}.md
expr: |
max(max by(persistentvolumeclaim) (1 - kubelet_volume_stats_available_bytes{namespace="{{ .namespace }}", persistentvolumeclaim=~"{{ .podSelector }}"} / kubelet_volume_stats_capacity_bytes{namespace="{{ .namespace }}", persistentvolumeclaim=~"{{ .podSelector }}"})) > 0.9 OR
max(max by(persistentvolumeclaim) (1 - kubelet_volume_stats_available_bytes{namespace="{{ .namespace }}", persistentvolumeclaim=~"{{ .podSelector }}-wal"} / kubelet_volume_stats_capacity_bytes{namespace="{{ .namespace }}", persistentvolumeclaim=~"{{ .podSelector }}-wal"})) > 0.9 OR
max(max by(persistentvolumeclaim) (1 - kubelet_volume_stats_available_bytes{namespace="{{ .namespace }}", persistentvolumeclaim=~"{{ .podSelector }}"} / kubelet_volume_stats_capacity_bytes{namespace="{{ .namespace }}", persistentvolumeclaim=~"{{ .podSelector }}"})) * 100 > 90 OR
max(max by(persistentvolumeclaim) (1 - kubelet_volume_stats_available_bytes{namespace="{{ .namespace }}", persistentvolumeclaim=~"{{ .podSelector }}-wal"} / kubelet_volume_stats_capacity_bytes{namespace="{{ .namespace }}", persistentvolumeclaim=~"{{ .podSelector }}-wal"})) * 100 > 90 OR
max(sum by (namespace,persistentvolumeclaim) (kubelet_volume_stats_used_bytes{namespace="{{ .namespace }}", persistentvolumeclaim=~"{{ .podSelector }}-tbs.*"})
/
sum by (namespace,persistentvolumeclaim) (kubelet_volume_stats_capacity_bytes{namespace="{{ .namespace }}", persistentvolumeclaim=~"{{ .podSelector }}-tbs.*"})
@@ -7,15 +7,15 @@ annotations:
ParadeDB CloudNativePG Cluster "{{ .namespace }}/{{ .cluster }}" is running low on disk space. Check attached PVCs. Current disk space usage is {{ .value }}% of the total capacity.
runbook_url: https://github.com/paradedb/charts/blob/main/charts/paradedb/docs/runbooks/{{ $alert }}.md
expr: |
max(max by(persistentvolumeclaim) (1 - kubelet_volume_stats_available_bytes{namespace="{{ .namespace }}", persistentvolumeclaim=~"{{ .podSelector }}"} / kubelet_volume_stats_capacity_bytes{namespace="{{ .namespace }}", persistentvolumeclaim=~"{{ .podSelector }}"})) > 0.7 OR
max(max by(persistentvolumeclaim) (1 - kubelet_volume_stats_available_bytes{namespace="{{ .namespace }}", persistentvolumeclaim=~"{{ .podSelector }}-wal"} / kubelet_volume_stats_capacity_bytes{namespace="{{ .namespace }}", persistentvolumeclaim=~"{{ .podSelector }}-wal"})) > 0.7 OR
max(max by(persistentvolumeclaim) (1 - kubelet_volume_stats_available_bytes{namespace="{{ .namespace }}", persistentvolumeclaim=~"{{ .podSelector }}"} / kubelet_volume_stats_capacity_bytes{namespace="{{ .namespace }}", persistentvolumeclaim=~"{{ .podSelector }}"})) * 100 > 80 OR
max(max by(persistentvolumeclaim) (1 - kubelet_volume_stats_available_bytes{namespace="{{ .namespace }}", persistentvolumeclaim=~"{{ .podSelector }}-wal"} / kubelet_volume_stats_capacity_bytes{namespace="{{ .namespace }}", persistentvolumeclaim=~"{{ .podSelector }}-wal"})) * 100 > 80 OR
max(sum by (namespace,persistentvolumeclaim) (kubelet_volume_stats_used_bytes{namespace="{{ .namespace }}", persistentvolumeclaim=~"{{ .podSelector }}-tbs.*"})
/
sum by (namespace,persistentvolumeclaim) (kubelet_volume_stats_capacity_bytes{namespace="{{ .namespace }}", persistentvolumeclaim=~"{{ .podSelector }}-tbs.*"})
*
on(namespace, persistentvolumeclaim) group_left(volume)
kube_pod_spec_volumes_persistentvolumeclaims_info{pod=~"{{ .podSelector }}"}
) * 100 > 70
) * 100 > 80
for: 5m
labels:
severity: warning
1 change: 0 additions & 1 deletion charts/paradedb/templates/ca-bundle.yaml
@@ -6,5 +6,4 @@ metadata:
namespace: {{ include "cluster.namespace" . }}
data:
{{ .Values.backups.endpointCA.key | default "ca-bundle.crt" | quote }}: {{ .Values.backups.endpointCA.value }}

{{- end }}
2 changes: 1 addition & 1 deletion charts/paradedb/templates/cluster.yaml
@@ -89,7 +89,7 @@ spec:
{{- with .Values.cluster.postgresql.ldap }}
ldap:
{{- toYaml . | nindent 6 }}
{{- end}}
{{- end }}
{{- with .Values.cluster.postgresql.synchronous }}
synchronous:
{{- toYaml . | nindent 6 }}
@@ -9,7 +9,7 @@ spec:
apply: 1s
assert: 10s
cleanup: 1m
exec: 3m
exec: 4m
steps:
- name: Install a cluster with a console enabled
try:
@@ -80,7 +80,7 @@ spec:
- assert:
file: ./05-paradedb_extension_check-assert.yaml

- name: cleanup
- name: Cleanup
try:
- script:
content: |