
Helm chart: persistentVolume breaks scaling more than one replica and RollingUpdate #33

@KrlLevchenko

Description

Problem with persistentVolume and multiple replicas in trickster Helm chart
When using the trickster Helm chart with persistent storage enabled, it is not possible to safely run more than one replica or to complete a RollingUpdate rollout.

I am inheriting from this chart and using the following values:

```yaml
trickster:
  replicaCount: 2

  persistentVolume:
    enabled: true
    size: 90Gi
    storageClass: yc-network-ssd
```

With this configuration, the chart cannot work correctly with more than one replica.


Root cause

In charts/trickster/templates/deployment.yaml, persistent storage is implemented using a single PersistentVolumeClaim with a fixed name (derived from trickster.fullname) and mounted by all pods of the Deployment.

This means that all replicas share the same PVC.

In most cloud environments, persistent volumes are ReadWriteOnce: the volume can be attached to only one node at a time, so replicas scheduled on different nodes cannot mount the same PVC.
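For reference, the relevant part of the template looks roughly like this (a simplified sketch; the helper names and mount path here are assumptions, not a verbatim copy of the chart):

```yaml
# charts/trickster/templates/deployment.yaml (simplified sketch)
apiVersion: apps/v1
kind: Deployment
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: trickster
          volumeMounts:
            - name: cache-volume
              mountPath: /tmp/trickster
      volumes:
        - name: cache-volume
          persistentVolumeClaim:
            # Single fixed claim name shared by every replica
            claimName: {{ template "trickster.fullname" . }}
```

Because the `claimName` is a fixed template expression, every pod the Deployment creates binds to the same claim.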


Problems caused by this design

1. Scaling replicas > 1 does not work

When replicaCount > 1, additional pods cannot start whenever they land on a different node, because the ReadWriteOnce volume is already attached to the node running the first pod.

2. RollingUpdate strategy is broken

The Deployment uses the default RollingUpdate strategy.

During a rollout or a restart, Kubernetes attempts to:

  1. Create a new pod
  2. Attach the existing PVC
  3. Terminate the old pod

This fails because the PVC is already in use by the running pod.
As a result, rollouts and restarts can get stuck indefinitely.
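In practice, the new pod tends to sit in ContainerCreating with a FailedAttachVolume event along these lines (exact wording varies by Kubernetes version and storage driver; names below are placeholders):

```
Warning  FailedAttachVolume  attachdetach-controller
Multi-Attach error for volume "pvc-..."
Volume is already used by pod(s) trickster-<old-pod-name>
```

The rollout never makes progress because the old pod is not terminated until the new one becomes ready, and the new one cannot become ready without the volume.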


Why this is a chart design issue

Using a Deployment with a single shared PVC implicitly assumes ReadWriteMany storage, which is not available in most cloud providers.

Because:

  • The PVC name is fixed
  • All replicas reference the same volume

it is not possible to correctly run Trickster with persistent storage and multiple replicas using this chart.


Suggested improvements

One of the following approaches would make the chart safer and more predictable:

  • Use a StatefulSet with volumeClaimTemplates when persistentVolume.enabled=true
  • Enforce or clearly document replicaCount: 1 when persistence is enabled
  • Allow switching the update strategy to Recreate or otherwise explicitly handle PVC limitations
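As a rough illustration of the first option, the template could emit a StatefulSet with per-replica claims when persistence is enabled (a sketch only; helper names, labels, and the mount path are assumptions for illustration):

```yaml
# Sketch of a StatefulSet-based alternative, not the chart's actual template
apiVersion: apps/v1
kind: StatefulSet
spec:
  serviceName: {{ template "trickster.fullname" . }}
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: trickster
          volumeMounts:
            - name: cache-volume
              mountPath: /tmp/trickster
  # Each replica gets its own PVC, so ReadWriteOnce is no longer a problem
  volumeClaimTemplates:
    - metadata:
        name: cache-volume
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: {{ .Values.persistentVolume.storageClass }}
        resources:
          requests:
            storage: {{ .Values.persistentVolume.size }}
```

With volumeClaimTemplates, each replica binds its own claim (cache-volume-&lt;name&gt;-0, -1, …), so scaling and rolling updates no longer contend for a single volume.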
