feat: update the placement v1 with PR and CPRS #484

Open
ryanzhang-oss wants to merge 5 commits into kubefleet-dev:main from ryanzhang-oss:placement-v1

Conversation

@ryanzhang-oss
Member

Description of your changes

update the placement v1 with PR and CPRS

Fixes #

I have:

  • Run make reviewable to ensure this PR is ready for review.

How has this code been tested

Special notes for your reviewer

Signed-off-by: Ryan Zhang <yangzhangrice@hotmail.com>
@codecov

codecov bot commented Mar 5, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.


@britaniar (Member) left a comment:

Update failed tests:

[two screenshots of the failing tests]

-// ClusterResourcePlacementSpec defines the desired state of ClusterResourcePlacement.
-type ClusterResourcePlacementSpec struct {
+// PlacementSpec defines the desired state of ClusterResourcePlacement.
+type PlacementSpec struct {
Collaborator


Do we have a repo-wide naming convention?

Member Author


This is a k8s API, so we follow the k8s naming convention.

Signed-off-by: Ryan Zhang <yangzhangrice@hotmail.com>
Member

@michaelawyu left a comment


LGTM ;) I can submit additional API progression E2E test cases if needed.

weng271190436 previously approved these changes Mar 6, 2026
// The desired state of ClusterResourcePlacement.
// +required
-Spec ClusterResourcePlacementSpec `json:"spec"`
+Spec PlacementSpec `json:"spec"`
Member


I added some CEL to v1beta1 PlacementSpec and Policy

https://github.com/kubefleet-dev/kubefleet/pull/315/changes

Should we include those as well?

Member


We can have yetkin add it in his PR.

@@ -885,15 +935,26 @@ type EnvelopeType string
const (
// ConfigMapEnvelopeType means the envelope object is of type `ConfigMap`.
ConfigMapEnvelopeType EnvelopeType = "ConfigMap"
Member


Non-blocking, but is it time to drop this?

// TO-DO (chenyu1): drop this type after the configMap-based envelopes become obsolete.

@@ -1203,6 +1275,57 @@ const (
// - "Unknown" means we haven't finished the apply yet so that we cannot check the resource availability.
ResourcesAvailableConditionType ResourcePlacementConditionType = "Available"
Member


I think this and the ones above should have been replaced by the ones below.

@ryanzhang-oss
Member Author

Code review

Found 5 issues:

  1. ClusterResourcePlacementStatus type name reused for a new namespaced CRD — previously this was the status struct embedded in ClusterResourcePlacement (now renamed to PlacementStatus). Any external code that imported v1.ClusterResourcePlacementStatus as a status struct now silently refers to the new CRD object instead. The generated deepcopy confirms this: the new type embeds TypeMeta, ObjectMeta, PlacementStatus, and LastUpdatedTime — an entirely different shape.

https://github.com/kubefleet-dev/kubefleet/blob/2e3152b7dc4a085d0b372a1a9d9673868f6b0787/apis/placement/v1/clusterresourceplacement_types.go#L1578-L1598

  2. Contradictory +kubebuilder:validation:Required and json:"...,omitempty" on ClusterResourcePlacementStatus fields — both PlacementStatus (tagged sourceStatus,omitempty) and LastUpdatedTime (tagged lastUpdatedTime,omitempty) are marked Required, but omitempty causes the Go JSON serializer to drop zero-valued fields. The CRD schema will reject objects missing these fields while the serializer happily omits them — a serialization/validation mismatch that will cause runtime failures.

https://github.com/kubefleet-dev/kubefleet/blob/2e3152b7dc4a085d0b372a1a9d9673868f6b0787/apis/placement/v1/clusterresourceplacement_types.go#L1583-L1598

  3. Missing CEL immutability validations present in v1beta1 — v1beta1's ClusterResourcePlacement.Spec has three +kubebuilder:validation:XValidation rules absent from v1: (a) policy cannot be removed once set, (b) statusReportingScope is immutable, (c) placementType is immutable. Without these, users can freely mutate fields intended to be immutable, breaking controller invariants.
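A sketch of what porting those rules might look like; the marker placement and the CEL expressions here are illustrative approximations, and the exact strings should be copied from the v1beta1 source rather than from this sketch:

```go
// Illustrative only: approximations of the first two v1beta1 rules
// described above, attached to the v1 spec.
// +kubebuilder:validation:XValidation:rule="!(has(oldSelf.policy) && !has(self.policy))",message="policy cannot be removed once set"
// +kubebuilder:validation:XValidation:rule="self.statusReportingScope == oldSelf.statusReportingScope",message="statusReportingScope is immutable"
type PlacementSpec struct {
	// placementType immutability would similarly be enforced with an
	// XValidation transition rule (one that references oldSelf).
	// ...existing fields...
}
```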

// +kubebuilder:printcolumn:JSONPath=`.status.conditions[?(@.type=="ResourcePlacementWorkSynchronized")].observedGeneration`,name="Work-Synchronized-Gen",priority=1,type=string
// +kubebuilder:printcolumn:JSONPath=`.status.conditions[?(@.type=="ResourcePlacementAvailable")].status`,name="Available",type=string
// +kubebuilder:printcolumn:JSONPath=`.status.conditions[?(@.type=="ResourcePlacementAvailable")].observedGeneration`,name="Available-Gen",type=string
// +kubebuilder:printcolumn:JSONPath=`.metadata.creationTimestamp`,name="Age",type=date
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// ResourcePlacement is used to select namespace scoped resources, including built-in resources and custom resources,
// and place them onto selected member clusters in a fleet.
// `SchedulingPolicySnapshot` and `ResourceSnapshot` objects are created in the same namespace when there are changes in the
// system to keep the history of the changes affecting a `ResourcePlacement`. We will also create `ResourceBinding` objects in the same namespace.
type ResourcePlacement struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
// The desired state of ResourcePlacement.
// +required
Spec PlacementSpec `json:"spec"`
// The observed status of ResourcePlacement.

  4. Missing CEL rule: exactly one namespace selector required when StatusReportingScope=NamespaceAccessible — v1beta1 enforces this with !(self.statusReportingScope == 'NamespaceAccessible' && size(self.resourceSelectors.filter(x, x.kind == 'Namespace')) != 1). Without it, users can set NamespaceAccessible while targeting zero or multiple namespaces, causing runtime failures when the controller tries to create the ClusterResourcePlacementStatus mirror object.
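The CEL expression itself is the one quoted from v1beta1 above; where exactly the marker sits in the v1 file is an assumption of this sketch:

```go
// Marker placement is assumed; the rule string is quoted from v1beta1.
// +kubebuilder:validation:XValidation:rule="!(self.statusReportingScope == 'NamespaceAccessible' && size(self.resourceSelectors.filter(x, x.kind == 'Namespace')) != 1)",message="exactly one Namespace resource selector is required when statusReportingScope is NamespaceAccessible"
type PlacementSpec struct {
	// ...existing fields...
}
```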


  5. ResourcePlacementList and ClusterResourcePlacementStatusList missing +kubebuilder:object:root=true — ClusterResourcePlacementList in the same file has this marker; both new list types do not. Without it, controller-gen will not register these list types correctly, breaking CRD generation and scheme registration.

// SetPlacementSpec deep-copies the given spec into the placement's spec.
func (m *ResourcePlacement) SetPlacementSpec(spec PlacementSpec) {
spec.DeepCopyInto(&m.Spec)
}
// GetPlacementStatus returns the placement status.
func (m *ResourcePlacement) GetPlacementStatus() *PlacementStatus {
return &m.Status
}
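A sketch of the fix for issue 5, mirroring how ClusterResourcePlacementList is reportedly declared in the same file (the field layout is assumed, following the standard controller-runtime list shape):

```go
// ResourcePlacementList contains a list of ResourcePlacement objects.
// +kubebuilder:object:root=true
type ResourcePlacementList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []ResourcePlacement `json:"items"`
}
```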

- Add CEL immutability validations to CRP Spec (matching v1beta1):
  policy cannot be removed once set, statusReportingScope immutable,
  placementType immutable, and namespace selector requirement when
  StatusReportingScope=NamespaceAccessible
- Fix contradictory +kubebuilder:validation:Required + omitempty on
  ClusterResourcePlacementStatus fields (PlacementStatus, LastUpdatedTime)
- Add missing +kubebuilder:object:root=true to ResourcePlacementList
  and ClusterResourcePlacementStatusList

Signed-off-by: Ryan Zhang <yangzhangrice@hotmail.com>
…esilient

Two CI fixes:

1. Makefile: pin GOIMPORTS_VER from 'latest' to v0.42.0.
   golang.org/x/tools v0.43.0 (latest) now requires Go >= 1.25.0, but
   CI runs Go 1.24.13, causing the unit-and-integration-tests job to
   fail at setup with:
     go: golang.org/x/tools/cmd/goimports@latest: requires go >= 1.25.0

2. pkg/controllers/workgenerator/envelope.go: when multiple work objects
   are found for the same envelope (possible transiently if the same CRP
   name is reused quickly after deletion), self-heal by deleting all but
   the most recently created work and proceeding, instead of returning an
   UnexpectedBehaviorError that causes WorkSynchronized=False on the
   binding. The previous hard-fail path caused e2e test flakiness in the
   'mixed availability statefulset' case where cluster-1's workgenerator
   would encounter leftover work objects from the prior test context.

Signed-off-by: Ryan Zhang <yangzhangrice@hotmail.com>


Development

Successfully merging this pull request may close these issues.

Feature request: provide mechanism for namespace-only users to view status for ClusterResourcePlacement

5 participants