Revert "[BUG]: increase max payload size of log service (Go) (#4534)" #4540
Conversation
This reverts commit 3028a39.
This PR reverts a previous change that increased the max payload size of the log service (Go). The reversion is necessary because the original change led to lost metrics for the log-service. The original fix will be re-implemented differently after observability is restored. This summary was automatically generated by @propel-code-bot
Reviewer Checklist

Please leverage this checklist to ensure your code review is thorough before approving:

- Testing, Bugs, Errors, Logs, Documentation
- System Compatibility
- Quality
```diff
 	var listener net.Listener
 	listener, err = net.Listen("tcp", ":"+config.PORT)
 	if err != nil {
-		log.Fatal("failed to create grpc server", zap.Error(err))
+		log.Fatal("failed to listen", zap.Error(err))
 	}
 	s := grpc.NewServer(grpc.UnaryInterceptor(sharedOtel.ServerGrpcInterceptor))
 	healthcheck := health.NewServer()
 	healthgrpc.RegisterHealthServer(s, healthcheck)

-	log.Info("log service started")
 	logservicepb.RegisterLogServiceServer(s, server)
+	log.Info("log service started", zap.String("address", listener.Addr().String()))
 	go leader.AcquireLeaderLock(ctx, func(ctx context.Context) {
 		go purging.PerformPurgingLoop(ctx, lr)
 		go metrics.PerformMetricsLoop(ctx, lr)
 	})
 	if err := s.Serve(listener); err != nil {
 		log.Fatal("failed to serve", zap.Error(err))
 	}
 }
```
[BestPractice]
When switching to direct gRPC server setup in main.go, we should make sure to properly clean up resources upon termination. Currently, there's no mechanism to handle graceful shutdown of the server. Consider adding signal handling to properly shut down the gRPC server when the service is being terminated.
```diff
 	if OPTL_TRACING_ENDPOINT != "" {
 		otel.InitTracing(context.Background(), &otel.TracingConfig{
-			Service: name,
+			Service: "sysdb-service",
```
[CriticalError]
In the grpcutils package, the tracing service name is now hardcoded to "sysdb-service" regardless of what service is actually using it. This will cause tracing to be incorrectly labeled for any service that isn't actually the sysdb-service. The name parameter should be used instead, as it was in the previous version.
| "github.com/chroma-core/chroma/go/pkg/utils" | ||
| libs "github.com/chroma-core/chroma/go/shared/libs" | ||
| "github.com/chroma-core/chroma/go/shared/otel" | ||
| sharedOtel "github.com/chroma-core/chroma/go/shared/otel" |
[BestPractice]
There's a double import of the otel package in main.go - one with the default name and one with a custom name, which creates unnecessary confusion. You should use a single consistent import.
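One way to consolidate, assuming all call sites are updated to use the single alias (a fragment for illustration, not the full import block of `main.go`):

```go
import (
	"github.com/chroma-core/chroma/go/pkg/utils"
	libs "github.com/chroma-core/chroma/go/shared/libs"
	// Single aliased import; every call site references sharedOtel.
	sharedOtel "github.com/chroma-core/chroma/go/shared/otel"
)
```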
This reverts commit 3028a39.
After we deployed this commit, we lost metrics for log-service, so we suspect it has something to do with it. I will re-enable the payload size fix one way or the other, but want to get back to working observability for the time being.