Conversation
This was done to give the API handlers a more readable flow. This is in preparation for adding more API handlers, specifically for enhanced camera log viewing.
This was done so that the test wouldn't panic when it fails. It's still unclear why it failed; it is now passing.
The intention here is to improve the handling of events that represent erroneous conditions and make error information clearer and more available. We're also trying to use more consistent patterns around error conditions, and to do more central notification on these error events. Events representing erroneous conditions are now errors themselves, and they hold errors instead of basic strings. This means we can chain error events/errors and maintain a history of the errors that led us to where we are; accordingly, we have unwrapping functionality to find the initial error event type. We've also added a Kind() method to the interface that gives us the corresponding notify.Kind, so that we can perform a notification of the correct kind. Finally, we've fixed some fundamental issues around storing events for processing: we weren't properly storing the error events' internal error information, and would therefore lose it on the next SM tick.
This was done because the errors in question mean the test cannot meaningfully continue. This prevents a panic.
bench: fix TestGetScalars panic
This was done to make them more reliable and not depend on other tests. This means the get and fetch tests will take longer, but to fix that we could do a multi put.
Also small changes to comments for PR
ao-david
left a comment
This needs a rebase before merging, as I think that commits from a different change have made their way in. But otherwise LGTM
cmd/decode/*
cmd/cloudblue/*
cmd/oceancast/*
cmd/roadmap/*
do we not want to commit this?
No harm in adding them to the ignore file, I think. I know they're experimental projects that aren't on main, but my thinking was: why not add them? It helps me deploy more cleanly, because I don't want to get rid of some of those project folders.
// Print progress every 10%.
if i%(minutesInDay/10) == 0 {
	t.Logf("put %d/%d scalars", i, minutesInDay)
Do we really need this log? There is already a lot of logging in the test suite which can make it hard to find what failed
Because this is a long-running test, I think it's useful. It only results in 10 logs, since it logs at 10% increments, so that's fine IMO.
bench: split huge API handler into smaller handlers
This was done to make them more reliable and not depend on others tests. This means the get and fetch tests will take longer, but to fix that we could do a multi put.
Closes #542