Update the FAQ docs. #1003
Conversation
jspeerless
left a comment
Some minor changes - but otherwise LGTM!
    evaluators = project.predictor_evaluations.default(predictor_id=predictor.uid)
Suggested change:
- Even if the predictor hasn't been registered to platform yet:
+ You can evaluate your predictor even if it hasn't been registered to platform yet:
Working With Evaluators
=======================

You can also run your predictor against a list of specific evaluators:
I think we should just add a note that Evaluators (such as CrossValidationEvaluators) can be configured the same way they always have been. The only difference is that they can now be executed directly against a predictor to get evaluation results, without needing a registered Predictor Evaluation Workflow.
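To make that note concrete, here is a hedged sketch of the pattern being described: the evaluator is configured exactly as before, but executed directly against a predictor rather than through a registered workflow. The classes below are minimal stand-ins for illustration only, not the real citrine SDK objects, and the `trigger` method is a hypothetical name.

```python
# Minimal stand-ins sketching the pattern from this review thread; these are
# NOT the real citrine SDK classes, and `trigger` is a hypothetical method name.

class CrossValidationEvaluator:
    """Stand-in evaluator: configured the same way as it always has been."""
    def __init__(self, name, responses, n_folds=5):
        self.name = name
        self.responses = responses
        self.n_folds = n_folds

class PredictorEvaluationsCollection:
    """Stand-in for project.predictor_evaluations: runs evaluators directly."""
    def trigger(self, predictor_id, evaluators):
        # The real SDK call would start an evaluation on the platform; here we
        # just report which evaluators would run against which predictor.
        return {
            "predictor_id": predictor_id,
            "evaluators": [e.name for e in evaluators],
        }

evaluator = CrossValidationEvaluator(name="cv", responses={"y"}, n_folds=3)
result = PredictorEvaluationsCollection().trigger("predictor-uid", [evaluator])
print(result["evaluators"])  # ['cv']
```

The point of the sketch is the shape of the call: no `PredictorEvaluationWorkflow` object is created or registered, yet the evaluator configuration itself is unchanged.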
Drop the data manager and v3.0 migration docs, as they shouldn't be needed anymore, and add a FAQ to clearly demonstrate the usage of predictor_evaluators.
kroenlein
left a comment
What's written looks good.
The "Example of working with the AI Engine" still references

    workflow = PredictorEvaluationWorkflow(
        name='workflow that evaluates y',
        evaluators=[evaluator]
    )

rather than the new workflow. But maybe that's a different PR.
Good catch! I'll knock that out in a quick follow-up, so James can release earlier if he wishes.
Never mind. Those docs are only published on release, and I've already made those (and many more) docs changes in another PR. So we're all set.
PR Type: Adherence to team decisions