[clangd] Add support for multiple DecisionForest model experiments.
Currently, every incremental change to the model requires checking in a new model upstream, which significantly grows the git repo with each revision. It is also impossible to test a new model against the previous one, since only a single model runs at any point.

One solution is a "staging" decision forest that can be injected into clangd without being pushed upstream, so its performance can be compared against the live model. Once a few enhancements have accumulated in the staging model, it can replace the live model upstream. This reduces upstream churn and allows candidate models to be compared against the current baseline.

This is implemented via a callback in CodeCompleteOptions that is invoked only when the decision forest ranking model is in use, allowing a different completion model to be injected internally.

Differential Revision: https://reviews.llvm.org/D90014