That appears to be reasonable performance overall, but those metrics are only directional, and different deployment environments (as you are seeing) may in practice see better or worse performance than those metrics imply. Adding more training data (in particular, negative data) might help, but there is no guarantee. This notebook describes the training process in more detail, and by adjusting various parameters and adding more data, performance can likely be improved. This does require some knowledge of machine learning, however, and unfortunately there is no simple process that can be followed. One option that may help with false activations in particular is a custom verifier model. I would start there and see if that improves performance substantially.
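For reference, here is a rough sketch of what training and using a custom verifier model looks like. The file names are placeholders, and the import path and argument names are based on my reading of the openWakeWord custom verifier docs and may differ between versions, so check docs/custom_verifier_models.md for the current API:

```python
# Rough sketch: training and using a custom verifier model in openWakeWord.
# All file names are placeholders; the import path and argument names are
# assumptions, so verify them against docs/custom_verifier_models.md.
from openwakeword.train import train_custom_verifier
from openwakeword.model import Model

# Train a small verifier on your own recordings: a handful of clips of you
# saying the wake word (positive) and clips of audio that caused false
# activations, e.g. the TV or normal conversation (negative).
train_custom_verifier(
    positive_reference_clips=["me_wakeword_01.wav", "me_wakeword_02.wav"],
    negative_reference_clips=["tv_noise_01.wav", "conversation_01.wav"],
    output_path="my_wakeword_verifier.pkl",
    model_name="my_wakeword.onnx",  # the custom wake word model to verify
)

# At inference time the verifier acts as a second check after the wake word
# model fires, which specifically targets false activations.
oww = Model(
    wakeword_models=["my_wakeword.onnx"],
    custom_verifier_models={"my_wakeword": "my_wakeword_verifier.pkl"},
    custom_verifier_threshold=0.3,  # raise this to suppress more false triggers
)
```

Because the verifier is trained on your own voice and your own environment's noise, it can cut false activations without forcing the base model's penalty so high that you have to shout to trigger it.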
-
I'm trying to train a wake word. I can get it trained fairly well, but I still get some false activations. If I go any higher on the false_activation_penalty, I have to speak really loudly to get it to activate.
I see this statement: "but if you are interested in adding even more, feel free to extend this notebook to download the full datasets". Would that help? How do I accomplish this?
My best effort so far is setting the first two sliders to 50,000 and the false_activation_penalty to 1850.
The resulting info:
Final Model Accuracy: 0.8066999912261963
Final Model Recall: 0.6137999892234802
Final Model False Positives per Hour: 0.6194690465927124
I don't know if this is good, average, or bad.
The false activations annoy the wife, so I'm trying to get this resolved.