MTP Extension - Flaky Tests #4931
What is the challenge you see here? Reading what you wrote, it sounds to me like an extension that registers an additional parameter and consumes the tests to run via it. (I don't think extensions can currently control execution at that level, but let's put that aside.) |
Not quite sure I understand your reply. Your retry extension can tell which tests have failed. If we could attach a persistent store (provided by the user via an interface), we could compare this run to previous runs and then flag flaky tests via warnings or informational logs |
When a test run finishes, it stores its passed/failed/skipped results somewhere via that interface. When run again, it does the same and also compares against the previous X records |
There was some context missing for me :) I can see this working in multiple ways, like storing the state next to the executable, so the extension can pick it up in subsequent runs. |
I think it'd be very useful for a lot of people. There's just the question around auto-registration, since it needs user input (the interface). |
Yeah, and that is the part I am failing to understand: how you meant that, and where the challenge is. Do you have a code example of what the problem would be? |
Well, for instance, you can currently install the TRX extension and invoke it via the CLI. You couldn't do that for a flaky-test extension if you needed to supply it an interface. |
I am also not quite sure I understand your question/intent. Would you like us to provide a service for storing state/info about test results across runs? Or are you asking how you could build your own extension that would save data? |
Yeah, building an extension that can compare data across runs. Like @nohwnd said, you could store the data next to the executable, but for things like GitHub pipelines you'd need to commit this data after every run, which may not be ideal. So I thought the user could specify how to store and retrieve the test results via an interface, so they can use any storage mechanism (e.g. Cosmos DB, blob storage). But since extensions are usually auto-registered and users don't define their own entry points, I don't know how you'd achieve that. Does that make more sense? |
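The storage-interface idea described above could be sketched roughly like this. A minimal sketch only: all names here (`IFlakyTestStore`, `TestRunRecord`, `FlakyDetector`) are hypothetical and not part of Microsoft.Testing.Platform; the point is the shape of the contract a user would implement for their chosen backend.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical record of one test's outcome in one run.
public sealed record TestRunRecord(string TestId, string Outcome, DateTimeOffset RunTime);

// Hypothetical user-supplied storage contract; an implementation could
// target a local file, blob storage, Cosmos DB, etc.
public interface IFlakyTestStore
{
    void Save(IReadOnlyList<TestRunRecord> results);
    IReadOnlyList<TestRunRecord> LoadLast(int runCount);
}

// Sketch of the comparison the extension would do: a test counts as
// "flaky" if it has both passed and failed within the recorded window.
public static class FlakyDetector
{
    public static IEnumerable<string> FindFlaky(IEnumerable<TestRunRecord> history)
    {
        var outcomes = new Dictionary<string, HashSet<string>>();
        foreach (var r in history)
        {
            if (!outcomes.TryGetValue(r.TestId, out var set))
                outcomes[r.TestId] = set = new HashSet<string>();
            set.Add(r.Outcome);
        }
        foreach (var kv in outcomes)
            if (kv.Value.Contains("passed") && kv.Value.Contains("failed"))
                yield return kv.Key;
    }
}
```

With a contract like this, the extension itself stays storage-agnostic: it only calls `Save` after each run and `LoadLast` before comparing, which is exactly why the registration question below matters.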
It does! Thanks
This is part of the suggestions I have made, but so far no priority has been given to it.
This is a mistake to me. We had to do that to ease the transition from VSTest, but my ideal design is that Main is explicit in the user project, as it is for ASP.NET, console apps... We should stop trying to be too magic and hiding things from users; that's the main reason why we end up building complex infrastructure for things that should be easy for users.
As stated above, I'd go with an explicit Main, but if you want to keep the hidden/generated mechanism, then I'd expose the various supports you want to provide, and users would have to define the source and args via the command line or through the json. |
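The "explicit Main" being argued for above could look roughly like this. A sketch only, assuming the Microsoft.Testing.Platform builder API; `AddFlakyDetection` and `BlobStorageFlakyTestStore` are hypothetical names standing in for the extension discussed in this thread, and the exact builder method names may differ across platform versions.

```csharp
using System.Threading.Tasks;
using Microsoft.Testing.Platform.Builder;

public static class Program
{
    public static async Task<int> Main(string[] args)
    {
        var builder = await TestApplication.CreateBuilderAsync(args);

        // With an explicit entry point, the user can pass their own storage
        // implementation at registration time, which auto-registration
        // (where the platform generates Main) has no way to accept.
        // builder.AddFlakyDetection(new BlobStorageFlakyTestStore(/* connection */)); // hypothetical

        using var app = await builder.BuildAsync();
        return await app.RunAsync();
    }
}
```

This is a wiring fragment rather than a runnable sample (it requires the Microsoft.Testing.Platform package and a registered test framework), but it shows why an explicit Main sidesteps the "extension needs user input" problem raised below.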
This would need a backing storage that can persist in between test runs.
This is obviously doable by creating an interface that the user implements. But the way extensions are set up, they're auto-registered, aren't they?
Have you got mechanisms for registering extensions that might require input, without rewriting your own entry method?