
MTP Extension - Flaky Tests #4931

Open

thomhurst opened this issue Feb 6, 2025 · 10 comments
Labels
Area: MTP Extensions · Area: MTP (Belongs to the Microsoft.Testing.Platform core library) · Discussion

Comments

@thomhurst
Contributor

This would need backing storage that can persist between test runs.

This is obviously doable by creating an interface that the user consumes. But the way extensions are set up is that they're auto-registered, aren't they?

Do you have mechanisms for registering extensions that might require input, without users having to write their own entry method?
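A minimal sketch of what such a user-supplied interface might look like (all names here are hypothetical; nothing like this exists in Microsoft.Testing.Platform today):

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical contract the user would implement; not an existing MTP API.
public interface ITestResultStore
{
    // Persist the outcome of every test from the run that just finished.
    Task SaveRunAsync(IReadOnlyList<TestRunRecord> results, CancellationToken token);

    // Fetch up to maxRuns previous outcomes recorded for a given test.
    Task<IReadOnlyList<TestRunRecord>> GetHistoryAsync(string testId, int maxRuns, CancellationToken token);
}

// Minimal record of a single test's outcome in a single run.
public sealed record TestRunRecord(string TestId, string Outcome, DateTimeOffset Timestamp);
```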

@Youssef1313 added the Area: MTP and Area: MTP Extensions labels on Feb 6, 2025
@nohwnd
Member

nohwnd commented Feb 6, 2025

What is the challenge you see here? Reading what you wrote, it sounds to me like an extension that registers an additional parameter and consumes the tests to run via that.

(I don't think extensions can currently control execution on that level, but let's put that aside.)

@thomhurst
Contributor Author

Not quite sure I understand your reply. Your retry extension can tell which tests have failed. If we could attach persistent storage (provided by the user via an interface), we could compare this run to previous runs, and then flag flaky tests via warnings or information logs.

@thomhurst
Contributor Author

When a test run finishes, it stores its passed/failed/skipped results somewhere via that interface. And when run again, it does the same and also compares against the previous X records.
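A sketch of that comparison step, reusing the hypothetical ITestResultStore above: a test that shows mixed pass/fail outcomes across its last N recorded runs would be flagged as flaky.

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

public static class FlakyDetector
{
    // Hypothetical check: mixed outcomes over the look-back window suggest flakiness.
    public static async Task<bool> IsFlakyAsync(
        ITestResultStore store, string testId, int lookBack, CancellationToken token)
    {
        IReadOnlyList<TestRunRecord> history =
            await store.GetHistoryAsync(testId, lookBack, token);

        bool passed = history.Any(r => r.Outcome == "Passed");
        bool failed = history.Any(r => r.Outcome == "Failed");
        return passed && failed; // candidate for a flakiness warning or info log
    }
}
```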

@nohwnd
Member

nohwnd commented Feb 6, 2025

There was some context missing for me :)

I can see this working in multiple ways, like storing the state next to the executable, so the extension can pick it up in subsequent runs.
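A sketch of that store-next-to-the-executable variant (again using the hypothetical TestRunRecord type from above; this is not an MTP API):

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Text.Json;
using System.Threading;
using System.Threading.Tasks;

// History lives in a JSON file next to the executable, so subsequent runs
// on the same machine can pick it up without any external storage.
public sealed class LocalFileResultStore
{
    private static readonly string StorePath =
        Path.Combine(AppContext.BaseDirectory, "test-run-history.json");

    public async Task AppendRunAsync(IReadOnlyList<TestRunRecord> results, CancellationToken token)
    {
        List<TestRunRecord> all = File.Exists(StorePath)
            ? JsonSerializer.Deserialize<List<TestRunRecord>>(
                  await File.ReadAllTextAsync(StorePath, token)) ?? new()
            : new();
        all.AddRange(results);
        await File.WriteAllTextAsync(StorePath, JsonSerializer.Serialize(all), token);
    }
}
```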

@thomhurst
Contributor Author

I think it'd be very useful for a lot of people. There's just the question around auto-registration, since it needs user input (the interface).

@nohwnd
Member

nohwnd commented Feb 6, 2025

Yeah, and that is the part I am failing to understand: how you meant that, and where the challenge is. Do you have a code example of what the problem would be?

@thomhurst
Contributor Author

Well, for instance, you can currently install the TRX extension and invoke it via the CLI. You couldn't do that for a flaky-test extension if you needed to supply it an interface.

@Evangelink
Member

I am also not quite sure I understand your question/intent. Would you like us to provide a service for storing state/info about test results across runs? Or are you asking how you could build your own extension that would save data?

@thomhurst
Contributor Author

Yeah, building an extension that can compare data across runs.

Like @nohwnd said, you could store it next to the executable, but for things like GitHub pipelines you'd need to commit this data after every run, which may not be ideal.

So I thought the user could specify how to store and retrieve the test results via an interface, so they can use any storage mechanism (e.g. Cosmos DB, blob storage).

But since extensions are usually auto-registered and users don't define their own entry points, I don't know how you'd achieve that.

Does that make more sense?
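For illustration, one of those pluggable backends might look like this, assuming the hypothetical ITestResultStore shape from above and the Azure.Storage.Blobs package (the wiring, names, and one-blob-per-run layout are all illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Text.Json;
using System.Threading;
using System.Threading.Tasks;
using Azure.Storage.Blobs;

// Hypothetical blob-storage backend for the run history.
public sealed class BlobResultStore
{
    private readonly BlobContainerClient _container;

    public BlobResultStore(string connectionString, string containerName)
        => _container = new BlobContainerClient(connectionString, containerName);

    // One blob per run, named by timestamp, holding the serialized results.
    public async Task SaveRunAsync(IReadOnlyList<TestRunRecord> results, CancellationToken token)
    {
        string name = $"run-{DateTimeOffset.UtcNow:yyyyMMddHHmmss}.json";
        using var stream = new MemoryStream(JsonSerializer.SerializeToUtf8Bytes(results));
        await _container.GetBlobClient(name).UploadAsync(stream, overwrite: true, cancellationToken: token);
    }
}
```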

@Evangelink
Member

> Does that make more sense?

It does! Thanks

> building an extension that can compare data across runs.

This is one of the suggestions I have made, but so far no priority has been given to it.

> users don't define their own entry points

This is a mistake to me. We had to do that to ease the transition from VSTest, but my ideal design is that the Main is explicit in the user project, as is the case for ASP.NET, console apps... We should stop trying to be too magical and hide things from users; that's the main reason we end up building complex infrastructure for things that should be easy for users.

> I don't know how you'd achieve that

As stated above, I'd go with an explicit Main, but if you want to keep the hidden/generated mechanism, then I'd expose the various storage supports you want to provide, and users would have to define the source and args via the command line or through the JSON config.
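A sketch of that explicit-Main direction. TestApplication.CreateBuilderAsync, BuildAsync, and RunAsync are real Microsoft.Testing.Platform APIs; the commented-out registration call and its types are hypothetical:

```csharp
using Microsoft.Testing.Platform.Builder;

var builder = await TestApplication.CreateBuilderAsync(args);

// Test framework registration, normally emitted by the generated entry point,
// is omitted here. With an explicit Main, handing the platform a user-supplied
// storage implementation becomes ordinary code instead of an auto-registration
// problem, e.g. (hypothetical extension method and type):
// builder.AddFlakyTestDetection(new BlobResultStore(connectionString, "test-history"));

using var app = await builder.BuildAsync();
return await app.RunAsync();
```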
