Handler ErrorTracker.Integrations.Oban has failed and has been detached #63
According to the stack trace it looks like a problem with certain exceptions that do not report the file of your app in which they happened. We will take a look and push a fix for it. Will keep you posted. Thanks for the early testing and reporting the issue :)
I'm not able to reproduce this. I tried with several exceptions, including some that are especially difficult (`1/0` or calling a nonexistent function), without being able to reproduce it. According to the stacktrace you are getting from telemetry, it looks like the exception has no stacktrace lines, which is weird. I can prepare a fix without reproducing it, but it would be best to be able to see why that happens. Do you know which kind of exception happened in the first place? 👀 @jaimeiniesta
Thanks for taking a look! I'm not entirely sure, but I suspect it's an Oban worker that saves screenshots to disk and then uploads them to S3. I'll try to find it.
Hey, I've been able to reproduce this. This ScreenshotWorker, in case of timeouts, will end up returning
If you launch the worker, this is what happens:
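For readers trying to reproduce this, below is a minimal sketch of a worker along the lines described above (module and function names are hypothetical, not the actual ScreenshotWorker). It fails by returning an `{:error, reason}` tuple instead of raising, which is the case that ends up with no stacktrace in the telemetry event.

```elixir
# Hypothetical reproduction sketch: a worker that fails without raising.
defmodule MyApp.TimeoutWorker do
  use Oban.Worker, queue: :screenshots, max_attempts: 3

  @impl Oban.Worker
  def perform(%Oban.Job{args: %{"url" => url}}) do
    # Simulate the external screenshot call timing out. No exception is
    # raised; the worker just returns an error tuple, so Oban reports the
    # failure without a stacktrace.
    case take_screenshot(url) do
      {:ok, path} -> {:ok, path}
      {:error, :timeout} -> {:error, :screenshot_timeout}
    end
  end

  # Placeholder for the real screenshot call; it always times out here so
  # that enqueuing the job exercises the failing path.
  defp take_screenshot(_url), do: {:error, :timeout}
end
```

Enqueuing it with `%{"url" => "https://example.com"} |> MyApp.TimeoutWorker.new() |> Oban.insert()` should exercise the failing path described above.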
Thanks for the detailed steps to reproduce it, @jaimeiniesta! The error makes sense: since the worker's code is not raising an exception, no stack trace is returned from Oban. I have prepared a fix in #70 which I hope will be released soon 🚂 Feel free to test it if you want, or even use that branch temporarily if this is breaking your envs. And thanks again for such detailed issues. We are still polishing some parts, and having reports from real users is really helpful 🤗
There was an error when an Oban worker returned an `{:error, reason}` tuple, because Oban does not return a stacktrace in that case (which makes sense, as it is not an exception by itself). Our code was not prepared for that use case. I modified it to handle it and circumvent the need for source file and source function information. After this change, empty stack traces and source information are not shown on the dashboard, to avoid confusion. As an extra feature I have also added reporting of the `state` of the job (which will mostly be `failure` for us). Closes #63
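To make the failure mode concrete, here is a sketch (not the actual ErrorTracker code) of an `[:oban, :job, :exception]` telemetry handler that tolerates a missing stacktrace. The metadata keys (`:reason`, `:stacktrace`, `:state`, `:job`) follow Oban's exception event, but the module, handler id and logging are illustrative assumptions.

```elixir
# Illustrative handler for Oban's exception telemetry event, written to
# survive jobs that fail with an {:error, reason} return instead of a raise.
defmodule MyApp.ObanErrorHandler do
  require Logger

  def attach do
    :telemetry.attach(
      "my-app-oban-errors",
      [:oban, :job, :exception],
      &__MODULE__.handle_event/4,
      nil
    )
  end

  def handle_event([:oban, :job, :exception], _measurements, metadata, _config) do
    # When the worker returned {:error, reason} nothing was raised, so the
    # stacktrace can be an empty list. Guarding here keeps the handler from
    # crashing (telemetry detaches handlers that raise).
    stacktrace = Map.get(metadata, :stacktrace, [])

    Logger.error(
      "Oban job failed: " <>
        inspect(
          reason: metadata.reason,
          state: Map.get(metadata, :state),
          worker: metadata.job.worker,
          stacktrace_entries: length(stacktrace)
        )
    )
  end
end
```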
The fix was released in v0.2.4 a few seconds ago 🚀
Hey, I'm trying the latest version, 0.2.4, and it works great, thanks! I'm still thinking about the best strategy to deal with Oban errors, as I rely a lot on Oban's retry mechanism. For example, my worker handles errors like this:

```elixir
{:error, :rate_limited} ->
  {:snooze, 1}

{:error, :screenshot_timeout} ->
  if corrected_attempt(job) >= @max_attempts do
    {:cancel, :timeout}
  else
    {:error, :timeout}
  end
```

That is, for a rate limit we use Oban's snooze to automatically retry in 1 second. For a network timeout, we may cancel the job after the max attempts have been reached, or return with `{:error, :timeout}`. In this last case we don't really care about tracking the exception, because Oban is going to retry. We're only interested in tracking the error in the case of returning `{:cancel, :timeout}`.

We have many other Oban workers that will often fail with network errors, timeouts, etc., as our app is a web scraper. What would be the best approach?
Hey @jaimeiniesta. I am currently working on a feature that will allow users to ignore errors that they don't want tracked. I think this would solve your issue: you could just ignore those errors, and they wouldn't be tracked or shown in the dashboard either. Update: the new feature is ready to review in #79
@crbelaus thanks, I've tried that branch and it works great. It's easy to define an ignorer that matches on fields from the error or context; I would use that. 👍
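For anyone following along, here is a rough sketch of what such an ignorer could look like. The `ErrorTracker.Ignorer` behaviour name, the `ignore?/2` callback and the fields matched on are assumptions based on this thread, so check the merged PR and the documentation for the exact API.

```elixir
# Hypothetical ignorer sketch; verify the behaviour name, callback and
# configuration key against the released documentation before using it.
defmodule MyApp.ErrorIgnorer do
  @behaviour ErrorTracker.Ignorer

  @impl true
  def ignore?(error, _context) do
    # Skip transient scraping failures that Oban will retry anyway. The
    # shape of `error` (a string `reason` field) is an assumption here.
    is_binary(error.reason) and String.contains?(error.reason, "timeout")
  end
end
```

With something like `config :error_tracker, ignorer: MyApp.ErrorIgnorer` in the application config (again, an assumption about the configuration key), those errors would never reach the dashboard.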
Hi! I'm seeing this in the logs and I'm not sure why that's happening.
I'm using these versions:
Development logs
Staging server logs