-
I've added a masking image to motion detection, with 'automask' (sets the mask from the next N motion-sense runs). Enable/set the mask with WCsetMotiondetect8 0-nnnnn.
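For example, a hedged sketch from Berry - the command name is taken from the line above, and the exact meaning of the parameter is an assumption:

```berry
# Hedged sketch: enable automask, building the motion mask from the next 5
# motion-sense runs. Command name from the post above; parameter meaning assumed.
tasmota.cmd("WCsetMotiondetect8 5")
```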
-
Brief comparison of the two drivers:

tasmota32-webcam 13.1.0 - current release
Params used: AIThinker style, wcresolution 5, wcclock 20
Motion only accessible by using scripts (special build).

tasmota32-webcam PR
PR binary: tasmota32-webcam-PR.zip
Multiple web sessions available (with 2 running ~20 fps).
Additional features (some new, some based on existing script-related code).
Advanced additional features - things that the new driver enables people to do: timelapse with multiple possibilities.
-
It would be useful to understand all the existing use cases for the ESP32-cam - that could inform the focus for any documentation changes, example Berry snippets, etc.
-
Hi All,
I've spent the last few weeks working on the webcam driver.
It now has all the features I could think of easily - the ones that I thought could be used from Berry scripting.
It's very stable on my (notoriously unreliable) cam.
It's here:
https://github.com/btsimonh/Tasmota/blob/webcam2023/tasmota/tasmota_xdrv_driver/xdrv_99_esp32_webcamberry.ino
But I don't want to PR until we've got some feedback, and maybe additional ideas taken into account.
VERY brief overview:
It runs all frame grabbing in a thread, so no more possible conflicts, and also a bit more performance.
I've added TAS commands for some of the functions that were previously only available to scripts (not to Berry...).
I've added some image conversion as TAS commands.
It SHOULD still work with (non-berry) scripting (apart from maybe the USE_WEBCAM #define).
Example: you can get actual pixels in Berry by running (a Berry sketch of the full sequence follows below):
WcGetFrame1 (reads the next frame to buffer)
WcConvertFrame1 5 0 (decode the JPEG buffer to RGB888 at scale 1:1)
Wcgetpicstore1 (returns the actual buffer address, length, width, height, format).
(you can download the raw pixels at this point from http://wc.jpeg?p=1)
You could then draw on the image! ...and then
WcConvertFrame1 4 0 (encode to JPEG)
access the resulting jpeg at http://wc.jpeg?p=1
Or... you could use the image in Tensorflow....
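A minimal Berry sketch of that sequence - the command names and parameters are exactly as listed above; the device URL is a placeholder:

```berry
# Drive the grab / decode / re-encode cycle via the TAS commands described above.
def grab_and_reencode()
  tasmota.cmd("WcGetFrame1")           # read the next frame into buffer 1
  tasmota.cmd("WcConvertFrame1 5 0")   # decode the JPEG buffer to RGB888 at scale 1:1
  var info = tasmota.cmd("Wcgetpicstore1")  # buffer address, length, width, height, format
  print(info)
  # ...draw on the raw pixels here (or feed them to TensorFlow)...
  tasmota.cmd("WcConvertFrame1 4 0")   # re-encode the buffer to JPEG
  # the result is now served at http://<device-ip>/wc.jpeg?p=1
end

grab_and_reencode()
```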
It also implements three forms of motion detection:
1/ JPEG size change. No performance impact (i.e. it will run at the cam framerate).
2/ accumulated pixel difference (as per the original motion detection, previously only accessible through scripts).
3/ pixel difference threshold -> count of pixels over threshold.
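To illustrate form 3, here is a minimal Berry sketch of the idea only - the driver does this in C on its monochrome buffers; the two bytes() inputs here are assumptions for illustration:

```berry
# Count how many pixels changed by more than `threshold` between two equal-length
# monochrome buffers (illustration of the count-over-threshold idea, not driver code).
def count_changed(prev, cur, threshold)
  var count = 0
  var i = 0
  while i < cur.size()
    var d = cur[i] > prev[i] ? cur[i] - prev[i] : prev[i] - cur[i]
    if d > threshold count += 1 end
    i += 1
  end
  return count
end
```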
The motion detection is somewhat faster (300ms vs 1.3s for a large frame) due to a modified JPEG decode that decodes directly to monochrome.
You can access the motion buffers (re-encoded to jpeg) via the browser, and enable a diff buffer.
You can access the motion pixels with wcGetMotionPixels (gives addr and len - for berry?)
It informs Berry of motion (webcam.motion is called if present).
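A hedged sketch of hooking that callback from Berry - the post only says "webcam.motion is called if present", so the global name, the lookup mechanism and the (empty) argument list are assumptions:

```berry
# Assumed shape of the Berry-side motion hook: a global `webcam` object with a
# `motion` member that the driver calls when motion is detected.
class Webcam
  def motion()
    print("motion detected")
    tasmota.cmd("WcGetFrame1")   # e.g. grab a frame to inspect or publish
  end
end

webcam = Webcam()   # created as a global so the driver can find webcam.motion
```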
If it gets an error at start, the error is displayed in the main menu.
You can turn the menu video off and on (streaming takes some bandwidth... if you don't want it, you don't need it).
I still get some 0x105 at soft restart occasionally - the cross to bear from having an AIThinker cam module. But it's rare.
I did try to get raw pixels from the camera (more efficient if that's all you want), but failed.
So, what would you do with such features in a Berry script? What's missing?
Any testing, ideas, thoughts welcome.
br,
Simon