Tests #10
* Support for accessing and mutating the timestamp mode on the LiDAR. The on-LiDAR `timestamp_mode` can now be set in the config file. Additionally, its state is fed back in the `GetMetadata` service. I HIL-tested these changes via visual inspection and everything seems to be working correctly. I see that in #10 there is a plan to add some automated testing. I think this is a *very good* idea. I'll add some comments to that issue shortly. Closes: #34
* Adds TIME_FROM_ROS_RECEPTION as a valid timestamp_mode
* When using TIME_FROM_ROS_RECEPTION, all data processors that process a given LiDAR scan will get the same ROS timestamp so they can be time correlated.
* Made log message less alarming
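For context, a minimal sketch of what setting this in a driver parameter file might look like. Only `timestamp_mode` and the `TIME_FROM_ROS_RECEPTION` value come from the notes above; the node name, IPs, and other keys are illustrative assumptions:

```yaml
# Illustrative parameter file sketch -- node name, IPs, and the other keys
# are assumptions; only timestamp_mode comes from the change notes above.
ouster_driver:
  ros__parameters:
    lidar_ip: "10.5.5.96"
    computer_ip: "10.5.5.1"
    timestamp_mode: "TIME_FROM_ROS_RECEPTION"  # stamp a scan's outputs with its ROS reception time
```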
I think it is time we add tests (beyond style checkers) to this repo. You have a list started above, @SteveMacenski. From my side, I am most interested in HIL testing (so, a sensor would need to be in the loop). If you are OK with it, I will assign myself to this to get a framework in place for doing so.

I'll probably add a top-level CMake flag to enable HIL testing which, by default, will be OFF, so as not to interfere with the current automated tests that run on the ROS infrastructure. However, for developers testing locally with access to hardware, it will increase our level of confidence in the quality of the software overall and protect against regressions as we move forward.

Let me know if you are OK with me assigning myself to get some movement on this issue. Also, do you generally agree with an "enable HIL testing" switch on the CMake build (defaulted to OFF) to enable/disable this functionality?
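A minimal sketch of what such an opt-in switch could look like in the top-level CMakeLists.txt; the flag name, test target, and source path are hypothetical, not decided here:

```cmake
# Hypothetical opt-in flag; OFF by default so hardware-free CI is unaffected.
option(ENABLE_HIL_TESTS "Build hardware-in-the-loop tests (requires a connected LiDAR)" OFF)

if(BUILD_TESTING AND ENABLE_HIL_TESTS)
  find_package(ament_cmake_gtest REQUIRED)
  # Test target and source file are illustrative stand-ins.
  ament_add_gtest(test_hil_metadata test/hil/test_metadata.cpp)
  target_link_libraries(test_hil_metadata ${PROJECT_NAME})
endif()
```

Developers with a sensor attached would then opt in with something like `colcon build --cmake-args -DENABLE_HIL_TESTS=ON`.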
That makes sense, we can do that. I'd be interested in some block diagram of what you were thinking for the HIL testing, so I can get a good picture of what you're planning and give some input to make sure we're really testing and not just sanity checking.

Also, I don't know of any other ROS drivers with HIL testing (or really much testing at all), so it would be a good reference for others to adapt if it works out. I'd also be interested in helping out as I can. I'm overloaded right now with navigation2 / ROS2 / speaking engagements, but I'm actively trying to get my backlog down so I can actually develop again.
We already have this bit in the
Within that block, perhaps after the
I built extensive HIL tests into the IFM 3D camera SDK, and then node-level integration tests in the ROS wrappers (the ROS2 wrapper, for example). My plan would be to take inspiration from my previous work but adapt it to this project. For example, in the IFM SDK we intentionally put all the heavy lifting into a ROS-independent interface, then wrapped it with ROS, as a large part of the user base were not ROS users. So I'd have to adapt a bit here to account for the fact that this is ROS2 top-to-bottom, but that is not a problem.

I would drive most of the tests with

(there is more, but the above should give a flavor)

Then some node-level testing as well using

This was going to be my approach. It worked really well in the past, so I would draw on that experience.
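As a flavor of the shape such a HIL test could take, a minimal GoogleTest sketch; the fixture, the environment variable, and the commented-out client calls are hypothetical stand-ins, not this project's actual interfaces:

```cpp
// Hypothetical HIL test sketch (GoogleTest). Fixture name, environment
// variable, and client calls are illustrative stand-ins.
#include <gtest/gtest.h>

#include <cstdlib>
#include <string>

class SensorFixture : public ::testing::Test
{
protected:
  void SetUp() override
  {
    // Locate the device under test; skip cleanly when no hardware is attached.
    const char * host = std::getenv("HIL_SENSOR_HOST");
    if (host == nullptr) {
      GTEST_SKIP() << "HIL_SENSOR_HOST not set; skipping hardware test";
    }
    sensor_host_ = host;
  }

  std::string sensor_host_;
};

// One narrowly scoped assertion per test, named for exactly what it checks.
TEST_F(SensorFixture, MetadataReportsConfiguredTimestampMode)
{
  // A real test would query the sensor and compare against the configured
  // mode, e.g.:
  //   auto metadata = client::get_metadata(sensor_host_);
  //   EXPECT_EQ(metadata.timestamp_mode, "TIME_FROM_PTP_1588");
  SUCCEED();  // placeholder: this sketch shows structure, not real assertions
}
```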
Makes sense, that's what I was suggesting as well. ROS isn't really bottom to top; it's really only in the topmost layer, and I was pretty careful about that. The ROS messages are a little lower, but that's not dependent on ROS itself. At that level it's just a struct; it wasn't worth the copy to abstract it that low for structs this large.
If those things were half the price... they'd be really useful 😄 Another case of "I want to like this, but you make it hard", like the Jetson boards.

I think just some short design doc on the testing, or a diagram, would be helpful to understand what's actually being tested (just data coming through, that this data is "correct" somehow, that the data is "typical" in terms of noise / quality, etc.).
It would be a separate topic to discuss, but how would you feel about modularization of the components themselves? Right now everything is built into the one shared object file
It is really good hardware -- industrial grade, not consumer grade. Albeit pmd offers some very cheap consumer-grade devices with the same imager embedded, but the software interface would be different (Royale, and royale-ros). I think some future ifm products built on the same tech (both the imager and open-source SDK) should be released soon. But I am no longer involved directly, so I cannot speak to it with any authority. If you are interested, I could get you in touch with the right people, though.
My expectation is that we will have an ever-growing list of tests that will be run. I want to put the struts in place that make doing that frictionless. Each test should test one very specific thing and be named accordingly. A test failure will make it obvious which assertion did not hold.

I am not against documentation, but I think, particularly for tests, the docs will get out-of-date with the code rather quickly and will not reflect the exhaustive nature of the tests. Additionally, I do not have any intention of creating every test up front. I think what is most important is a friction-free way of writing tests; then, over time, we will incrementally add more and more tests to the project to ensure the best reliability that we can. Perhaps the tests themselves should just be documented with Doxygen (or the Python equivalent) and we can auto-generate docs which will then, by design, remain in sync with the code?
If you wanted to move into multiple libraries, we can do that. I do wonder about the practical value (will anyone actually use the lower libraries on their own in a new application?). I'm not sure I'm super interested in IFM until the price point hits where the market is for most robotics companies (the $200-300 range: the realsense lines, astra, Mynt, Meere, etc).

OK, but just something short about the design intent, even if the details expand over time. Broad strokes and motive / scope.
This would likely result in new ROS2 packages as part of this project. I can see a lot of practical value. For example, with a package like
That's such a mess though. This isn't that complex of a package. I would agree that the cmdline tools could be a separate package if we had some.
I think it may seem that way now, but if (when) this package (and Ouster) scales, it will not "seem silly" at all. I would like to see this repo become the SDK for using Ouster LiDARs in autonomous systems. If we don't take the right steps now, someone else will, or the software ecosystem around Ouster LiDARs will splinter, and that will in fact be a "mess" for everyone. Here is another motivating example, from building the

I think engineering in the architecture for scale now would be a very wise step, assuming you also want this repo to be the SDK for using Ouster LiDARs in autonomous systems.

We are now pretty far off the topic of "Testing". I was afraid that would happen.
The scope of this driver is ROS2, though. I'm not sure it would be appropriate to try to put so much into one project to be the general SDK for all uses of Ouster -- that's what ouster_example and Ouster-provided software is for. If you mean that ROS2 will be the de facto way to use Ouster because of its pervasiveness, then yes, I get you on that. I would argue, though, that the vast majority of users will not be modifying any code; they're going to apt-get install it and move on as long as it works. I can't think of a time I've ever read the sensor drivers for most of the sensors I use in ROS (except realsense, because it's written like total garbage).

The splintering argument seems a little too slippery-slope. I'm not sure there's going to be some cult-following community around a lidar manufacturer such that there are community forums, tee shirts, and OusterCon. I don't think there's any precedent for something like that.
Our experience so far has been to download the code and discover significant end-to-end latency (~1.5 seconds in
OK. This is not what I am saying. I think you know that. I'm merely suggesting we modularize (over time) to separate concerns -- to promote loose coupling and high cohesion among components. This will help to generalize the software for the diversity of applications it will be used in.

One of the core tenets of ROS2 is that it is production-ready for commercial uses. We are commercial users. We are communicating our needs and are willing to put in the work to make it happen. We also want to align with you (the original/primary author) to ensure that we get buy-in. We are in active development with an OEM partner (and several others in the pipe) that will be deploying in the second half of this year. The fleet sizes will be in the hundreds of vehicles. We will need centralized management tools.

The example I used was not contrived. It was based on real-world experience, and I see the pattern repeating again in our current work. I'm trying to get ahead of it while we can. If we can't scale this project to meet the customer's projected needs, they will likely pay us to build custom tooling, closed source, and they will own it. I'd rather create the tooling in the open and contribute it back to the community, for all the reasons that I believe open source is the right way to build infrastructure.
I don't disagree. The original work on
I think it is important. Let's work on it together. Sorry to "catch you by surprise" -- not my intention. Really, this issue is about HIL testing. We need to get that in place more urgently than the modularization, but the modularization is important.
I'll work with you on this; I'm just trying to get my head aligned correctly with what we need to get done and figure out the best way to handle it. You clearly had a different vision for this than I did. So your vision is a mono-repo containing the ROS2 drivers, CLI utilities, and other tools that may be required for working with the sensors. As such, it would likely make sense to rename this repository to better align with that scope. I'd like to know where you think the edges of this project should be, so we have clear boundaries for what belongs and what doesn't. I'm aggressive about rejecting feature creep, so I want to know what new scope I should be thinking in. I'm a believer that people willing to put in the work ultimately should drive development direction, but that doesn't mean I shouldn't ask questions along the way. Maybe we should move this over to an email chain and get back to testing here.

Back to testing. I think where we left things is: some short design doc or list of the things you are interested in having tests cover, and to what fidelity (e.g., I want to test that the sensor produces 1024x10, but does that just mean data is flowing, or data quality as well, or stress testing it, or checking for latency, or checking that you can switch between operation modes, etc.). I want to get an understanding of scope and depth here as well, so we can align against it. You mentioned not having a strict set of requirements right now; that's fine, we can work in themes. I generally work in themes anyhow.
That should cover the ROS interfaces, the abstract interfaces, the process loop, and lifecycle. A sketch of what one such node-level check might look like follows.
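As a flavor of the node-level piece, a minimal "data is flowing" check (GoogleTest + rclcpp); the topic name `points`, the node name, and the five-second budget are assumptions, not this driver's actual interface:

```cpp
// Hypothetical node-level check: assert that point cloud data actually
// flows. Topic name, node name, and timeout are assumptions.
#include <gtest/gtest.h>

#include <chrono>
#include <memory>

#include <rclcpp/rclcpp.hpp>
#include <sensor_msgs/msg/point_cloud2.hpp>

TEST(NodeLevel, PointCloudIsPublished)
{
  rclcpp::init(0, nullptr);
  auto node = std::make_shared<rclcpp::Node>("hil_listener");

  bool received = false;
  // Keep the subscription handle alive for the duration of the test.
  auto sub = node->create_subscription<sensor_msgs::msg::PointCloud2>(
    "points", rclcpp::SensorDataQoS(),
    [&received](sensor_msgs::msg::PointCloud2::SharedPtr) {received = true;});

  // Spin for up to five seconds waiting for a single message.
  const auto deadline =
    std::chrono::steady_clock::now() + std::chrono::seconds(5);
  while (!received && std::chrono::steady_clock::now() < deadline) {
    rclcpp::spin_some(node);
  }

  EXPECT_TRUE(received) << "no PointCloud2 arrived within 5 seconds";
  rclcpp::shutdown();
}
```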