Noisy tests (incl. backtraces) make assessing them hard #159

Open
chartgerink opened this issue Jul 25, 2024 · 0 comments

While getting a baseline for developing cleanepi (installing all dependencies, running the checks, and testing the library), I noticed that the tests output a lot of information. As a result, I did not even realize the tests were passing.

For example, passing tests result in output such as:
Warning (test-standardize_subject_ids.R:219:3): check_subject_ids fails as expected
Using an external vector in selections was deprecated in tidyselect 1.1.0.
i Please use `all_of()` or `any_of()` instead.
  # Was:
  data %>% select(target_columns)

  # Now:
  data %>% select(all_of(target_columns))

See <https://tidyselect.r-lib.org/reference/faq-external-vector.html>.
Backtrace:
     ▆
  1. ├─testthat::expect_warning(...) at test-standardize_subject_ids.R:219:3
  2. │ └─testthat:::expect_condition_matching(...)
  3. │   └─testthat:::quasi_capture(...)
  4. │     ├─testthat (local) .capture(...)
  5. │     │ └─base::withCallingHandlers(...)
  6. │     └─rlang::eval_bare(quo_get_expr(.quo), quo_get_env(.quo))
  7. ├─cleanepi::check_subject_ids(...)
  8. │ └─cleanepi:::check_subject_ids_oness(data, target_columns) at cleanepi/R/standardize_subject_ids.R:54:3
  9. │   └─cleanepi::find_duplicates(data, id_col_name) at cleanepi/R/standardize_subject_ids.R:183:3
 10. │     └─... %>% ... at cleanepi/R/find_and_remove_duplicates.R:101:3
 11. ├─dplyr::select(., "row_id", "group_id", dplyr::everything())
 12. ├─dplyr::mutate(., group_id = dplyr::cur_group_id())
 13. ├─dplyr::group_by_at(., dplyr::vars(target_columns))
 14. │ └─dplyr:::manip_at(...)
 15. │   └─dplyr:::tbl_at_syms(...)
 16. │     └─dplyr:::tbl_at_vars(...)
 17. │       └─dplyr::tbl_vars(tbl)
 18. │         ├─dplyr:::new_sel_vars(tbl_vars_dispatch(x), group_vars(x))
 19. │         │ └─base::structure(...)
 20. │         └─dplyr:::tbl_vars_dispatch(x)
 21. ├─dplyr::select(., -"num_dups")
 22. ├─dplyr::filter(., .data$num_dups > 1L)
 23. ├─dplyr::arrange(., dplyr::pick(target_columns))
 24. ├─dplyr::mutate(., row_id = seq_len(nrow(data)))
 25. ├─dplyr::ungroup(.)
 26. ├─dplyr::mutate(., num_dups = dplyr::n())
 27. └─dplyr::group_by_at(., dplyr::vars(target_columns))
 28.   └─dplyr:::tbl_at_syms(.tbl, .vars, .include_group_vars = TRUE)
 29.     └─dplyr:::tbl_at_vars(...)
 30.       └─tidyselect::vars_select(tibble_vars, !!!vars)
 31.         └─tidyselect:::eval_select_impl(...)
 32.           ├─tidyselect:::with_subscript_errors(...)
 33.           │ └─base::withCallingHandlers(...)
 34.           └─tidyselect:::vars_select_eval(...)
 35.             └─tidyselect:::walk_data_tree(expr, data_mask, context_mask)
 36.               └─tidyselect:::eval_c(expr, data_mask, context_mask)
 37.                 └─tidyselect:::reduce_sels(node, data_mask, context_mask, init = init)
 38.                   └─tidyselect:::walk_data_tree(new, data_mask, context_mask)
 39.                     └─tidyselect:::eval_sym(expr, data_mask, context_mask)
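
The deprecation warning (and the long backtrace) appears to come from passing the character vector `target_columns` straight into tidyselect-powered verbs in `find_and_remove_duplicates.R`. A minimal sketch of the quieter pattern, assuming `target_columns` is a character vector of column names (the function name and pipeline shape below are illustrative, reconstructed from the backtrace, not the exact cleanepi code):

```r
library(dplyr)

find_duplicates_sketch <- function(data, target_columns) {
  data %>%
    # across(all_of(...)) replaces the superseded group_by_at(vars(...))
    group_by(across(all_of(target_columns))) %>%
    mutate(num_dups = n()) %>%
    ungroup() %>%
    mutate(row_id = seq_len(nrow(data))) %>%
    # pick() also uses tidyselect, so wrap the external vector here too
    arrange(pick(all_of(target_columns))) %>%
    filter(.data$num_dups > 1L) %>%
    select(-"num_dups") %>%
    group_by(across(all_of(target_columns))) %>%
    mutate(group_id = cur_group_id()) %>%
    select("row_id", "group_id", everything())
}
```

Wrapping the external vector in `all_of()` is exactly what the tidyselect message recommends, and it should make these warnings (and their backtraces) disappear from the test output.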

I also observed this in the tests run on GitHub Actions, so it is not specific to my machine. @Karim-Mane do you know how this came about? Reducing this noise would be a great DX improvement, and would ultimately benefit the code produced.
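As a stopgap while the underlying calls are updated, lifecycle deprecation chatter could also be silenced per test. A sketch assuming a testthat test with withr available (`lifecycle_verbosity` is the documented option the lifecycle package consults before emitting deprecation messages):

```r
test_that("check_subject_ids fails as expected", {
  # Silence lifecycle deprecation warnings for this test only;
  # the option is restored automatically when the test exits.
  withr::local_options(lifecycle_verbosity = "quiet")
  # ... existing expectations ...
})
```

Fixing the `all_of()` usage is the better long-term solution, though, since suppression would also hide any new deprecations.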
