Update constants to use values from CODATA2022 #5661

Open · wants to merge 7 commits into development
Conversation

@dpgrote dpgrote commented Feb 12, 2025

The values of the physical constants were from CODATA 2018. These should be updated to the current accepted values as specified in CODATA 2022.

This breaks many CI benchmarks, since the checksum comparisons are done at a precision tighter than the changes in the constants.

Note that scipy recently updated its constants to use CODATA 2022 (in version 1.15.0, released January 3). This may cause problems in the CI tests. However, we use Ubuntu 20.04 to run the tests, and there the version of scipy is 1.3.3, which uses constants from CODATA 2014!
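For reference, a quick way to check which CODATA dataset a given scipy installation provides is to print one of the measured constants; the electron mass differs between datasets, while c, e, and hbar are exact under the 2019 SI redefinition and do not change. A minimal sketch:

import scipy
import scipy.constants

# The electron mass distinguishes the datasets:
#   CODATA 2018: 9.1093837015e-31 kg (scipy < 1.15.0)
#   CODATA 2022: 9.1093837139e-31 kg (scipy >= 1.15.0)
print(scipy.__version__)
print(scipy.constants.m_e)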

The following CI tests needed to be updated:

  • test_rz_galilean_psatd_current_correction_psb needed its charge conservation tolerance increased. This seems to be by chance: running this test locally (on Mac), it passes the tolerance check.


EZoni commented Feb 12, 2025

Thanks, Dave.

If there is a large number of benchmarks that need to be reset, this could be a good opportunity to test our tool Tools/DevUtils/update_benchmarks_from_azure_output.py again, along with its instructions in our documentation.

I updated and tested the tool manually in #5372; however, it is not yet tested automatically.


dpgrote commented Feb 12, 2025

> Thanks, Dave.
>
> If there is a large number of benchmarks that need to be reset, this could be a good opportunity to test our tool Tools/DevUtils/update_benchmarks_from_azure_output.py again, along with its instructions in our documentation.
>
> I updated and tested the tool manually in #5372; however, it is not yet tested automatically.

Thanks @EZoni ! It worked and was easy to do. BTW, to download the raw log file, I copied the URL from the location bar and pasted it into a curl command, "curl https://dev.azure.com/ECP-WarpX/... > raw_log", which made the download easy.

Note that almost all of the changes in the benchmarks are small, as expected: ~1.e-9 or smaller. One exception is the test_3d_beam_beam_collision test, where errors of order 10% are seen, presumably because it runs for a long time, allowing the differences to grow.
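As a sanity check on the ~1.e-9 figure, the relative shift in the electron mass between CODATA 2018 and CODATA 2022 can be computed directly (e, c, and hbar are exact and unchanged, so the measured masses drive the benchmark changes):

# Relative change in the electron mass between the two CODATA datasets.
m_e_2018 = 9.1093837015e-31  # kg, CODATA 2018
m_e_2022 = 9.1093837139e-31  # kg, CODATA 2022
print((m_e_2022 - m_e_2018) / m_e_2018)  # ~1.4e-9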


EZoni commented Feb 13, 2025

> Note that almost all of the changes in the benchmarks are small, as expected: ~1.e-9 or smaller. One exception is the test_3d_beam_beam_collision test, where errors of order 10% are seen, presumably because it runs for a long time, allowing the differences to grow.

I agree. I think that test has relatively large tolerances anyway, if I remember correctly. @aeriforme, what do you think?


EZoni commented Feb 13, 2025

> Thanks @EZoni ! It worked and was easy to do. BTW, to download the raw log file, I copied the URL from the location bar and pasted it into a curl command, "curl https://dev.azure.com/ECP-WarpX/... > raw_log", which made the download easy.

Thanks for pointing this out! I added this hint to our documentation in #5663.

@ax3l ax3l added the component: core Core WarpX functionality label Feb 19, 2025
@ax3l ax3l mentioned this pull request Feb 19, 2025

@lucafedeli88 lucafedeli88 left a comment


Thanks @dpgrote ! I think that updating constants to CODATA 2022 is a good idea.

I've reviewed the logs of the tests that have failed.

I agree that the issues with test_3d_beam_beam_collision.checksum (and also with test_2d_collision_xz_picmi.checksum) are likely due to the fact that these tests are relatively long.

We need to investigate a bit more why test_1d_ohm_solver_ion_beam_picmi and test_3d_qed_schwinger_2 show non-negligible discrepancies (I will have a look at the QED-related one).

We need to increase the tolerance of several analysis scripts, since they seem to be a bit too strict:
test_2d_theta_implicit_jfnk_vandb, test_2d_theta_implicit_jfnk_vandb_filtered, test_2d_theta_implicit_jfnk_vandb_picmi, test_2d_theta_implicit_strang_psatd, test_2d_pec_field_insulator_implicit_restart, test_3d_particle_boundaries, test_3d_load_external_field_grid_picmi, test_3d_particle_fields_diags, test_3d_reduced_diags.
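A typical fix in those scripts would be a one-line tolerance bump; a hypothetical sketch, with illustrative numbers rather than the values actually used in the scripts:

# Illustrative only: relax an over-tight tolerance so that the ~1e-9
# relative shift from the constants update no longer fails the check.
error_rel = 1.4e-9   # e.g. relative change caused by the new constants
tolerance = 1.0e-8   # relaxed from a tighter value such as 1e-12
assert error_rel < tolerance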

@lucafedeli88

> Thanks @dpgrote ! I think that updating constants to CODATA 2022 is a good idea.
>
> I've reviewed the logs of the tests that have failed.
>
> I agree that the issues with test_3d_beam_beam_collision.checksum (and also with test_2d_collision_xz_picmi.checksum) are likely due to the fact that these tests are relatively long.
>
> We need to investigate a bit more why test_1d_ohm_solver_ion_beam_picmi and test_3d_qed_schwinger_2 show non-negligible discrepancies (I will have a look at the QED-related one).
>
> We need to increase the tolerance of several analysis scripts, since they seem to be a bit too strict: test_2d_theta_implicit_jfnk_vandb, test_2d_theta_implicit_jfnk_vandb_filtered, test_2d_theta_implicit_jfnk_vandb_picmi, test_2d_theta_implicit_strang_psatd, test_2d_pec_field_insulator_implicit_restart, test_3d_particle_boundaries, test_3d_load_external_field_grid_picmi, test_3d_particle_fields_diags, test_3d_reduced_diags.

I can provide a possible explanation for the QED-related test. Looking at the analysis script I found this comment:

elif test_number == "2":
    # Second Schwinger test with stronger EM field. Many pairs are created and a Gaussian
    # distribution is used to get the weights of the particles. This is the most sensitive test
    # because the relative std is extremely low.
    Ex_test = 1.0e18
    Bx_test = 1679288857.0516706
    By_test = 525665014.1557486
    Bz_test = 1836353079.9561853

Note that the analysis script uses these constants (CODATA 2014, I think):

c = 299792458.
m_e = 9.10938356e-31
e = 1.6021766208e-19
hbar = 1.054571800e-34

while PICSAR-QED uses (CODATA 2018):

template<typename RealType = double>
constexpr auto light_speed = RealType(299792458.);

template<typename RealType = double>
constexpr auto electron_mass = RealType(9.1093837015e-31);

template<typename RealType = double>
constexpr auto elementary_charge = RealType(1.602176634e-19);

template<typename RealType = double>
constexpr auto reduced_plank = RealType(1.054571817e-34);
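One way to gauge the size of this inconsistency is to evaluate the Schwinger critical field E_S = m_e^2 c^3 / (e hbar) with both sets of constants; a minimal sketch:

def schwinger_field(c, m_e, e, hbar):
    # Schwinger critical field E_S = m_e^2 c^3 / (e * hbar), in V/m
    return m_e**2 * c**3 / (e * hbar)

# Analysis-script constants (CODATA 2014) vs PICSAR-QED constants (CODATA 2018)
E_2014 = schwinger_field(299792458., 9.10938356e-31, 1.6021766208e-19, 1.054571800e-34)
E_2018 = schwinger_field(299792458., 9.1093837015e-31, 1.602176634e-19, 1.054571817e-34)

print(E_2014)                         # ~1.32e18 V/m, comparable to Ex_test
print(abs(E_2018 - E_2014) / E_2014)  # relative difference, ~7e-9

Since the quoted comment notes that the relative std in this test is extremely low, even a shift of that size in the field scale could plausibly push the pair statistics outside the tolerance.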

@dpgrote
Copy link
Member Author

dpgrote commented Feb 20, 2025

@lucafedeli88 Thanks for looking over this PR. I think a number of the issues are related to the use of scipy.constants in the analysis scripts, since those constants are inconsistent, i.e. still the CODATA 2018 values. Until this issue is resolved, I don't think this PR should be merged. Unfortunately, there is no easy solution. The simplest is probably to use the most recent version of Ubuntu for the CI tests, which ships the most recent scipy with the updated constants.

A more robust, longer-term solution would be to create a new lightweight Python module that provides the constants with values consistent with those in the C++ code, and to use it everywhere instead of relying on scipy.constants. This would guarantee that the values are always consistent.
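As a minimal sketch of what such a module could look like, with the CODATA 2022 values (the module and attribute names here are hypothetical, not an existing WarpX API):

# warpx_constants.py -- hypothetical lightweight constants module.
# Values are CODATA 2022; c, q_e, and hbar are exact under the 2019 SI
# redefinition, so only the measured constants change between datasets.
c = 299792458.0          # speed of light in vacuum [m/s] (exact)
q_e = 1.602176634e-19    # elementary charge [C] (exact)
hbar = 1.054571817e-34   # reduced Planck constant [J*s] (exact)
m_e = 9.1093837139e-31   # electron mass [kg]
m_p = 1.67262192595e-27  # proton mass [kg]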
