
Hidden physics #45

Open
wants to merge 9 commits into main

Conversation

DKreuter
Collaborator

  • added hidden physics approach
  • @nheilenkoetter: could you please check the new functionality? Does it fit the general idea of the model implementation?
  • @johannes-mueller : Could you double check the added test? Thanks 🙏

birupakshapal2017 and others added 9 commits December 10, 2024 05:46
Hidden physics files, merged for checking unit tests
Signed-off-by: Daniel Kreuter <[email protected]>

# Conflicts:
#	src/torchphysics/problem/conditions/__init__.py
#	src/torchphysics/problem/conditions/condition.py
#	tests/test_conditions.py
set matplotlib version to <3.10
Signed-off-by: Daniel Kreuter <[email protected]>
@DKreuter DKreuter added the enhancement New feature or request label Dec 17, 2024
@DKreuter DKreuter self-assigned this Dec 17, 2024
@nheilenkoetter
Contributor

Hi @DKreuter, thank you for creating the PR.

There are two comments from my side:

  • In my opinion, it would be better to provide a more general concept for the conditions. The two implemented HP Conditions are very similar to the existing ones, and the concepts aren't actually exclusive to HP models. Therefore, we should build something like a MultiModuleCondition that inherits from a class shared with the SingleModuleCondition. I will provide a suggestion in the next few days; it should be quite simple to extend the existing code in a nice way here.
  • I am not sure about all the linter fixes/changes in the code (beyond the HP parts). These are many code changes which are not a hundred percent consistent throughout the repo. Maybe we should separate the linter fixes / notational consistency from the functionality changes? In addition, the two new classes do not follow our CamelCase convention for class names; this can be fixed when working on my suggestion above.

I hope this helps :)
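The inheritance idea from the first bullet could be sketched roughly like this (a minimal, hypothetical sketch only; the class and method names are placeholders, not the actual torchphysics API):

```python
# Hypothetical sketch of the proposed hierarchy: a common base class holds the
# logic shared by single- and multi-module conditions.
class ModuleCondition:
    """Common base for conditions that register one or more modules."""

    def __init__(self, modules, weight=1.0):
        # store the modules whose parameters should be optimized
        self.modules = list(modules)
        self.weight = weight

    def registered_modules(self):
        return self.modules


class SingleModuleCondition(ModuleCondition):
    """Condition operating on a single module (as in the existing code)."""

    def __init__(self, module, weight=1.0):
        super().__init__([module], weight=weight)


class MultiModuleCondition(ModuleCondition):
    """Condition operating on several modules, e.g. for hidden physics models."""

    def __init__(self, modules, weight=1.0):
        super().__init__(modules, weight=weight)
```

This way the HP-specific conditions would only differ in how many modules they register, while all shared behavior lives in the base class.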

@nheilenkoetter
Contributor

I've now found some time to take a closer look at the implemented Conditions:

For HPM_EquationLoss_at_Sampler, I think the overall idea of the user evaluating the model inside the residual function makes this a helpful class for many setups, not only for hidden physics but also for other, more complex training configurations. So a renaming and a slight restructuring could make this a better fit for the library. I will attach a (non-tested) suggestion for a class ResidualCondition to this reply; it also includes a data-loading option for cases where this might be helpful. I am optimistic that this will work right away for the hidden physics examples you have provided, which would minimize the additional workload. What do you think, @birupakshapal2017?
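To illustrate the pattern of "the user evaluating the model inside the residual function" in isolation (a toy sketch with hypothetical names, independent of the torchphysics API):

```python
# Toy sketch: the user-supplied residual function calls the model itself,
# instead of the condition evaluating the model and passing outputs in.
# 'make_residual_fn', the toy model, and the toy residual are all hypothetical.
def make_residual_fn(model):
    def residual_fn(t, x):
        u = model(t, x)       # the user evaluates the model explicitly here
        return u - (t + x)    # toy residual: zero when u matches t + x
    return residual_fn
```

The condition then only needs to supply the input points and reduce the returned residual, which is what makes this structure reusable beyond hidden physics models.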

I don't see the use case for the HPM_EquationLoss_at_DataPoints class. In its forward pass, the y_reference is not used, i.e. only the collocation points are loaded from data. For this use case, we have implemented tp.samplers.DataSampler. In case more flexibility is needed, it would imo be cleaner to stay in this scheme and implement another sampler, and to use DataConditions only for paired data.

```python
class ResidualCondition(Condition):
    """
    A general condition that can be used with an arbitrary number of modules
    and incorporate a sampler as well as a data loader.

    Parameters
    ----------
    modules : list of torchphysics.models.Model
        The torch modules which should be optimized. This is necessary to
        register the parameters.
    residual_fn : callable
        A user-defined function that computes the residual (unreduced loss) from
        inputs and outputs of the model, e.g. by using utils.differentialoperators
        and/or domain.normal.
    sampler : torchphysics.samplers.PointSampler, optional
        A sampler that creates the points in the domain of the residual function.
    dataloader : torchphysics.utils.data.PointsDataLoader, optional
        A TP PointsDataLoader which supplies the iterator to load point-target pairs.
    data_functions : dict, optional
        A dictionary of user-defined functions and their names (as keys). Can be
        used e.g. for right-hand sides in PDEs or functions in boundary conditions.
    parameters : torchphysics.models.Parameter, optional
        A Parameter that can be used in the residual_fn and should be learned in
        parallel, e.g. based on data (in an additional DataCondition).
    error_fn : callable, optional
        Function that will be applied to the output of the residual_fn to compute
        the unreduced loss. Should reduce only along the last (i.e. space-) axis.
    reduce_fn : callable, optional
        Function that will be applied to reduce the loss to a scalar. Defaults to
        torch.mean.
    name : str, optional
        The name of this condition which will be monitored in logging.
    track_gradients : bool, optional
        Whether gradients w.r.t. the inputs should be tracked during training or
        not. Defaults to True, since this is needed to compute differential operators.
    weight : float, optional
        The weight multiplied with the loss of this condition during training.
    """

    def __init__(self, modules, residual_fn, sampler=None, dataloader=None,
                 data_functions={}, parameters=Parameter.empty(),
                 error_fn=SquaredError, reduce_fn=torch.mean,
                 name='residualcondition', track_gradients=True, weight=1.0):
        super().__init__(name=name, weight=weight, track_gradients=track_gradients)

        # modules only need to be registered here, they are not used in the
        # forward pass directly
        self.modules = torch.nn.ModuleList(modules)
        self.parameters = parameters
        self.register_parameter(name + '_params', self.parameters.as_tensor)

        # we allow using only a single sampler or dataloader, or the combination
        # of both. This ensures that points from different dimensions are
        # combined in a valid way.
        if sampler is not None and dataloader is not None:
            # possible extension to allow combining data and sampled points
            # for different variables
            assert isinstance(sampler, PointSampler)
            assert isinstance(dataloader, PointsDataLoader)
            self.dataloader = sampler * dataloader  # not implemented yet, might not work with adaptive samplers
            self.sampler = None
        elif sampler is not None:
            assert isinstance(sampler, PointSampler)
            self.sampler = sampler
            self.dataloader = None
        elif dataloader is not None:
            assert isinstance(dataloader, PointsDataLoader)
            self.dataloader = dataloader
            self.sampler = None
        else:
            raise ValueError("Either a sampler or a dataloader has to be passed.")

        self.residual_fn = UserFunction(residual_fn)
        self.error_fn = error_fn
        self.reduce_fn = reduce_fn

        # data functions are only evaluated on sampled points
        self.data_functions = {}
        if self.sampler is not None:
            if sampler.is_adaptive:
                self.last_unreduced_loss = None
            self.data_functions = self._setup_data_functions(data_functions, sampler)

    def forward(self, device='cpu', iteration=None):
        if self.sampler is not None:
            if self.sampler.is_adaptive:
                x = self.sampler.sample_points(
                    unreduced_loss=self.last_unreduced_loss, device=device)
                self.last_unreduced_loss = None
            else:
                x = self.sampler.sample_points(device=device)
            y = None
        else:
            # load the next batch of point-target pairs, restarting the
            # iterator when the dataloader is exhausted
            try:
                batch = next(self.iterator)
            except (StopIteration, AttributeError):
                self.iterator = iter(self.dataloader)
                batch = next(self.iterator)
            x = batch[0].to(device)
            y = batch[1].to(device)

        if self.track_gradients:
            x_coordinates, x = x.track_coord_gradients()
        else:
            x_coordinates = x.coordinates

        data = {}
        for fun in self.data_functions:
            data[fun] = self.data_functions[fun](x_coordinates)

        unreduced_loss = self.error_fn(self.residual_fn(
            {**(y.coordinates if y is not None else {}),
             **x_coordinates,
             **self.parameters.coordinates,
             **data}))

        if self.sampler is not None and self.sampler.is_adaptive:
            self.last_unreduced_loss = unreduced_loss

        return self.reduce_fn(unreduced_loss)

    def _move_static_data(self, device):
        if self.sampler is not None and self.sampler.is_static:
            for fn in self.data_functions:
                self.data_functions[fn].fun = self.data_functions[fn].fun.to(device)
```

@DKreuter
Collaborator Author

@birupakshapal2017: Could you please check the comments from @nheilenkoetter. From my point of view, we should integrate the proposed general condition approach. Please rewrite your code accordingly.

@DKreuter DKreuter assigned DKreuter and unassigned DKreuter Jan 13, 2025
@Kangyukuan

Could you please explain the composition of the data in boschresearch/torchphysics/examples/pinn/data/heat-eq-inverse-data.npy? I'm not quite clear about your data.

@Kangyukuan

Data shape: torch.Size([30000, 4])
tensor([[ 1.3000, 4.2500, 2.7500, 34.6317],
[ 1.9000, 3.7500, 8.2500, 36.2271],
[ 4.2000, 3.5000, 4.7500, 15.6455],
[ 2.6000, 6.5000, 5.2500, 39.6830],
[ 3.8000, 5.5000, 9.7500, 7.3653],
[ 2.8000, 7.7500, 4.2500, 33.0970],
[ 1.3000, 7.2500, 1.5000, 25.8905],
[ 3.1000, 5.5000, 8.5000, 31.4913],
[ 3.9000, 3.7500, 8.0000, 22.1346],
[ 4.5000, 1.7500, 0.2500, 0.6929],
[ 4.7000, 9.5000, 5.7500, 11.4334],

@Kangyukuan

thank you very much

@TomF98
Contributor

TomF98 commented Jan 15, 2025

> Could you please explain the composition of the data in boschresearch/torchphysics/examples/pinn/data/heat-eq-inverse-data.npy? I'm not quite clear about your data.

@Kangyukuan the data has the shape (time, space_x, space_y, temperature), randomly selected from FEM results. Please open a new issue next time.
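Given that column layout, splitting the array into collocation points and temperature targets could look like this (a minimal NumPy sketch; the random array below is only a stand-in for heat-eq-inverse-data.npy):

```python
import numpy as np

# Stand-in for np.load("heat-eq-inverse-data.npy"): an (N, 4) array with
# columns (time, space_x, space_y, temperature), as described above.
data = np.random.rand(30000, 4)

# unpack the individual columns
t, x, y, temperature = data[:, 0], data[:, 1], data[:, 2], data[:, 3]

# or split into model inputs and targets for a data condition
inputs = data[:, :3]   # collocation points (t, x, y)
targets = data[:, 3:]  # measured temperature values, kept 2-D
```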
