Issue
This question was asked on Stack Overflow by user anurag.
When developing a custom autograd::Function, I can write something like this:
struct my_func : torch::autograd::Function<my_func> {
    static torch::Tensor forward(torch::autograd::AutogradContext *ctx /* , param list */) {
        // do forward computation
        ctx->save_for_backward({/* some list of tensors */});
        return a_tensor;
    }

    static torch::autograd::tensor_list backward(torch::autograd::AutogradContext *ctx,
                                                 torch::autograd::tensor_list grad_out) {
        auto saved = ctx->get_saved_variables();
        // do backward computation
        return {/* grad tensor list, one per forward() input */};
    }
};

struct GNNLayer : torch::nn::Module {
    torch::Tensor forward(/* param list */) {
        torch::Tensor x = my_func::apply(/* arg list */);
        return layer_output;
    }
};
What I want to do now is write several such autograd::Functions and invoke them one after another inside my nn::Module's forward(), with gradients flowing between them during the backward pass. What is the syntax for that?
Solution
This question has not yet received a confirmed answer.
This question and answer were collected from Stack Overflow by the JTuto community and are licensed under CC BY-SA 2.5 / 3.0 / 4.0.