mirror of https://github.com/labmlai/annotated_deep_learning_paper_implementations.git
synced 2025-08-14 17:41:37 +08:00
made a couple of changes
@@ -43,10 +43,10 @@ class RHNCell(Module):
     $\odot$ stands for element-wise multiplication.
 
     Here we have made a couple of changes to notations from the paper.
-    To avoid confusion with time, the gate is represented with $g$,
+    To avoid confusion with time, gate is represented with $g$,
     which was $t$ in the paper.
     To avoid confusion with multiple layers we use $d$ for depth and $D$ for
-    total depth instead of $l$ and $L$ from paper.
+    total depth instead of $l$ and $L$ from the paper.
 
     We have also replaced the weight matrices and bias vectors from the equations with
     linear transforms, because that's how the implementation is going to look like.
@@ -57,7 +57,7 @@ class RHNCell(Module):
     def __init__(self, input_size: int, hidden_size: int, depth: int):
         """
         `input_size` is the feature length of the input and `hidden_size` is
-        feature length of the cell.
+        the feature length of the cell.
         `depth` is $D$.
         """
         super().__init__()
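The docstring edited above describes the RHN cell update: a gate $g$, depth steps $d = 1 \dots D$, and linear transforms in place of explicit weight matrices and biases. As a rough illustration of that recurrence, here is a minimal, framework-free sketch of one time step using the coupled-gate form $s_d = h \odot g + s_{d-1} \odot (1 - g)$. NumPy matrices stand in for the repository's PyTorch linear transforms, and the function name `rhn_cell_step` and the exact gating layout are assumptions for illustration, not the repository's code.

```python
import numpy as np

def rhn_cell_step(x, s, input_w, hidden_ws, depth):
    """One time step of a Recurrent Highway Network cell (illustrative sketch).

    Uses the coupled-gate form s_d = h ⊙ g + s_{d-1} ⊙ (1 - g), where g is
    the gate (t in the paper) and ⊙ is element-wise multiplication.
    `input_w` and `hidden_ws` are stand-ins for the linear transforms
    mentioned in the docstring (names are hypothetical).
    """
    for d in range(depth):
        # Each matrix produces both the candidate and the gate pre-activations.
        pre = hidden_ws[d] @ s
        if d == 0:
            # The input contributes only at the first depth step.
            pre = pre + input_w @ x
        n = pre.shape[0] // 2
        h = np.tanh(pre[:n])                 # candidate activation
        g = 1.0 / (1.0 + np.exp(-pre[n:]))   # gate g (sigmoid)
        s = h * g + s * (1.0 - g)            # highway update, element-wise
    return s
```

Because the update is a convex combination of `tanh` outputs and the previous state, a state started at zero stays inside $(-1, 1)$ after any number of depth steps.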