This repository was archived by the owner on Dec 29, 2022. It is now read-only.

Commit 5649411 (parent f09839f)

13 files changed: 592 additions & 192 deletions

docs/Bookkeeper.md — 3 additions & 0 deletions

```diff
@@ -148,6 +148,9 @@ A single tensor that is the sum of all losses.
 
 Calculates the exponential moving average.
 
+TODO(): check if this implementation of moving average can now
+be replaced by tensorflows implementation.
+
 Adds a variable to keep track of the exponential moving average and adds an
 update operation to the bookkeeper. The name of the variable is
 '%s_average' % name prefixed with the current variable scope.
```
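The docstring above describes an exponential moving average kept as a variable plus an update operation. As a plain-Python illustration of the arithmetic involved (an invented sketch, not Pretty Tensor's actual implementation):

```python
class ExponentialMovingAverage:
    """Toy exponential moving average; illustrative only."""

    def __init__(self, decay=0.9):
        self.decay = decay
        self.value = None

    def update(self, x):
        # The first observation initializes the average; afterwards the
        # stored value decays toward each new observation.
        if self.value is None:
            self.value = x
        else:
            self.value = self.decay * self.value + (1.0 - self.decay) * x
        return self.value
```

The TODO in the diff notes that TensorFlow ships its own moving-average machinery, which could replace a hand-rolled loop like this.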

docs/PrettyTensor.md — 58 additions & 6 deletions
```diff
@@ -261,7 +261,7 @@ Handle to this layer.
 
 - - -
 
-## <a name="batch_normalize"></a>batch_normalize(name=None, learned_moments_update_rate=None, variance_epsilon=None, scale_after_normalization=None, phase=Phase.train)
+## <a name="batch_normalize"></a>batch_normalize(name=None, learned_moments_update_rate=0.0003, variance_epsilon=0.001, scale_after_normalization=False, phase=Phase.train)
 
 
 
```
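The new signature replaces `None` placeholders with concrete defaults (`learned_moments_update_rate=0.0003`, `variance_epsilon=0.001`, `scale_after_normalization=False`). The role of `variance_epsilon` can be sketched in plain Python; this is an illustrative reimplementation of the normalization formula, not the library's code:

```python
import math

def normalize(values, mean, variance, variance_epsilon=0.001,
              gamma=1.0, scale_after_normalization=False):
    """Sketch of batch normalization for one feature (illustrative only)."""
    out = []
    for v in values:
        # variance_epsilon keeps the divisor positive even when the
        # batch variance is (numerically) zero.
        normed = (v - mean) / math.sqrt(variance + variance_epsilon)
        if scale_after_normalization:
            normed *= gamma  # optional learned scale, off by default
        out.append(normed)
    return out
```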

```diff
@@ -378,7 +378,7 @@ A new PrettyTensor.
 
 - - -
 
-## <a name="conv2d"></a>conv2d(kernel, depth, activation_fn=None, stride=None, l2loss=None, init=None, stddev=None, bias=True, bias_init=<function zeros_initializer at 0x394f9b0>, edges=SAME, batch_normalize=False, name=None)
+## <a name="conv2d"></a>conv2d(kernel, depth, activation_fn=None, stride=None, l2loss=None, init=None, stddev=None, bias=True, bias_init=<function zeros_initializer at 0x251986e0>, edges=SAME, batch_normalize=False, name=None)
 
 
 
```

```diff
@@ -406,7 +406,8 @@ The current head must be a rank 4 Tensor.
 * bias_init: An initializer for the bias or a Tensor.
 * edges: Either SAME to use 0s for the out of bounds area or VALID to shrink
 the output size and only uses valid input pixels.
-* batch_normalize: Set to True to batch_normalize this layer.
+* batch_normalize: Supply a BatchNormalizationArguments to set the
+parameters for batch normalization.
 * name: The name for this operation is also used to create/find the
 parameter variables.
 
```
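The `batch_normalize` argument moves from a plain boolean to a parameter bundle. A toy sketch of what that dispatch might look like (field names are taken from the docs; the `BnArgs` name and everything else here is hypothetical):

```python
import collections

# Hypothetical stand-in for the bundle described in the docs above.
BnArgs = collections.namedtuple(
    'BnArgs',
    ['learned_moments_update_rate', 'variance_epsilon',
     'scale_after_normalization'])

def maybe_batch_normalize(layer, batch_normalize=False):
    # False (the default) leaves the layer untouched; a bundle carries
    # the normalization parameters explicitly instead of a bare True.
    if not batch_normalize:
        return layer
    return ('batch_norm', layer,
            batch_normalize.learned_moments_update_rate,
            batch_normalize.variance_epsilon)
```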

```diff
@@ -450,6 +451,53 @@ A loss.
 * ValueError: if labels is None or the type is not float or double.
 
 
+- - -
+
+## <a name="depthwise_conv2d"></a>depthwise_conv2d(kernel, channel_multiplier, activation_fn=None, stride=None, l2loss=None, init=None, stddev=None, bias=True, bias_init=<function zeros_initializer at 0x251986e0>, edges=SAME, batch_normalize=False, name=None)
+
+
+
+Adds a depth-wise convolution to the stack of operations.
+
+The current head must be a rank 4 Tensor.
+
+#### Args:
+
+
+* kernel: The size of the patch for the pool, either an int or a length 1 or
+2 sequence (if length 1 or int, it is expanded).
+* channel_multiplier: Output channels will be a multiple of input channels.
+* activation_fn: A tuple of (activation_function, extra_parameters). Any
+function that takes a tensor as its first argument can be used. More
+common functions will have summaries added (e.g. relu).
+* stride: The strides as a length 1, 2 or 4 sequence or an integer. If an
+int, length 1 or 2, the stride in the first and last dimensions are 1.
+* l2loss: Set to a value greater than 0 to use L2 regularization to decay
+the weights.
+* init: An optional initialization. If not specified, uses Xavier
+initialization.
+* stddev: A standard deviation to use in parameter initialization.
+* bias: Set to False to not have a bias.
+* bias_init: An initializer for the bias or a Tensor.
+* edges: Either SAME to use 0s for the out of bounds area or VALID to shrink
+the output size and only uses valid input pixels.
+* batch_normalize: Supply a BatchNormalizationArguments to set the
+parameters for batch normalization.
+* name: The name for this operation is also used to create/find the
+parameter variables.
+
+#### Returns:
+
+Handle to the generated layer.
+
+
+#### Raises:
+
+
+* ValueError: If head is not a rank 4 tensor or the depth of the input
+(4th dim) is not known.
+
+
 - - -
 
 ## <a name="diagonal_matrix_mul"></a>diagonal_matrix_mul(init=None, stddev=None, l2loss=None)
```
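The new `depthwise_conv2d` multiplies the channel count rather than replacing it: each input channel gets `channel_multiplier` filters of its own. A small sketch of the resulting shape arithmetic, using the SAME/VALID edge semantics described in the docs (illustrative, not the library's code):

```python
def depthwise_output_shape(batch, height, width, in_channels,
                           kernel, channel_multiplier, stride=1,
                           edges='SAME'):
    """Output shape of a depth-wise conv on an NHWC tensor (sketch)."""
    if edges == 'SAME':
        # Zero-padded edges: spatial size only shrinks by the stride.
        out_h = -(-height // stride)   # ceiling division
        out_w = -(-width // stride)
    else:  # 'VALID' keeps only fully covered positions.
        out_h = (height - kernel) // stride + 1
        out_w = (width - kernel) // stride + 1
    # Each input channel yields channel_multiplier output channels.
    return (batch, out_h, out_w, in_channels * channel_multiplier)
```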
```diff
@@ -648,7 +696,7 @@ A LayerWrapper with the flattened tensor.
 
 - - -
 
-## <a name="fully_connected"></a>fully_connected(size, activation_fn=None, l2loss=None, init=None, stddev=None, bias=True, bias_init=<function zeros_initializer at 0x394f9b0>, transpose_weights=False, name=None)
+## <a name="fully_connected"></a>fully_connected(size, activation_fn=None, l2loss=None, init=None, stddev=None, bias=True, bias_init=<function zeros_initializer at 0x251986e0>, transpose_weights=False, name=None)
 
 
 
```

```diff
@@ -1098,7 +1146,7 @@ Computes the softmax.
 
 - - -
 
-## <a name="softmax_classifier"></a>softmax_classifier(class_count, labels=None, name=None, loss_weight=None, per_example_weights=None, weight_init=None, bias_init=<function zeros_initializer at 0x394f9b0>)
+## <a name="softmax_classifier"></a>softmax_classifier(class_count, labels=None, name=None, loss_weight=None, per_example_weights=None, weight_init=None, bias_init=<function zeros_initializer at 0x251986e0>)
 
 
 
```

```diff
@@ -1129,7 +1177,7 @@ A tuple of the softmax's name and the loss tensor's name in m.bits.
 
 - - -
 
-## <a name="softmax_classifier_with_sampled_loss"></a>softmax_classifier_with_sampled_loss(num_classes, labels, num_sampled, num_true=None, sampled_values=None, remove_accidental_hits=True, loss_weight=None, per_example_weights=None, weight_init=None, bias_init=<function zeros_initializer at 0x394f9b0>, name=softmax_classifier)
+## <a name="softmax_classifier_with_sampled_loss"></a>softmax_classifier_with_sampled_loss(num_classes, labels, num_sampled, num_true=None, sampled_values=None, remove_accidental_hits=True, loss_weight=None, per_example_weights=None, weight_init=None, bias_init=<function zeros_initializer at 0x251986e0>, name=softmax_classifier)
 
 
 
```

```diff
@@ -1369,13 +1417,16 @@ Creates a scope for the defaults that are used in a `with` block.
 
 * `activation_fn`:
 * [conv2d](PrettyTensor.md#conv2d)
+* [depthwise_conv2d](PrettyTensor.md#depthwise_conv2d)
 * [fully_connected](PrettyTensor.md#fully_connected)
 
 * `batch_normalize`:
 * [conv2d](PrettyTensor.md#conv2d)
+* [depthwise_conv2d](PrettyTensor.md#depthwise_conv2d)
 
 * `l2loss`:
 * [conv2d](PrettyTensor.md#conv2d)
+* [depthwise_conv2d](PrettyTensor.md#depthwise_conv2d)
 * [diagonal_matrix_mul](PrettyTensor.md#diagonal_matrix_mul)
 * [fully_connected](PrettyTensor.md#fully_connected)
 
@@ -1393,6 +1444,7 @@ Creates a scope for the defaults that are used in a `with` block.
 
 * `stddev`:
 * [conv2d](PrettyTensor.md#conv2d)
+* [depthwise_conv2d](PrettyTensor.md#depthwise_conv2d)
 * [diagonal_matrix_mul](PrettyTensor.md#diagonal_matrix_mul)
 * [fully_connected](PrettyTensor.md#fully_connected)
 * [lstm_cell](PrettyTensor.md#lstm_cell)
```
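These defaults are set inside a `with pt.defaults_scope(...)` block and picked up by any layer method that accepts the corresponding keyword. A toy reimplementation of the mechanism — a dictionary of fallbacks consulted when a keyword is left unset — to illustrate the idea (a sketch; not how Pretty Tensor actually implements scoping):

```python
import contextlib

# Module-level fallbacks, populated only inside the `with` block.
_DEFAULTS = {}

@contextlib.contextmanager
def defaults_scope(**kwargs):
    old = dict(_DEFAULTS)
    _DEFAULTS.update(kwargs)
    try:
        yield
    finally:
        # Restore the previous defaults on exit, even on error.
        _DEFAULTS.clear()
        _DEFAULTS.update(old)

def fake_conv2d(kernel, depth, stddev=None):
    # A stand-in layer: unset keywords fall back to the scope defaults.
    if stddev is None:
        stddev = _DEFAULTS.get('stddev')
    return stddev
```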

docs/pretty_tensor_top_level.md — 19 additions & 0 deletions
```diff
@@ -4,6 +4,21 @@
 
 
 [TOC]
+- - -
+
+## BatchNormalizationArguments
+
+BatchNormalizationArguments(learned_moments_update_rate, variance_epsilon, scale_after_normalization)
+- - -
+
+### Properties
+
+* count
+* index
+* learned_moments_update_rate
+* scale_after_normalization
+* variance_epsilon
+
 - - -
 ## Class Bookkeeper
 
```
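The `count` and `index` entries in that property list are the standard tuple methods, which suggests `BatchNormalizationArguments` is a `collections.namedtuple`. A sketch consistent with the constructor line shown above (the definition here is a guess at the shape, not the actual source):

```python
import collections

# Field order matches the documented constructor signature.
BatchNormalizationArguments = collections.namedtuple(
    'BatchNormalizationArguments',
    ['learned_moments_update_rate', 'variance_epsilon',
     'scale_after_normalization'])

# Fields are readable by name; count/index come free with tuples.
bn = BatchNormalizationArguments(
    learned_moments_update_rate=0.0003,
    variance_epsilon=0.001,
    scale_after_normalization=False)
```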

```diff
@@ -439,13 +454,16 @@ Creates a scope for the defaults that are used in a `with` block.
 
 * `activation_fn`:
 * [conv2d](PrettyTensor.md#conv2d)
+* [depthwise_conv2d](PrettyTensor.md#depthwise_conv2d)
 * [fully_connected](PrettyTensor.md#fully_connected)
 
 * `batch_normalize`:
 * [conv2d](PrettyTensor.md#conv2d)
+* [depthwise_conv2d](PrettyTensor.md#depthwise_conv2d)
 
 * `l2loss`:
 * [conv2d](PrettyTensor.md#conv2d)
+* [depthwise_conv2d](PrettyTensor.md#depthwise_conv2d)
 * [diagonal_matrix_mul](PrettyTensor.md#diagonal_matrix_mul)
 * [fully_connected](PrettyTensor.md#fully_connected)
 
@@ -463,6 +481,7 @@ Creates a scope for the defaults that are used in a `with` block.
 
 * `stddev`:
 * [conv2d](PrettyTensor.md#conv2d)
+* [depthwise_conv2d](PrettyTensor.md#depthwise_conv2d)
 * [diagonal_matrix_mul](PrettyTensor.md#diagonal_matrix_mul)
 * [fully_connected](PrettyTensor.md#fully_connected)
 * [lstm_cell](PrettyTensor.md#lstm_cell)
```

prettytensor/__init__.py — 1 addition & 0 deletions

```diff
@@ -55,4 +55,5 @@
 from prettytensor.pretty_tensor_class import wrap
 from prettytensor.pretty_tensor_class import wrap_sequence
 
+from prettytensor.pretty_tensor_normalization_methods import BatchNormalizationArguments
 from prettytensor.scopes import make_template
```
