@@ -261,7 +261,7 @@ Handle to this layer.
 
 - - -
 
-## <a name="batch_normalize"></a>batch_normalize(name=None, learned_moments_update_rate=None, variance_epsilon=None, scale_after_normalization=None, phase=Phase.train)
+## <a name="batch_normalize"></a>batch_normalize(name=None, learned_moments_update_rate=0.0003, variance_epsilon=0.001, scale_after_normalization=False, phase=Phase.train)
 
 
 
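The hunk above replaces the `None` placeholders with `batch_normalize`'s concrete defaults. As a rough illustration of what these parameters govern, here is a minimal pure-Python sketch of batch normalization. This is not PrettyTensor's implementation: `gamma`/`beta` are hypothetical stand-ins for the learned scale and shift, and the moving-moment update controlled by `learned_moments_update_rate` is omitted.

```python
import math

def batch_normalize(values, variance_epsilon=0.001, gamma=1.0, beta=0.0,
                    scale_after_normalization=False):
    # Normalize a batch to zero mean / unit variance; variance_epsilon
    # (the new default is 0.001) keeps the divisor positive when the
    # batch variance is close to zero.
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    normed = [(v - mean) / math.sqrt(var + variance_epsilon) for v in values]
    # With scale_after_normalization=False (the new default) only the
    # learned shift beta is applied; gamma is used only when it is True.
    if scale_after_normalization:
        return [gamma * n + beta for n in normed]
    return [n + beta for n in normed]

out = batch_normalize([1.0, 2.0, 3.0, 4.0])
```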
@@ -378,7 +378,7 @@ A new PrettyTensor.
 
 - - -
 
-## <a name="conv2d"></a>conv2d(kernel, depth, activation_fn=None, stride=None, l2loss=None, init=None, stddev=None, bias=True, bias_init=<function zeros_initializer at 0x394f9b0>, edges=SAME, batch_normalize=False, name=None)
+## <a name="conv2d"></a>conv2d(kernel, depth, activation_fn=None, stride=None, l2loss=None, init=None, stddev=None, bias=True, bias_init=<function zeros_initializer at 0x251986e0>, edges=SAME, batch_normalize=False, name=None)
 
 
 
@@ -406,7 +406,8 @@ The current head must be a rank 4 Tensor.
 * bias_init: An initializer for the bias or a Tensor.
 * edges: Either SAME to use 0s for the out of bounds area or VALID to shrink
   the output size and only uses valid input pixels.
-* batch_normalize: Set to True to batch_normalize this layer.
+* batch_normalize: Supply a BatchNormalizationArguments to set the
+  parameters for batch normalization.
 * name: The name for this operation is also used to create/find the
   parameter variables.
 
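The hunk above turns `batch_normalize` from a boolean into a bundle of parameters. As a hedged sketch of what such a bundle could look like, here is a hypothetical namedtuple whose fields simply mirror the `batch_normalize` method's own parameters; the real `BatchNormalizationArguments` type may differ.

```python
from collections import namedtuple

# Hypothetical stand-in for the real BatchNormalizationArguments type;
# the field names mirror batch_normalize's parameters documented above.
BatchNormalizationArguments = namedtuple(
    'BatchNormalizationArguments',
    ['learned_moments_update_rate', 'variance_epsilon',
     'scale_after_normalization'])

bn_args = BatchNormalizationArguments(
    learned_moments_update_rate=0.0003,
    variance_epsilon=0.001,
    scale_after_normalization=False)

# A layer would then receive the bundle instead of a bare boolean, e.g.:
#   pretty_tensor_input.conv2d(5, 32, batch_normalize=bn_args)
```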
@@ -450,6 +451,53 @@ A loss.
 * ValueError: if labels is None or the type is not float or double.
 
 
+- - -
+
+## <a name="depthwise_conv2d"></a>depthwise_conv2d(kernel, channel_multiplier, activation_fn=None, stride=None, l2loss=None, init=None, stddev=None, bias=True, bias_init=<function zeros_initializer at 0x251986e0>, edges=SAME, batch_normalize=False, name=None)
+
+
+
+Adds a depth-wise convolution to the stack of operations.
+
+The current head must be a rank 4 Tensor.
+
+#### Args:
+
+
+* kernel: The size of the patch for the pool, either an int or a length 1 or
+  2 sequence (if length 1 or int, it is expanded).
+* channel_multiplier: Output channels will be a multiple of input channels.
+* activation_fn: A tuple of (activation_function, extra_parameters). Any
+  function that takes a tensor as its first argument can be used. More
+  common functions will have summaries added (e.g. relu).
+* stride: The strides as a length 1, 2 or 4 sequence or an integer. If an
+  int, length 1 or 2, the stride in the first and last dimensions are 1.
+* l2loss: Set to a value greater than 0 to use L2 regularization to decay
+  the weights.
+* init: An optional initialization. If not specified, uses Xavier
+  initialization.
+* stddev: A standard deviation to use in parameter initialization.
+* bias: Set to False to not have a bias.
+* bias_init: An initializer for the bias or a Tensor.
+* edges: Either SAME to use 0s for the out of bounds area or VALID to shrink
+  the output size and only uses valid input pixels.
+* batch_normalize: Supply a BatchNormalizationArguments to set the
+  parameters for batch normalization.
+* name: The name for this operation is also used to create/find the
+  parameter variables.
+
+#### Returns:
+
+Handle to the generated layer.
+
+
+#### Raises:
+
+
+* ValueError: If head is not a rank 4 tensor or the depth of the input
+  (4th dim) is not known.
+
+
 - - -
 
 ## <a name="diagonal_matrix_mul"></a>diagonal_matrix_mul(init=None, stddev=None, l2loss=None)
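To make the depth-wise semantics concrete: each input channel is filtered independently (channels are never mixed), and the output depth is `input_channels * channel_multiplier`. A minimal pure-Python sketch, assuming VALID edges and stride 1; this is an illustration of the operation, not PrettyTensor's implementation.

```python
def depthwise_conv2d(image, kernels):
    # image[y][x][c]; kernels[c][m][ky][kx]: one kernel per
    # (input channel, multiplier) pair. Each input channel is filtered
    # independently and output depth is in_channels * channel_multiplier.
    in_h, in_w, in_c = len(image), len(image[0]), len(image[0][0])
    mult = len(kernels[0])
    kh, kw = len(kernels[0][0]), len(kernels[0][0][0])
    out_h, out_w = in_h - kh + 1, in_w - kw + 1   # VALID edges, stride 1
    out = [[[0.0] * (in_c * mult) for _ in range(out_w)]
           for _ in range(out_h)]
    for y in range(out_h):
        for x in range(out_w):
            for c in range(in_c):
                for m in range(mult):
                    acc = 0.0
                    for ky in range(kh):
                        for kx in range(kw):
                            acc += (image[y + ky][x + kx][c]
                                    * kernels[c][m][ky][kx])
                    out[y][x][c * mult + m] = acc
    return out

image = [[[1.0], [2.0], [3.0]],
         [[4.0], [5.0], [6.0]],
         [[7.0], [8.0], [9.0]]]          # 3x3 image, one input channel
kernels = [[[[1.0, 0.0], [0.0, 0.0]],    # multiplier 0: top-left tap
            [[0.0, 0.0], [0.0, 1.0]]]]   # multiplier 1: bottom-right tap
out = depthwise_conv2d(image, kernels)   # 2x2 spatial, 1 * 2 channels
```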
@@ -648,7 +696,7 @@ A LayerWrapper with the flattened tensor.
 
 - - -
 
-## <a name="fully_connected"></a>fully_connected(size, activation_fn=None, l2loss=None, init=None, stddev=None, bias=True, bias_init=<function zeros_initializer at 0x394f9b0>, transpose_weights=False, name=None)
+## <a name="fully_connected"></a>fully_connected(size, activation_fn=None, l2loss=None, init=None, stddev=None, bias=True, bias_init=<function zeros_initializer at 0x251986e0>, transpose_weights=False, name=None)
 
 
 
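A fully connected layer computes `activation_fn(x . W + b)`. The following pure-Python sketch shows the arithmetic, including what `transpose_weights` flips; it is illustrative only, not PrettyTensor's code.

```python
def fully_connected(x, weights, bias, activation_fn=None,
                    transpose_weights=False):
    # Computes activation_fn(x . W + b). weights is input_size x size;
    # with transpose_weights=True a size x input_size matrix is accepted
    # instead, which can save a transpose when sharing weight matrices.
    if transpose_weights:
        weights = [list(col) for col in zip(*weights)]
    out = [sum(xi * w for xi, w in zip(x, col)) + b
           for col, b in zip(zip(*weights), bias)]
    return [activation_fn(v) for v in out] if activation_fn else out

y = fully_connected([1.0, 2.0], [[1.0, 0.0], [0.0, 1.0]], [0.5, 0.5])
```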
@@ -1098,7 +1146,7 @@ Computes the softmax.
 
 - - -
 
-## <a name="softmax_classifier"></a>softmax_classifier(class_count, labels=None, name=None, loss_weight=None, per_example_weights=None, weight_init=None, bias_init=<function zeros_initializer at 0x394f9b0>)
+## <a name="softmax_classifier"></a>softmax_classifier(class_count, labels=None, name=None, loss_weight=None, per_example_weights=None, weight_init=None, bias_init=<function zeros_initializer at 0x251986e0>)
 
 
 
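Conceptually, a softmax classifier maps logits to class probabilities and, when labels are supplied, produces a cross-entropy loss. A minimal numerically stable sketch of that computation (not PrettyTensor's implementation, which also creates the underlying fully connected layer):

```python
import math

def softmax_classifier(logits, labels):
    # Numerically stable softmax (subtract the max logit before exp),
    # followed by cross-entropy against the given label distribution.
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    loss = -sum(y * math.log(p) for y, p in zip(labels, probs))
    return probs, loss

probs, loss = softmax_classifier([1.0, 1.0], [1.0, 0.0])
```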
@@ -1129,7 +1177,7 @@ A tuple of the softmax's name and the loss tensor's name in m.bits.
 
 - - -
 
-## <a name="softmax_classifier_with_sampled_loss"></a>softmax_classifier_with_sampled_loss(num_classes, labels, num_sampled, num_true=None, sampled_values=None, remove_accidental_hits=True, loss_weight=None, per_example_weights=None, weight_init=None, bias_init=<function zeros_initializer at 0x394f9b0>, name=softmax_classifier)
+## <a name="softmax_classifier_with_sampled_loss"></a>softmax_classifier_with_sampled_loss(num_classes, labels, num_sampled, num_true=None, sampled_values=None, remove_accidental_hits=True, loss_weight=None, per_example_weights=None, weight_init=None, bias_init=<function zeros_initializer at 0x251986e0>, name=softmax_classifier)
 
 
 
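The idea behind a sampled softmax loss is to avoid normalizing over all `num_classes` outputs by scoring only the true class plus `num_sampled` candidate negatives. A rough pure-Python sketch of that idea, assuming a uniform sampler (the real layer uses configurable candidate samplers and handles batches):

```python
import math
import random

def sampled_softmax_loss(logits_fn, true_class, num_classes, num_sampled,
                         rng, remove_accidental_hits=True):
    # Score the true class plus num_sampled randomly drawn classes;
    # remove_accidental_hits re-draws negatives equal to the true class.
    sampled = []
    while len(sampled) < num_sampled:
        c = rng.randrange(num_classes)
        if not (remove_accidental_hits and c == true_class):
            sampled.append(c)
    logits = [logits_fn(c) for c in [true_class] + sampled]
    m = max(logits)
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_z - logits[0]  # -log softmax probability of the true class

loss = sampled_softmax_loss(lambda c: 1.0 if c == 3 else 0.0,
                            true_class=3, num_classes=10, num_sampled=4,
                            rng=random.Random(0))
```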
@@ -1369,13 +1417,16 @@ Creates a scope for the defaults that are used in a `with` block.
 
 * `activation_fn`:
   * [conv2d](PrettyTensor.md#conv2d)
+  * [depthwise_conv2d](PrettyTensor.md#depthwise_conv2d)
   * [fully_connected](PrettyTensor.md#fully_connected)
 
 * `batch_normalize`:
   * [conv2d](PrettyTensor.md#conv2d)
+  * [depthwise_conv2d](PrettyTensor.md#depthwise_conv2d)
 
 * `l2loss`:
   * [conv2d](PrettyTensor.md#conv2d)
+  * [depthwise_conv2d](PrettyTensor.md#depthwise_conv2d)
   * [diagonal_matrix_mul](PrettyTensor.md#diagonal_matrix_mul)
   * [fully_connected](PrettyTensor.md#fully_connected)
 
@@ -1393,6 +1444,7 @@ Creates a scope for the defaults that are used in a `with` block.
 
 * `stddev`:
   * [conv2d](PrettyTensor.md#conv2d)
+  * [depthwise_conv2d](PrettyTensor.md#depthwise_conv2d)
   * [diagonal_matrix_mul](PrettyTensor.md#diagonal_matrix_mul)
   * [fully_connected](PrettyTensor.md#fully_connected)
   * [lstm_cell](PrettyTensor.md#lstm_cell)
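These per-method lists document which layer arguments can be set once in a defaults scope and picked up by every matching layer built inside the `with` block. A minimal sketch of that pattern, using a hypothetical module-level dictionary and fallback helper; PrettyTensor's real mechanism is richer than this.

```python
import contextlib

_defaults = {}

@contextlib.contextmanager
def defaults_scope(**kwargs):
    # Values set here are visible to any layer built inside the block;
    # the previous defaults are restored on exit.
    saved = dict(_defaults)
    _defaults.update(kwargs)
    try:
        yield
    finally:
        _defaults.clear()
        _defaults.update(saved)

def layer_stddev(stddev=None):
    # A layer argument falls back to the scoped default when unset.
    return stddev if stddev is not None else _defaults.get('stddev')

with defaults_scope(stddev=0.02):
    inside = layer_stddev()
outside = layer_stddev()
```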