Fix Javadoc errors (#152) #6

Merged · 1 commit · Nov 19, 2020
pom.xml: 12 additions, 0 deletions
@@ -119,6 +119,18 @@
</dependencyManagement>

<profiles>
<!--
Developer profile
Enable javadoc generation so the developer is aware of any mistake that might
ultimately prevent the deployment of the artifacts
-->
<profile>
<id>dev</id>
<properties>
<maven.javadoc.skip>false</maven.javadoc.skip>
</properties>
</profile>

<!--
Deploying profile
Build the Javadoc when deploying
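With this change, javadoc generation stays skipped in normal builds but can be switched on locally. A minimal usage sketch, assuming the standard Maven `-P` profile activation (the goal shown is illustrative, not prescribed by the PR):

```shell
# Activate the "dev" profile so maven.javadoc.skip is false and the
# javadoc plugin runs, surfacing doc errors before a release build.
mvn clean install -Pdev
```

Running without `-Pdev` leaves `maven.javadoc.skip` at its default, so day-to-day builds are not slowed down by javadoc generation.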
@@ -155,8 +155,8 @@ public CategoricalCrossentropy(Ops tf, String name, boolean fromLogits) {
* @param tf the TensorFlow Ops
* @param fromLogits Whether to interpret predictions as a tensor of logit values
* @param labelSmoothing Float in <code>[0, 1]</code>. When <code>&gt; 0</code>, label values are smoothed, meaning the
* confidence on label values are relaxed. e.g. <code>labelSmoothing=0.2<code> means that we will use a
* value of </code>0.1<code> for label </code>0<code> and </code>0.9<code> for label </code>1<code>
* confidence on label values are relaxed. e.g. <code>labelSmoothing=0.2</code> means that we will use a
* value of <code>0.1</code> for label <code>0</code> and <code>0.9</code> for label <code>1</code>
*/
public CategoricalCrossentropy(Ops tf, boolean fromLogits, float labelSmoothing) {
this(tf, null, fromLogits, labelSmoothing, REDUCTION_DEFAULT, DEFAULT_AXIS);
@@ -170,8 +170,8 @@ public CategoricalCrossentropy(Ops tf, boolean fromLogits, float labelSmoothing)
* @param name the name of this loss
* @param fromLogits Whether to interpret predictions as a tensor of logit values
* @param labelSmoothing Float in <code>[0, 1]</code>. When <code>&gt; 0</code>, label values are smoothed, meaning the
* confidence on label values are relaxed. e.g. <code>labelSmoothing=0.2<code> means that we will use a
* value of </code>0.1<code> for label </code>0<code> and </code>0.9<code> for label </code>1<code>
* confidence on label values are relaxed. e.g. <code>labelSmoothing=0.2</code> means that we will use a
* value of <code>0.1</code> for label <code>0</code> and <code>0.9</code> for label <code>1</code>
*/
public CategoricalCrossentropy(Ops tf, String name, boolean fromLogits, float labelSmoothing) {
this(tf, name, fromLogits, labelSmoothing, REDUCTION_DEFAULT, DEFAULT_AXIS);
@@ -184,8 +184,8 @@ public CategoricalCrossentropy(Ops tf, String name, boolean fromLogits, float labelSmoothing) {
* @param tf the TensorFlow Ops
* @param fromLogits Whether to interpret predictions as a tensor of logit values
* @param labelSmoothing Float in <code>[0, 1]</code>. When <code>&gt; 0</code>, label values are smoothed, meaning the
* confidence on label values are relaxed. e.g. <code>x=0.2<code> means that we will use a
* value of </code>0.1<code> for label </code>0<code> and </code>0.9<code> for label </code>1<code>
* confidence on label values are relaxed. e.g. <code>x=0.2</code> means that we will use a
* value of <code>0.1</code> for label <code>0</code> and <code>0.9</code> for label <code>1</code>
* @param reduction Type of Reduction to apply to loss.
*/
public CategoricalCrossentropy(
@@ -200,8 +200,8 @@ public CategoricalCrossentropy(
* @param name the name of this loss
* @param fromLogits Whether to interpret predictions as a tensor of logit values
* @param labelSmoothing Float in <code>[0, 1]</code>. When <code>&gt; 0</code>, label values are smoothed, meaning the
* confidence on label values are relaxed. e.g. <code>labelSmoothing=0.2<code> means that we will use a
* value of </code>0.1<code> for label </code>0<code> and </code>0.9<code> for label </code>1<code>
* confidence on label values are relaxed. e.g. <code>labelSmoothing=0.2</code> means that we will use a
* value of <code>0.1</code> for label <code>0</code> and <code>0.9</code> for label <code>1</code>
* @param reduction Type of Reduction to apply to loss.
* @param axis The channels axis. <code>axis=-1</code> corresponds to data format `Channels Last'
* and <code>axis=1</code> corresponds to data format 'Channels First'.
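The corrected Javadoc describes the usual label-smoothing formula: a hard label `y` is mapped to `y * (1 - s) + s / numClasses`, which for `s = 0.2` and two classes yields 0.1 and 0.9 as stated. A minimal sketch of that arithmetic in plain Java (the library itself applies this to tensors, and its exact formulation may differ):

```java
// Label smoothing sketch: hard labels are pulled toward the uniform
// distribution over classes. For labelSmoothing = 0.2 and 2 classes,
// label 0 becomes 0.1 and label 1 becomes 0.9, as the Javadoc describes.
public class LabelSmoothingSketch {
    // smoothed = label * (1 - s) + s / numClasses
    static float smooth(float label, float s, int numClasses) {
        return label * (1f - s) + s / numClasses;
    }

    public static void main(String[] args) {
        System.out.println(smooth(0f, 0.2f, 2)); // ~0.1
        System.out.println(smooth(1f, 0.2f, 2)); // ~0.9
    }
}
```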
@@ -25,7 +25,7 @@
*
* <p><code>loss = maximum(1 - labels * predictions, 0)</code></p>.
*
* <p><code>labels/code> values are expected to be -1 or 1.
* <p><code>labels</code> values are expected to be -1 or 1.
* If binary (0 or 1) labels are provided, they will be converted to -1 or 1.</p>
*
* <p>Standalone usage:
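The formula and label convention in this Javadoc can be illustrated on plain float arrays. This is only a sketch of the element-wise math; the actual class operates on tensors through the Ops API:

```java
// Hinge loss sketch: loss = maximum(1 - labels * predictions, 0),
// averaged over elements. Binary {0, 1} labels are first converted
// to {-1, 1}, as the Javadoc describes.
public class HingeSketch {
    static float toSigned(float label) {
        return (label == 0f || label == 1f) ? 2f * label - 1f : label;
    }

    static float hinge(float[] labels, float[] predictions) {
        float sum = 0f;
        for (int i = 0; i < labels.length; i++) {
            sum += Math.max(1f - toSigned(labels[i]) * predictions[i], 0f);
        }
        return sum / labels.length;
    }

    public static void main(String[] args) {
        // label 0 -> -1: term is 1 - (-1 * 0.6) = 1.6
        // label 1 ->  1: term is 1 - (1 * 0.4)  = 0.6
        System.out.println(hinge(new float[] {0f, 1f}, new float[] {0.6f, 0.4f}));
    }
}
```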
@@ -218,8 +218,8 @@ private static <T extends TNumber> Operand<T> binaryCrossentropyHelper(
* @param predictions the predictions
* @param fromLogits Whether to interpret predictions as a tensor of logit values
* @param labelSmoothing Float in <code>[0, 1]</code>. When <code>&gt; 0</code>, label values are smoothed, meaning the
* confidence on label values are relaxed. e.g. <code>labelSmoothing=0.2<code> means that we will use a
* value of </code>0.1<code> for label </code>0<code> and </code>0.9<code> for label </code>1<code>
* confidence on label values are relaxed. e.g. <code>labelSmoothing=0.2</code> means that we will use a
* value of <code>0.1</code> for label <code>0</code> and <code>0.9</code> for label <code>1</code>
* @param axis the channels axis
* @param <T> the data type of the predictions and labels
* @return the categorical crossentropy loss.
@@ -43,10 +43,10 @@ public class LossesHelper {
*
* <ol type="1">
* <li>Squeezes last dim of <code>predictions</code> or <code>labels</code> if their rank
* differs by 1 (using {@link #removeSqueezableDimensions}).
* differs by 1 (using {@link #removeSqueezableDimensions}).</li>
* <li>Squeezes or expands last dim of <code>sampleWeight</code> if its rank differs by 1 from
* the new rank of <code>predictions</code>. If <code>sampleWeight</code> is scalar, it is
* kept scalar./li>
* kept scalar.</li>
* </ol>
*
* @param tf the TensorFlow Ops
@@ -80,7 +80,7 @@ public static <T extends TNumber> LossTuple<T> squeezeOrExpandDimensions(
* </code>.
* @param sampleWeights Optional sample weight(s) <code>Operand</code> whose dimensions match<code>
* prediction</code>.
* @return LossTuple of <code>prediction<s/code>, <code>labels</code> and <code>sampleWeight</code>.
* @return LossTuple of <code>predictions</code>, <code>labels</code> and <code>sampleWeight</code>.
* Each of them possibly has the last dimension squeezed, <code>sampleWeight</code> could be
* extended by one dimension. If <code>sampleWeight</code> is null, only the possibly shape modified <code>predictions</code> and <code>labels</code> are
* returned.
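The squeezing rule documented above can be illustrated on plain shape arrays. The real method works on `Operand` shapes at graph-build time; this sketch only mirrors the rank logic:

```java
import java.util.Arrays;

// Sketch of rule 1 above: drop a trailing dimension of size 1 when one
// shape's rank exceeds the other's by exactly 1.
public class SqueezeSketch {
    static int[] squeezeLastIfNeeded(int[] shape, int otherRank) {
        if (shape.length == otherRank + 1 && shape[shape.length - 1] == 1) {
            return Arrays.copyOf(shape, shape.length - 1);
        }
        return shape;
    }

    public static void main(String[] args) {
        int[] predictions = {32, 10, 1}; // rank 3, trailing dim of 1
        int[] labels = {32, 10};         // rank 2
        // The trailing 1 is squeezed so both shapes become [32, 10].
        System.out.println(Arrays.toString(squeezeLastIfNeeded(predictions, labels.length)));
    }
}
```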
@@ -290,7 +290,7 @@ private static <T extends TNumber> Operand<T> reduceWeightedLoss(
* Computes a safe mean of the losses.
*
* @param tf the TensorFlow Ops
* @param losses </code>Operand</code> whose elements contain individual loss measurements.
* @param losses <code>Operand</code> whose elements contain individual loss measurements.
* @param numElements The number of measurable elements in <code>losses</code>.
* @param <T> the data type of the losses
* @return A scalar representing the mean of <code>losses</code>. If <code>numElements</code> is
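A "safe" mean here presumably means one that does not produce NaN when there are no measurable elements (the div-no-NaN idea). In plain Java the intent looks roughly like this; the actual method divides tensor operands, so treat this only as a sketch:

```java
// Safe-mean sketch: sum the losses and divide by the element count,
// returning 0 instead of NaN when numElements is 0.
public class SafeMeanSketch {
    static float safeMean(float[] losses, long numElements) {
        float sum = 0f;
        for (float l : losses) sum += l;
        return numElements == 0 ? 0f : sum / numElements;
    }

    public static void main(String[] args) {
        System.out.println(safeMean(new float[] {1f, 2f, 3f}, 3)); // 2.0
        System.out.println(safeMean(new float[] {}, 0));           // 0.0, not NaN
    }
}
```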