
Commit c4e211b
Author: Bruno Volpato
Fix a bunch of no-op typos (#29294)
1 parent c1b83d2

47 files changed: 79 additions & 79 deletions


examples/java/src/main/java/org/apache/beam/examples/complete/AutoComplete.java

Lines changed: 3 additions & 3 deletions
@@ -125,7 +125,7 @@ public PCollection<KV<String, List<CompletionCandidate>>> expand(PCollection<Str
 // First count how often each token appears.
 .apply(Count.perElement())

-// Map the KV outputs of Count into our own CompletionCandiate class.
+// Map the KV outputs of Count into our own CompletionCandidate class.
 .apply(
 "CreateCompletionCandidates",
 ParDo.of(
@@ -168,7 +168,7 @@ public PCollection<KV<String, List<CompletionCandidate>>> expand(
 // For each completion candidate, map it to all prefixes.
 .apply(ParDo.of(new AllPrefixes(minPrefix)))

-// Find and return the top candiates for each prefix.
+// Find and return the top candidates for each prefix.
 .apply(
 Top.<String, CompletionCandidate>largestPerKey(candidatesPerPrefix)
 .withHotKeyFanout(new HotKeyFanout()));
@@ -227,7 +227,7 @@ public PCollectionList<KV<String, List<CompletionCandidate>>> expand(
 .apply(Partition.of(2, new KeySizePartitionFn()));
 } else {
 // If a candidate is in the top N for prefix a...b, it must also be in the top
-// N for a...bX for every X, which is typlically a much smaller set to consider.
+// N for a...bX for every X, which is typically a much smaller set to consider.
 // First, compute the top candidate for prefixes of size at least minPrefix + 1.
 PCollectionList<KV<String, List<CompletionCandidate>>> larger =
 input.apply(new ComputeTopRecursive(candidatesPerPrefix, minPrefix + 1));
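
The comment in the last hunk states the key optimization behind ComputeTopRecursive: the top N for a prefix a...b can be assembled from the top-N lists of its one-character extensions a...bX, rather than from every word matching the prefix. A minimal plain-Python sketch of that idea (illustrative only, not Beam code; the function name and data shapes are ours):

```python
import heapq
from collections import defaultdict

def top_per_prefix(counts, n, min_prefix=1):
    """counts: word -> frequency. Returns prefix -> top-n (count, word) lists.

    Working from the longest prefixes down, the candidates for prefix p are
    words equal to p plus the already-computed top-n lists of p's
    one-character extensions, instead of every word under p.
    """
    max_len = max(len(w) for w in counts)
    tops = {}
    prev = {}  # top lists for prefixes one character longer than `size`
    for size in range(max_len, min_prefix - 1, -1):
        cur = defaultdict(list)
        for w, c in counts.items():
            if len(w) == size:  # the word is itself a prefix of this length
                cur[w].append((c, w))
        for longer, lst in prev.items():  # fold in children's top lists
            cur[longer[:size]].extend(lst)
        cur = {p: heapq.nlargest(n, lst) for p, lst in cur.items()}
        tops.update(cur)
        prev = cur
    return tops
```

For example, with counts `{"app": 5, "apple": 3, "apply": 2, "ant": 4}` and n=2, the top list for `"a"` is built from the top lists of `"ap"` and `"an"` only.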

examples/notebooks/beam-ml/image_processing_tensorflow.ipynb

Lines changed: 1 addition & 1 deletion
@@ -50,7 +50,7 @@
 {
 "cell_type": "markdown",
 "source": [
-"Image Processing is a machine learning technique to read, analyze and extract meaningful information from images. It involves multiple steps such as applying various preprocessing fuctions, getting predictions from a model, storing the predictions in a useful format, etc. Apache Beam is a suitable tool to handle these tasks and build a structured workflow. This notebook demonstrates the use of Apache Beam in image processing and performs the following:\n",
+"Image Processing is a machine learning technique to read, analyze and extract meaningful information from images. It involves multiple steps such as applying various preprocessing functions, getting predictions from a model, storing the predictions in a useful format, etc. Apache Beam is a suitable tool to handle these tasks and build a structured workflow. This notebook demonstrates the use of Apache Beam in image processing and performs the following:\n",
 "* Import and preprocess the CIFAR-10 dataset\n",
 "* Train a TensorFlow model to classify images\n",
 "* Store the model in Google Cloud and create a model handler\n",

examples/notebooks/beam-ml/nlp_tensorflow_streaming.ipynb

Lines changed: 1 addition & 1 deletion
@@ -52,7 +52,7 @@
 "id": "m0xDYq_X-M18"
 },
 "source": [
-"Natural Language Processing or NLP is a field of Artifical Intelligence that enables computers to interpret and understand human language. It involves multiple steps such as applying various preprocessing fuctions, getting predictions from a model, storing the predictions in a useful format, etc.\n",
+"Natural Language Processing or NLP is a field of Artifical Intelligence that enables computers to interpret and understand human language. It involves multiple steps such as applying various preprocessing functions, getting predictions from a model, storing the predictions in a useful format, etc.\n",
 "Sentiment Analysis is a popular use case of NLP, which allows computers to analyze the sentiment of a text. This notebook demonstrates the use of streaming pipelines in NLP.\n",
 "* Extracts comments using [Youtube API](https://developers.google.com/youtube/v3) and publishing them to Pub/Sub\n",
 "* Trains a TensorFlow model to predict the sentiment of text\n",

playground/backend/internal/environment/environment_service.go

Lines changed: 1 addition & 1 deletion
@@ -223,7 +223,7 @@ func createExecutorConfig(apacheBeamSdk pb.Sdk, configPath string) (*ExecutorCon
 case pb.Sdk_SDK_JAVA:
 args, err := ConcatBeamJarsToString()
 if err != nil {
-return nil, fmt.Errorf("error during proccessing jars: %s", err.Error())
+return nil, fmt.Errorf("error during processing jars: %s", err.Error())
 }
 executorConfig.CompileArgs = append(executorConfig.CompileArgs, args)
 executorConfig.RunArgs[1] = fmt.Sprintf("%s%s", executorConfig.RunArgs[1], args)

runners/flink/src/main/java/org/apache/beam/runners/flink/translation/wrappers/streaming/stableinput/BufferingDoFnRunner.java

Lines changed: 1 addition & 1 deletion
@@ -131,7 +131,7 @@ public static <InputT, OutputT> BufferingDoFnRunner<InputT, OutputT> create(
 */
 private final @Nullable Supplier<Locker> locker;
 /**
-* A selector of key. When non-null, this must be set to the keyed state beckend before buffering.
+* A selector of key. When non-null, this must be set to the keyed state backend before buffering.
 */
 private final @Nullable Function<InputT, Object> keySelector;
 /** Callable to notify about possibility to flush bundle. */

runners/google-cloud-dataflow-java/src/main/java/org/apache/beam/runners/dataflow/DataflowRunner.java

Lines changed: 1 addition & 1 deletion
@@ -1744,7 +1744,7 @@ public void leaveCompositeTransform(TransformHierarchy.Node node) {}
 }

 /**
-* Returns true if the passed in {@link PCollection} needs to be materialiazed using an indexed
+* Returns true if the passed in {@link PCollection} needs to be materialized using an indexed
 * format.
 */
 boolean doesPCollectionRequireIndexedFormat(PCollection<?> pcol) {

runners/google-cloud-dataflow-java/src/main/java/org/apache/beam/runners/dataflow/options/DataflowPipelineOptions.java

Lines changed: 1 addition & 1 deletion
@@ -174,7 +174,7 @@ public interface DataflowPipelineOptions
 @Description("The customized dataflow worker jar")
 String getDataflowWorkerJar();

-void setDataflowWorkerJar(String dataflowWorkerJafr);
+void setDataflowWorkerJar(String dataflowWorkerJar);

 /** Set of available Flexible Resource Scheduling goals. */
 enum FlexResourceSchedulingGoal {

runners/google-cloud-dataflow-java/src/main/java/org/apache/beam/runners/dataflow/util/RandomAccessData.java

Lines changed: 3 additions & 3 deletions
@@ -200,7 +200,7 @@ public int commonPrefixLength(RandomAccessData o1, RandomAccessData o2) {
 * returned.
 *
 * <p>The {@link UnsignedLexicographicalComparator} supports comparing {@link RandomAccessData}
-* with support for positive infinitiy.
+* with support for positive infinity.
 */
 public RandomAccessData increment() throws IOException {
 RandomAccessData copy = copy();
@@ -271,7 +271,7 @@ public void write(byte[] b, int offset, int length) throws IOException {

 /**
 * Returns an output stream which writes to the backing buffer from the current position. Note
-* that the internal buffer will grow as required to accomodate all data written.
+* that the internal buffer will grow as required to accommodate all data written.
 */
 public OutputStream asOutputStream() {
 return outputStream;
@@ -350,7 +350,7 @@ private void ensureCapacity(int minCapacity) {
 return;
 }

-// Try to double the size of the buffer, if thats not enough, just use the new capacity.
+// Try to double the size of the buffer, if that's not enough, just use the new capacity.
 // Note that we use Math.min(long, long) to not cause overflow on the multiplication.
 int newCapacity = (int) Math.min(Integer.MAX_VALUE - 8, buffer.length * 2L);
 if (newCapacity < minCapacity) {
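
The growth rule in the ensureCapacity hunk above (try to double, clamp so the doubled value cannot overflow a 32-bit int, and fall back to the requested size when doubling is not enough) can be sketched outside Java. A small illustrative Python version, where `INT_MAX` stands in for Java's `Integer.MAX_VALUE`:

```python
INT_MAX = 2**31 - 1  # stand-in for Java's Integer.MAX_VALUE

def grown_capacity(current_length, min_capacity):
    # Try to double the buffer; the min() mirrors Math.min(long, long),
    # which keeps the doubled value from overflowing a 32-bit int.
    new_capacity = min(INT_MAX - 8, current_length * 2)
    # If doubling is not enough, just use the requested capacity.
    if new_capacity < min_capacity:
        new_capacity = min_capacity
    return new_capacity
```

Amortized doubling keeps repeated small writes O(1) per byte on average, while the fallback handles a single write larger than twice the current buffer.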

runners/google-cloud-dataflow-java/src/main/java/org/apache/beam/runners/dataflow/util/TimeUtil.java

Lines changed: 1 addition & 1 deletion
@@ -50,7 +50,7 @@ private TimeUtil() {} // Non-instantiable.
 private static final Pattern TIME_PATTERN =
 Pattern.compile("(\\d{4})-(\\d{2})-(\\d{2})T(\\d{2}):(\\d{2}):(\\d{2})(?:\\.(\\d+))?Z");

-/** Converts a {@link ReadableInstant} into a Dateflow API time value. */
+/** Converts a {@link ReadableInstant} into a Dataflow API time value. */
 public static String toCloudTime(ReadableInstant instant) {
 // Note that since Joda objects use millisecond resolution, we always
 // produce either no fractional seconds or fractional seconds with
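
Per the comment in toCloudTime and the TIME_PATTERN regex above, the Dataflow API time value is an RFC 3339-style UTC timestamp with either no fractional seconds or millisecond-resolution fractional seconds. A rough Python equivalent of that format (the function name is ours, not Beam's, and this is a sketch rather than the actual implementation):

```python
from datetime import datetime, timezone

def to_cloud_time(dt):
    # Normalize to UTC, matching the trailing "Z" in TIME_PATTERN.
    dt = dt.astimezone(timezone.utc)
    base = dt.strftime("%Y-%m-%dT%H:%M:%S")
    millis = dt.microsecond // 1000  # millisecond resolution, like Joda
    # Emit fractional seconds only when they are nonzero.
    return f"{base}.{millis:03d}Z" if millis else base + "Z"
```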

runners/google-cloud-dataflow-java/worker/src/main/java/org/apache/beam/runners/dataflow/worker/DataflowExecutionStateRegistry.java

Lines changed: 1 addition & 1 deletion
@@ -60,7 +60,7 @@ public DataflowOperationContext.DataflowExecutionState getState(
 * Get an existing state or create a {@link DataflowOperationContext.DataflowExecutionState} that
 * represents the consumption of some kind of IO, such as reading of Side Input, or Shuffle data.
 *
-* <p>An IO-related ExcecutionState may represent: * A Side Input collection as declaringStep +
+* <p>An IO-related ExecutionState may represent: * A Side Input collection as declaringStep +
 * inputIndex. The consumption of the side input is represented by (declaringStep, inputIndex,
 * requestingStepName), where requestingStepName is the step that causes the IO to occur. * A
 * Shuffle IO as the GBK step for that shuffle. The consumption of the side input is represented
