@@ -15,7 +15,7 @@ out. Spring Batch provides three key interfaces to help perform bulk reading and
`ItemReader`, `ItemProcessor`, and `ItemWriter`.

[[itemReader]]
- === ItemReader
+ === `ItemReader`

Although a simple concept, an `ItemReader` is the means for providing data from many
different types of input. The most general examples include:
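The contract described in the hunk above is tiny: `read()` returns one item per call and `null` once the input is exhausted. A minimal self-contained sketch of that contract — the `ItemReader` interface is reproduced locally with checked exceptions omitted, and `InMemoryReader` is an illustrative name, not a framework class:

```java
import java.util.Iterator;
import java.util.List;

// Local stand-in for the ItemReader contract: one item per call,
// null once the input is exhausted (checked exceptions omitted).
interface ItemReader<T> {
    T read();
}

// Illustrative reader backed by a list, standing in for a file,
// database cursor, or message queue.
class InMemoryReader<T> implements ItemReader<T> {
    private final Iterator<T> items;

    InMemoryReader(List<T> items) {
        this.items = items.iterator();
    }

    @Override
    public T read() {
        return items.hasNext() ? items.next() : null; // null ends the step's input
    }
}
```

A step would keep calling `read()` until it sees `null`, handing each item on to the processor and writer.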
@@ -63,7 +63,7 @@ exception to be thrown. For example, a database `ItemReader` that is configured
query that returns 0 results returns `null` on the first invocation of read.

[[itemWriter]]
- === ItemWriter
+ === `ItemWriter`

`ItemWriter` is similar in functionality to an `ItemReader` but with inverse operations.
Resources still need to be located, opened, and closed but they differ in that an
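The inverse operation described here can be sketched the same way. Note one assumption: the chunk is shown as a plain `List` below, while newer framework versions wrap it in a dedicated `Chunk` type; `CollectingWriter` is an illustrative name:

```java
import java.util.ArrayList;
import java.util.List;

// Local stand-in for the ItemWriter contract: the inverse of ItemReader,
// receiving a whole chunk of items per call (checked exceptions omitted).
interface ItemWriter<T> {
    void write(List<? extends T> items);
}

// Illustrative writer that appends each chunk to an in-memory "resource";
// a real implementation would batch the items out to a file or database.
class CollectingWriter<T> implements ItemWriter<T> {
    final List<T> written = new ArrayList<>();

    @Override
    public void write(List<? extends T> items) {
        written.addAll(items); // one call flushes a whole chunk
    }
}
```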
@@ -93,7 +93,7 @@ one for each item. The writer can then call `flush` on the hibernate session bef
returning.

[[itemProcessor]]
- === ItemProcessor
+ === `ItemProcessor`

The `ItemReader` and `ItemWriter` interfaces are both very useful for their specific
tasks, but what if you want to insert business logic before writing? One option for both
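Inserting business logic between reading and writing is exactly what the processor contract is for: transform the input item, or return `null` to filter it out. A self-contained sketch with the interface declared locally; `NormalizingProcessor` is illustrative only:

```java
// Local stand-in for the ItemProcessor contract: transform the input item,
// or return null to filter it out of the chunk entirely.
interface ItemProcessor<I, O> {
    O process(I item);
}

// Illustrative processor inserting business logic between read and write:
// blank names are dropped, the rest are normalized to upper case.
class NormalizingProcessor implements ItemProcessor<String, String> {
    @Override
    public String process(String item) {
        return item.isBlank() ? null : item.trim().toUpperCase();
    }
}
```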
@@ -358,7 +358,7 @@ the `ItemProcessor` and only updating the
instance that is the result.

[[itemStream]]
- === ItemStream
+ === `ItemStream`

Both `ItemReaders` and `ItemWriters` serve their individual purposes well, but there is a
common concern among both of them that necessitates another interface. In general, as
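The common concern alluded to here is state: a reader or writer must be able to save its position and resume there after a restart. A simplified sketch of that lifecycle — `ExecutionContext` is modeled as a plain `Map` and the method shapes are reduced from the real interface:

```java
import java.util.Map;

// Simplified sketch of the ItemStream lifecycle: open() restores any
// previously saved state, update() records progress before each commit,
// and close() releases resources. The real ExecutionContext is modeled
// here as a plain Map.
interface ItemStream {
    void open(Map<String, Object> executionContext);
    void update(Map<String, Object> executionContext);
    void close();
}

// Illustrative restartable counter "reader".
class CountingReader implements ItemStream {
    private static final String KEY = "current.count";
    private final int max;
    private int current = 0;

    CountingReader(int max) {
        this.max = max;
    }

    @Override
    public void open(Map<String, Object> ctx) {
        if (ctx.containsKey(KEY)) {
            current = (Integer) ctx.get(KEY); // resume where the last run stopped
        }
    }

    @Override
    public void update(Map<String, Object> ctx) {
        ctx.put(KEY, current); // persisted by the framework at commit points
    }

    @Override
    public void close() {
        // nothing to release in this sketch
    }

    Integer read() {
        return current < max ? current++ : null;
    }
}
```

Simulating an interruption and restart shows why `update` matters: a second reader opened with the same context continues where the first one stopped.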
@@ -480,7 +480,7 @@ Delimited files are those in which fields are separated by a delimiter, such as
Fixed Length files have fields that are a set length.

[[fieldSet]]
- ==== The FieldSet
+ ==== The `FieldSet`

When working with flat files in Spring Batch, regardless of whether it is for input or
output, one of the most important classes is the `FieldSet`. Many architectures and
@@ -510,7 +510,7 @@ potentially unexpected ways, it can be consistent, both when handling errors cau
format exception, or when doing simple data conversions.

[[flatFileItemReader]]
- ==== FlatFileItemReader
+ ==== `FlatFileItemReader`

A flat file is any type of file that contains at most two-dimensional (tabular) data.
Reading flat files in the Spring Batch framework is facilitated by the class called
@@ -560,7 +560,7 @@ the input resource does not exist. Otherwise, it logs the problem and continues.
|===============

[[lineMapper]]
- ===== LineMapper
+ ===== `LineMapper`

As with `RowMapper`, which takes a low-level construct such as `ResultSet` and returns
an `Object`, flat file processing requires the same construct to convert a `String` line
@@ -585,7 +585,7 @@ gets you halfway there. The line must be tokenized into a `FieldSet`, which can
mapped to an object, as described later in this document.

[[lineTokenizer]]
- ===== LineTokenizer
+ ===== `LineTokenizer`

An abstraction for turning a line of input into a `FieldSet` is necessary because there
can be many formats of flat file data that need to be converted to a `FieldSet`. In
@@ -614,7 +614,7 @@ width". The width of each field must be defined for each record type.
tokenizers should be used on a particular line by checking against a pattern.

[[fieldSetMapper]]
- ===== FieldSetMapper
+ ===== `FieldSetMapper`

The `FieldSetMapper` interface defines a single method, `mapFieldSet`, which takes a
`FieldSet` object and maps its contents to an object. This object may be a custom DTO, a
@@ -634,7 +634,7 @@ public interface FieldSetMapper<T> {
The pattern used is the same as the `RowMapper` used by `JdbcTemplate`.

[[defaultLineMapper]]
- ===== DefaultLineMapper
+ ===== `DefaultLineMapper`

Now that the basic interfaces for reading in flat files have been defined, it becomes
clear that three basic steps are required:
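The steps this section names (read a line, tokenize it, map the fields to an object) compose in exactly the way `DefaultLineMapper` does. A simplified, self-contained sketch — the `FieldSet` is reduced to a `List<String>` and all types are declared locally as stand-ins for the framework classes:

```java
import java.util.List;

// Simplified stand-ins for the flat-file interfaces: the FieldSet is
// reduced to a List<String> to keep the sketch self-contained.
interface LineTokenizer {
    List<String> tokenize(String line);
}

interface FieldSetMapper<T> {
    T mapFieldSet(List<String> fields);
}

// Composition in the style of DefaultLineMapper: tokenize the raw line,
// then hand the resulting fields to the mapper.
class SimpleLineMapper<T> {
    private final LineTokenizer tokenizer;
    private final FieldSetMapper<T> fieldSetMapper;

    SimpleLineMapper(LineTokenizer tokenizer, FieldSetMapper<T> fieldSetMapper) {
        this.tokenizer = tokenizer;
        this.fieldSetMapper = fieldSetMapper;
    }

    T mapLine(String line) {
        return fieldSetMapper.mapFieldSet(tokenizer.tokenize(line));
    }
}
```

Because both local interfaces are functional, a delimited tokenizer and a mapper can be supplied as lambdas when wiring the mapper up.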
@@ -1039,7 +1039,7 @@ file. `FlatFileFormatException` is thrown by implementations of the `LineTokeniz
interface and indicates a more specific error encountered while tokenizing.

[[incorrectTokenCountException]]
- ====== IncorrectTokenCountException
+ ====== `IncorrectTokenCountException`

Both `DelimitedLineTokenizer` and `FixedLengthLineTokenizer` have the ability to specify
column names that can be used for creating a `FieldSet`. However, if the number of column
@@ -1064,7 +1064,7 @@ Because the tokenizer was configured with 4 column names but only 3 tokens were
the file, an `IncorrectTokenCountException` was thrown.

[[incorrectLineLengthException]]
- ====== IncorrectLineLengthException
+ ====== `IncorrectLineLengthException`

Files formatted in a fixed-length format have additional requirements when parsing
because, unlike a delimited format, each column must strictly adhere to its predefined
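The 4-columns-versus-3-tokens mismatch described above is easy to reproduce. A hypothetical sketch of the check that leads to an `IncorrectTokenCountException` — `NamedDelimitedTokenizer` is an illustrative name, and a plain `IllegalStateException` stands in for the framework's exception type:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of the validation behind IncorrectTokenCountException:
// a delimited tokenizer configured with column names insists that every
// line produces exactly that many tokens.
class NamedDelimitedTokenizer {
    private final String[] names;

    NamedDelimitedTokenizer(String... names) {
        this.names = names;
    }

    Map<String, String> tokenize(String line) {
        String[] tokens = line.split(",", -1); // -1 keeps trailing empty tokens
        if (tokens.length != names.length) {
            // the framework throws IncorrectTokenCountException here
            throw new IllegalStateException("expected " + names.length
                    + " tokens but line had " + tokens.length);
        }
        Map<String, String> fieldSet = new LinkedHashMap<>();
        for (int i = 0; i < names.length; i++) {
            fieldSet.put(names[i], tokens[i]);
        }
        return fieldSet;
    }
}
```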
@@ -1110,14 +1110,14 @@ line lengths when tokenizing the line. A `FieldSet` is now correctly created and
returned. However, it contains only empty tokens for the remaining values.

[[flatFileItemWriter]]
- ==== FlatFileItemWriter
+ ==== `FlatFileItemWriter`

Writing out to flat files has the same problems and issues that reading in from a file
must overcome. A step must be able to write either delimited or fixed length formats in a
transactional manner.

[[lineAggregator]]
- ===== LineAggregator
+ ===== `LineAggregator`

Just as the `LineTokenizer` interface is necessary to take an item and turn it into a
`String`, file writing must have a way to aggregate multiple fields into a single string
@@ -1138,7 +1138,7 @@ The `LineAggregator` is the logical opposite of `LineTokenizer`. `LineTokenizer
`String`.

[[PassThroughLineAggregator]]
- ====== PassThroughLineAggregator
+ ====== `PassThroughLineAggregator`

The most basic implementation of the `LineAggregator` interface is the
`PassThroughLineAggregator`, which assumes that the object is already a string or that
@@ -1205,7 +1205,7 @@ public FlatFileItemWriter itemWriter() {
----

[[FieldExtractor]]
- ===== FieldExtractor
+ ===== `FieldExtractor`

The preceding example may be useful for the most basic uses of writing to a file.
However, most users of the `FlatFileItemWriter` have a domain object that needs to be
@@ -1242,7 +1242,7 @@ of the provided object, which can then be written out with a delimiter between t
elements or as part of a fixed-width line.

[[PassThroughFieldExtractor]]
- ====== PassThroughFieldExtractor
+ ====== `PassThroughFieldExtractor`

There are many cases where a collection, such as an array, `Collection`, or `FieldSet`,
needs to be written out. "Extracting" an array from one of these collection types is very
@@ -1252,7 +1252,7 @@ the object passed in is not a type of collection, then the `PassThroughFieldExtr
returns an array containing solely the item to be extracted.

[[BeanWrapperFieldExtractor]]
- ====== BeanWrapperFieldExtractor
+ ====== `BeanWrapperFieldExtractor`

As with the `BeanWrapperFieldSetMapper` described in the file reading section, it is
often preferable to configure how to convert a domain object to an object array, rather
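The contrast between the two extractors can be sketched in a few lines: a pass-through variant that merely unpacks a collection or wraps a single item, and a property-based variant that pulls named values from a domain object. The explicit accessors below are a simplified stand-in for the reflective property lookup that `BeanWrapperFieldExtractor` performs; all types are declared locally:

```java
import java.util.Collection;

// Local stand-in for the FieldExtractor contract: turn one item into the
// Object[] that a line aggregator will join into a line of output.
interface FieldExtractor<T> {
    Object[] extract(T item);
}

// Pass-through behavior: unpack a collection into an array, otherwise
// wrap the single item.
class PassThroughExtractor implements FieldExtractor<Object> {
    @Override
    public Object[] extract(Object item) {
        if (item instanceof Collection) {
            return ((Collection<?>) item).toArray();
        }
        return new Object[] { item };
    }
}

// Property-based behavior, simplified: explicit accessors stand in for
// the reflective lookup of BeanWrapperFieldExtractor.
class Customer {
    final String name;
    final int age;

    Customer(String name, int age) {
        this.name = name;
        this.age = age;
    }
}

class CustomerFieldExtractor implements FieldExtractor<Customer> {
    @Override
    public Object[] extract(Customer c) {
        return new Object[] { c.name, c.age };
    }
}
```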
@@ -1474,7 +1474,7 @@ With an introduction to OXM and how one can use XML fragments to represent recor
can now more closely examine readers and writers.

[[StaxEventItemReader]]
- ==== StaxEventItemReader
+ ==== `StaxEventItemReader`

The `StaxEventItemReader` configuration provides a typical setup for the processing of
records from an XML input stream. First, consider the following set of XML records that
@@ -1509,8 +1509,7 @@ To be able to process the XML records, the following is needed:

* Root Element Name: The name of the root element of the fragment that constitutes the
object to be mapped. The example configuration demonstrates this with the value of trade.
- * Resource: A Spring Resource that represents the file to be
- read.
+ * Resource: A Spring Resource that represents the file to read.
* `Unmarshaller`: An unmarshalling facility provided by Spring OXM for mapping the XML
fragment to an object.
@@ -1631,7 +1630,7 @@ while (hasNext) {
----

[[StaxEventItemWriter]]
- ==== StaxEventItemWriter
+ ==== `StaxEventItemWriter`

Output works symmetrically to input. The `StaxEventItemWriter` needs a `Resource`, a
marshaller, and a `rootTagName`. A Java object is passed to a marshaller (typically a
@@ -1748,6 +1747,64 @@ trade.setCustomer("Customer1");
staxItemWriter.write(trade);
----

+ [[jsonReadingWriting]]
+ === JSON Item Readers
+
+ Spring Batch provides support for reading JSON resources in the following format:
+
+ [source, json]
+ ----
+ [
+   {
+     "isin": "123",
+     "quantity": 1,
+     "price": 1.2,
+     "customer": "foo"
+   },
+   {
+     "isin": "456",
+     "quantity": 2,
+     "price": 1.4,
+     "customer": "bar"
+   }
+ ]
+ ----
+
+ It is assumed that the JSON resource is an array of JSON objects corresponding to
+ individual items. Spring Batch is not tied to any particular JSON library.
+
+ [[JsonItemReader]]
+ ==== `JsonItemReader`
+
+ The `JsonItemReader` delegates JSON parsing and binding to implementations of the
+ `org.springframework.batch.item.json.JsonObjectReader` interface. This interface
+ is intended to be implemented by using a streaming API to read JSON objects
+ in chunks. Two implementations are currently provided:
+
+ * link:$$https://github.com/FasterXML/jackson$$[Jackson] through the `org.springframework.batch.item.json.JacksonJsonObjectReader`
+ * link:$$https://github.com/google/gson$$[Gson] through the `org.springframework.batch.item.json.GsonJsonObjectReader`
+
+ To be able to process JSON records, the following is needed:
+
+ * `Resource`: A Spring Resource that represents the JSON file to read.
+ * `JsonObjectReader`: A JSON object reader to parse and bind JSON objects to items.
+
+ The following example shows how to define a `JsonItemReader` that works with the
+ previous JSON resource `org/springframework/batch/item/json/trades.json` and a
+ `JsonObjectReader` based on Jackson:
+
+ [source, java]
+ ----
+ @Bean
+ public JsonItemReader<Trade> jsonItemReader() {
+     return new JsonItemReaderBuilder<Trade>()
+             .jsonObjectReader(new JacksonJsonObjectReader<>(Trade.class))
+             .resource(new ClassPathResource("trades.json"))
+             .name("tradeJsonItemReader")
+             .build();
+ }
+ ----
+

[[multiFileInput]]
=== Multi-File Input

@@ -1835,7 +1892,7 @@ which is the `Foo` with an ID of 3. The results of these reads are written out a
maintaining references to them).

[[JdbcCursorItemReader]]
- ===== JdbcCursorItemReader
+ ===== `JdbcCursorItemReader`

`JdbcCursorItemReader` is the JDBC implementation of the cursor-based technique. It works
directly with a `ResultSet` and requires an SQL statement to run against a connection
@@ -2250,7 +2307,7 @@ fetches a portion of the results. We refer to this portion as a page. Each query
specify the starting row number and the number of rows that we want returned in the page.

[[JdbcPagingItemReader]]
- ===== JdbcPagingItemReader
+ ===== `JdbcPagingItemReader`

One implementation of a paging `ItemReader` is the `JdbcPagingItemReader`. The
`JdbcPagingItemReader` needs a `PagingQueryProvider` responsible for providing the SQL
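The page-at-a-time contract described above (each query names a starting row and a page size) can be mimicked in memory to show the control flow. In this sketch a `subList` call stands in for each SQL page query; the class name and types are illustrative, not framework API:

```java
import java.util.Collections;
import java.util.Iterator;
import java.util.List;

// Illustrative paging reader: instead of holding a cursor open, it runs
// one "query" per page, each bounded by a start index and a page size.
class InMemoryPagingReader<T> {
    private final List<T> table;   // stands in for the database table
    private final int pageSize;
    private int nextRow = 0;
    private Iterator<T> page = Collections.emptyIterator();

    InMemoryPagingReader(List<T> table, int pageSize) {
        this.table = table;
        this.pageSize = pageSize;
    }

    T read() {
        if (!page.hasNext() && nextRow < table.size()) {
            int end = Math.min(nextRow + pageSize, table.size());
            page = table.subList(nextRow, end).iterator(); // one query per page
            nextRow = end;
        }
        return page.hasNext() ? page.next() : null;
    }
}
```

The caller still sees the plain one-item-at-a-time `read()` contract; only the fetching strategy underneath changes.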
@@ -2339,7 +2396,7 @@ match the name of the named parameter. If you use a traditional '?' placeholder,
key for each entry should be the number of the placeholder, starting with 1.

[[JpaPagingItemReader]]
- ===== JpaPagingItemReader
+ ===== `JpaPagingItemReader`

Another implementation of a paging `ItemReader` is the `JpaPagingItemReader`. JPA does
not have a concept similar to the Hibernate `StatelessSession`, so we have to use other
@@ -2645,7 +2702,7 @@ implementations. This section shows, by using a simple example, how to create a
writer restartable.

[[customReader]]
- ==== Custom ItemReader Example
+ ==== Custom `ItemReader` Example

For the purpose of this example, we create a simple `ItemReader` implementation that
reads from a provided list. We start by implementing the most basic contract of
@@ -2691,7 +2748,7 @@ assertNull(itemReader.read());
----

[[restartableReader]]
- ===== Making the ItemReader Restartable
+ ===== Making the `ItemReader` Restartable

The final challenge is to make the `ItemReader` restartable. Currently, if processing is
interrupted and begins again, the `ItemReader` must start at the beginning. This is
@@ -2780,7 +2837,7 @@ output), a more unique name is needed. For this reason, many of the Spring Batch
key name be overridden.

[[customWriter]]
- ==== Custom ItemWriter Example
+ ==== Custom `ItemWriter` Example

Implementing a Custom `ItemWriter` is similar in many ways to the `ItemReader` example
above but differs in enough ways as to warrant its own example. However, adding
@@ -2805,7 +2862,7 @@ public class CustomItemWriter<T> implements ItemWriter<T> {
----

[[restartableWriter]]
- ===== Making the ItemWriter Restartable
+ ===== Making the `ItemWriter` Restartable

To make the `ItemWriter` restartable, we would follow the same process as for the
`ItemReader`, adding and implementing the `ItemStream` interface to synchronize the
@@ -2876,7 +2933,7 @@ Batch provides a `ClassifierCompositeItemWriterBuilder` to construct an instance
`ClassifierCompositeItemWriter`.

[[classifierCompositeItemProcessor]]
- ===== ClassifierCompositeItemProcessor
+ ===== `ClassifierCompositeItemProcessor`
The `ClassifierCompositeItemProcessor` is an `ItemProcessor` that calls one of a
collection of `ItemProcessor` implementations, based on a router pattern implemented
through the provided `Classifier`. Spring Batch provides a
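The router pattern described in this last hunk is compact enough to sketch: a `Classifier` picks which delegate processor handles each item. Both interfaces are declared locally; only the shape of the composition mirrors `ClassifierCompositeItemProcessor`:

```java
// Local stand-ins for the two contracts involved in the router pattern.
interface ItemProcessor<I, O> {
    O process(I item);
}

interface Classifier<C, T> {
    T classify(C classifiable);
}

// The composite routes each item through the classifier to exactly one
// of the delegate processors, then returns that delegate's result.
class ClassifierProcessor<I, O> implements ItemProcessor<I, O> {
    private final Classifier<I, ItemProcessor<I, O>> classifier;

    ClassifierProcessor(Classifier<I, ItemProcessor<I, O>> classifier) {
        this.classifier = classifier;
    }

    @Override
    public O process(I item) {
        return classifier.classify(item).process(item);
    }
}
```

Because the local interfaces are functional, the classifier and delegates can all be supplied as lambdas when assembling the composite.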