diff --git a/content/community/projects.md b/content/community/projects.md
index 7d459803ac..8f25984c45 100644
--- a/content/community/projects.md
+++ b/content/community/projects.md
@@ -66,7 +66,7 @@ Here are the **most active** contributions from the community.
* [Nagios Plugin](https://github.com/basho-labs/riak_nagios) maintained by the Basho community
* [Advanced Nagios Plugins Collection](https://github.com/harisekhon/nagios-plugins) contains many additional Nagios plugins for monitoring Riak
* [New Relic Plugin](https://github.com/basho/riak_newrelic) serves node statistics of a Riak Node to the New Relic APM System
-* [Yokozuna Monitor](https://github.com/basho-labs/ruby-yz-monitor) is a ruby application to monitor your Riak Search activity.
+* [Yokozuna Monitor](https://github.com/basho-labs/ruby-yz-monitor) is a Ruby application to monitor your Riak search activity.
* [riak-statsd in golang](https://github.com/jjmalina/riak-statsd) which monitors Riak KV and pushes to statsd
* [Gmond Python Modules for Riak](https://github.com/ganglia/gmond_python_modules) is a Ganglia Module for connecting to Riak KV
* [Riak Key List Utility](https://github.com/basho-labs/riak-key-list-util) is a console utility script for per-vnode key counting, siblings logging and more
@@ -125,14 +125,14 @@ Some projects have lost its maintainer with time. Here are all projects that hav
* [Sample HA Proxy Configuration for Protocol Buffers Interface](http://lists.basho.com/pipermail/riak-users_lists.basho.com/2011-May/004388.html) (courtesy of Bob Feldbauer)
* [Storing Apache Logs in Riak via Fluentd](http://docs.fluentd.org/articles/apache-to-riak)
* [yakriak](http://github.com/seancribbs/yakriak) --- Riak-powered Ajax-polling chatroom
-* [riaktant](https://github.com/basho/riaktant) --- A full-blown NodejS app that stores and makes syslog messages searchable in Riak Search
+* [riaktant](https://github.com/basho/riaktant) --- A full-blown Node.js app that stores and makes syslog messages searchable in Riak search
* [selusuh](https://github.com/OJ/selusuh) --- Riak application that presents JSON slide decks (thanks, [OJ](http://twitter.com/thecolonial)!)
* [Rekon](https://github.com/adamhunter/rekon) --- A Riak data browser, built as a totally self-contained Riak application
* [Slideblast](https://github.com/rustyio/SlideBlast) --- Share and control slide presentation for the web
* [riak_php_app](http://github.com/schofield/riak_php_app) --- A small PHP app that shows some basic usage of the Riak PHP library
* [riak-url-shortener](http://github.com/seancribbs/riak-url-shortener) --- A small Ruby app (with Sinatra) that creates short URLs and stores them in Riak
* [wriaki](https://github.com/basho-labs/wriaki) --- A wiki app backed by Riak
-* [riagi](https://github.com/basho-labs/riagi) --- A simple imgur.com clone built using Riak, Django, and Riak Search
+* [riagi](https://github.com/basho-labs/riagi) --- A simple imgur.com clone built using Riak, Django, and Riak search
* [riak-session-manager](https://github.com/jbrisbin/riak-session-manager) --- A Riak-backed Tomcat Session Manager
* [riak_id](https://github.com/seancribbs/riak_id) --- A clone of Twitter's Snowflake, built on riak_core
* [riak_zab](https://github.com/jtuple/riak_zab) --- An implementation of the Zookeeper protocol on top of Riak Core
diff --git a/content/community/reporting-bugs.md b/content/community/reporting-bugs.md
index cea4576d23..72be2e2edd 100644
--- a/content/community/reporting-bugs.md
+++ b/content/community/reporting-bugs.md
@@ -34,7 +34,7 @@ filing, please attempt to do the following:
* [Riak issues](https://github.com/basho/riak/issues)
* [Riak Core issues](https://github.com/basho/riak_core/issues)
* [Riak KV issues](https://github.com/basho/riak_kv/issues)
- * [Riak Search issues](https://github.com/basho/riak_search/issues)
+ * [Riak search issues](https://github.com/basho/riak_search/issues)
* [Bitcask issues](https://github.com/basho/bitcask/issues)
* [eLevelDB issues](https://github.com/basho/eleveldb/issues)
* Search the [Riak Mailing List Archives](http://riak.markmail.org/) for
diff --git a/content/riak/kv/2.0.0/configuring/reference.md b/content/riak/kv/2.0.0/configuring/reference.md
index aa1cb175de..d862d393e9 100644
--- a/content/riak/kv/2.0.0/configuring/reference.md
+++ b/content/riak/kv/2.0.0/configuring/reference.md
@@ -273,7 +273,7 @@ stored
search.root_dir |
-The root directory for Riak Search, under which index data and
+ | The root directory for Riak search, under which index data and
configuration is stored. |
./data/yz |
@@ -1386,7 +1386,7 @@ Configurable parameters for intra-cluster, i.e. inter-node, [handoff][cluster op
handoff.max_rejects |
The maximum number of times that a secondary system within Riak,
-such as Riak Search, can block handoff
+such as Riak search, can block handoff
of primary key/value data. The approximate maximum duration that a vnode
can be blocked can be determined by multiplying this setting by
vnode_management_timer . If you want to prevent handoff from
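The blocking ceiling described in this hunk is a simple product of the two settings. A minimal Python sketch (the numeric values below are illustrative only, not defaults quoted from this page):

```python
def max_blocked_seconds(handoff_max_rejects: int, vnode_management_timer_s: float) -> float:
    """Approximate the longest a vnode's handoff can stay blocked:
    each rejection by a secondary system (e.g. Riak search) defers
    handoff by roughly one vnode_management_timer interval."""
    return handoff_max_rejects * vnode_management_timer_s

# e.g. 6 rejections with a 10-second management timer
print(max_blocked_seconds(6, 10))  # → 60
```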
@@ -1630,7 +1630,7 @@ if the JMX server crashes. |
> **Please Note:**
>
-> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
Riak's strong consistency feature has a variety of tunable parameters
that allow you to enable and disable strong consistency, modify the
@@ -2053,7 +2053,7 @@ only in Riak Enterprise 2.0 and later.
#### Upgrading Riak Search with `advanced.config`
-If you are upgrading to Riak 2.x and wish to upgrade to the new [Riak Search][use ref search]\(codename Yokozuna), you will need to enable
+If you are upgrading to Riak 2.x and wish to upgrade to the new [Riak search][use ref search]\(codename Yokozuna), you will need to enable
legacy Search while the upgrade is underway. You can add the following
snippet to your `advanced.config` configuration to do so:
diff --git a/content/riak/kv/2.0.0/configuring/search.md b/content/riak/kv/2.0.0/configuring/search.md
index cf60c18cae..a1abe3a4ec 100644
--- a/content/riak/kv/2.0.0/configuring/search.md
+++ b/content/riak/kv/2.0.0/configuring/search.md
@@ -26,9 +26,9 @@ aliases:
> **Note on Search 2.0 vs. Legacy Search**
>
-> This document refers to the new Riak Search 2.0 with
+> This document refers to the new Riak search 2.0 with
[Solr](http://lucene.apache.org/solr/) integration (codenamed
-Yokozuna). For information about the deprecated Riak Search, visit [the old Riak Search Settings](http://docs.basho.com/riak/1.4.8/ops/advanced/configs/search/).
+Yokozuna). For information about the deprecated Riak search, visit [the old Riak search Settings](http://docs.basho.com/riak/1.4.8/ops/advanced/configs/search/).
This document covers Riak's Search subsystem from an
operational perspective. If you are looking for more developer-focused
@@ -41,7 +41,7 @@ docs, we recommend the following:
## Enabling Riak Search
-Although Riak Search is integrated into Riak and requires no special
+Although Riak search is integrated into Riak and requires no special
installation, it is not enabled by default. You must enable it in every
node's [configuration files][config reference] as follows:
@@ -68,7 +68,7 @@ optional. A list of these parameters can also be found in our
Field | Default | Valid values | Description
:-----|:--------|:-------------|:-----------
`search` | `off` | `on` or `off` | Enable or disable Search
-`search.anti_entropy.data_dir` | `./data/yz_anti_entropy` | Directory | The directory in which Riak Search stores files related to [active anti-entropy][glossary aae]
+`search.anti_entropy.data_dir` | `./data/yz_anti_entropy` | Directory | The directory in which Riak search stores files related to [active anti-entropy][glossary aae]
`search.root_dir` | `./data/yz` | Directory | The root directory in which index data and configuration is stored
`search.solr.start_timeout` | `30s` | Integer with time units (eg. 2m) | How long Riak will wait for Solr to start (attempts twice before shutdown). Values lower than 1s will be rounded up to 1s.
`search.solr.port` | `8093` | Integer | The port number to which Solr binds (note: binds on every interface)
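Taken together, the defaults in the table above can be sketched as a `riak.conf` fragment (these are the documented defaults; adjust per node rather than copying verbatim):

```riakconf
search = on
search.anti_entropy.data_dir = ./data/yz_anti_entropy
search.root_dir = ./data/yz
search.solr.start_timeout = 30s
search.solr.port = 8093
```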
@@ -81,7 +81,7 @@ cause Solr to require more time to start.
## Solr JVM and Ports
-Riak Search runs one Solr process per node to manage its indexing and
+Riak search runs one Solr process per node to manage its indexing and
search functionality. While the underlying project, Yokozuna, manages
index distribution, node coverage for queries, active anti-entropy
(AAE), and JVM process management, you should provide plenty of RAM and disk space for running both Riak and the JVM running Solr. We recommend a minimum of 6GB of RAM per node.
diff --git a/content/riak/kv/2.0.0/configuring/strong-consistency.md b/content/riak/kv/2.0.0/configuring/strong-consistency.md
index b580d86178..76133b1049 100644
--- a/content/riak/kv/2.0.0/configuring/strong-consistency.md
+++ b/content/riak/kv/2.0.0/configuring/strong-consistency.md
@@ -38,7 +38,7 @@ toc: true
> **Please Note:**
>
-> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
This document provides information on configuring and monitoring a Riak
cluster's optional strong consistency subsystem. Documentation for
diff --git a/content/riak/kv/2.0.0/developing/api/http/delete-search-index.md b/content/riak/kv/2.0.0/developing/api/http/delete-search-index.md
index dbe4cb0d81..5ef180825b 100644
--- a/content/riak/kv/2.0.0/developing/api/http/delete-search-index.md
+++ b/content/riak/kv/2.0.0/developing/api/http/delete-search-index.md
@@ -15,7 +15,7 @@ aliases:
- /riak/kv/2.0.0/dev/references/http/delete-search-index
---
-Deletes a Riak Search index.
+Deletes a Riak search index.
## Request
diff --git a/content/riak/kv/2.0.0/developing/api/http/fetch-search-index.md b/content/riak/kv/2.0.0/developing/api/http/fetch-search-index.md
index 8d9164bf68..37a30caecb 100644
--- a/content/riak/kv/2.0.0/developing/api/http/fetch-search-index.md
+++ b/content/riak/kv/2.0.0/developing/api/http/fetch-search-index.md
@@ -15,7 +15,7 @@ aliases:
- /riak/kv/2.0.0/dev/references/http/fetch-search-index
---
-Retrieves information about a Riak Search [index](/riak/kv/2.0.0/developing/usage/search/#simple-setup).
+Retrieves information about a Riak search [index](/riak/kv/2.0.0/developing/usage/search/#simple-setup).
## Request
diff --git a/content/riak/kv/2.0.0/developing/api/http/fetch-search-schema.md b/content/riak/kv/2.0.0/developing/api/http/fetch-search-schema.md
index 9464ac731b..85f491952b 100644
--- a/content/riak/kv/2.0.0/developing/api/http/fetch-search-schema.md
+++ b/content/riak/kv/2.0.0/developing/api/http/fetch-search-schema.md
@@ -35,4 +35,4 @@ GET /search/schema/
## Response
If the schema is found, Riak will return the contents of the schema as
-XML (all Riak Search schemas are XML).
+XML (all Riak search schemas are XML).
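Since every schema comes back as Solr XML, it can be inspected with any XML tooling. A hedged sketch using the standard library, with an inline sample standing in for the body of `GET /search/schema/<name>` (the schema and field names here are illustrative, not fetched from a live node):

```python
import xml.etree.ElementTree as ET

# A toy stand-in for a fetched Riak search schema (Solr XML).
sample = """<schema name="cartoons" version="1.5">
  <fields>
    <field name="name_s" type="string" indexed="true" stored="true"/>
  </fields>
</schema>"""

root = ET.fromstring(sample)
fields = [f.get("name") for f in root.iter("field")]
print(root.get("name"), fields)  # → cartoons ['name_s']
```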
diff --git a/content/riak/kv/2.0.0/developing/api/http/search-index-info.md b/content/riak/kv/2.0.0/developing/api/http/search-index-info.md
index 0c534a42a7..ec2fde1c8d 100644
--- a/content/riak/kv/2.0.0/developing/api/http/search-index-info.md
+++ b/content/riak/kv/2.0.0/developing/api/http/search-index-info.md
@@ -47,6 +47,6 @@ Below is the example output if there is one Search index, called
#### Typical Error Codes
-* `404 Object Not Found` --- Typically returned if Riak Search is not
+* `404 Object Not Found` --- Typically returned if Riak search is not
currently enabled on the node
* `503 Service Unavailable` --- The request timed out internally
diff --git a/content/riak/kv/2.0.0/developing/api/http/store-search-index.md b/content/riak/kv/2.0.0/developing/api/http/store-search-index.md
index 0587e3aacd..f2a61e3d06 100644
--- a/content/riak/kv/2.0.0/developing/api/http/store-search-index.md
+++ b/content/riak/kv/2.0.0/developing/api/http/store-search-index.md
@@ -15,7 +15,7 @@ aliases:
- /riak/kv/2.0.0/dev/references/http/store-search-index
---
-Creates a new Riak Search [index](/riak/kv/2.0.0/developing/usage/search/#simple-setup).
+Creates a new Riak search [index](/riak/kv/2.0.0/developing/usage/search/#simple-setup).
## Request
diff --git a/content/riak/kv/2.0.0/developing/api/http/store-search-schema.md b/content/riak/kv/2.0.0/developing/api/http/store-search-schema.md
index 7a11c6785e..ab4eac7fc5 100644
--- a/content/riak/kv/2.0.0/developing/api/http/store-search-schema.md
+++ b/content/riak/kv/2.0.0/developing/api/http/store-search-schema.md
@@ -44,7 +44,7 @@ curl -XPUT http://localhost:8098/search/schema/my_custom_schema \
* `400 Bad Request` --- The schema cannot be created because there is
something wrong with the schema itself, e.g. an XML formatting error
- that makes Riak Search unable to parse the schema
+ that makes Riak search unable to parse the schema
* `409 Conflict` --- The schema cannot be created because there is
already a schema with that name
* `503 Service Unavailable` --- The request timed out internally
diff --git a/content/riak/kv/2.0.0/developing/api/protocol-buffers/yz-index-get.md b/content/riak/kv/2.0.0/developing/api/protocol-buffers/yz-index-get.md
index 03f45e7057..fabd50f5af 100644
--- a/content/riak/kv/2.0.0/developing/api/protocol-buffers/yz-index-get.md
+++ b/content/riak/kv/2.0.0/developing/api/protocol-buffers/yz-index-get.md
@@ -15,7 +15,7 @@ aliases:
- /riak/kv/2.0.0/dev/references/protocol-buffers/yz-index-get
---
-Retrieve a search index from Riak Search.
+Retrieve a search index from Riak search.
## Request
diff --git a/content/riak/kv/2.0.0/developing/api/protocol-buffers/yz-schema-get.md b/content/riak/kv/2.0.0/developing/api/protocol-buffers/yz-schema-get.md
index 5e97360d4c..64472a511e 100644
--- a/content/riak/kv/2.0.0/developing/api/protocol-buffers/yz-schema-get.md
+++ b/content/riak/kv/2.0.0/developing/api/protocol-buffers/yz-schema-get.md
@@ -15,7 +15,7 @@ aliases:
- /riak/kv/2.0.0/dev/references/protocol-buffers/yz-schema-get
---
-Fetch a [search schema](/riak/kv/2.0.0/developing/usage/search-schemas) from Riak Search.
+Fetch a [search schema](/riak/kv/2.0.0/developing/usage/search-schemas) from Riak search.
## Request
diff --git a/content/riak/kv/2.0.0/developing/app-guide.md b/content/riak/kv/2.0.0/developing/app-guide.md
index ce4e9c024a..bf5191010c 100644
--- a/content/riak/kv/2.0.0/developing/app-guide.md
+++ b/content/riak/kv/2.0.0/developing/app-guide.md
@@ -147,22 +147,22 @@ well as relevant links to Basho documentation.
## Search
-Riak Search provides you with [Apache
+Riak search provides you with [Apache
Solr](http://lucene.apache.org/solr/)-powered full-text indexing and
querying on top of the scalability, fault tolerance, and operational
-simplicity of Riak. Our motto for Riak Search: **Write it like Riak.
+simplicity of Riak. Our motto for Riak search: **Write it like Riak.
Query it like Solr**. That is, you can store objects in Riak [like normal][usage create objects] and run full-text queries on those objects later on
using the Solr API.
-* [Using Search][usage search] --- Getting started with Riak Search
+* [Using Search][usage search] --- Getting started with Riak search
* [Search Details][use ref search] --- A detailed overview of the concepts and design
- consideration behind Riak Search
+ consideration behind Riak search
* [Search Schema][usage search schema] --- How to create custom schemas for extracting data
- from Riak Search
+ from Riak search
### When to Use Search
-* **When you need a rich querying API** --- Riak Search gives you access
+* **When you need a rich querying API** --- Riak search gives you access
to the entirety of [Solr](http://lucene.apache.org/solr/)'s extremely
broad API, which enables you to query on the basis of wildcards,
strings, booleans, geolocation, ranges, language-specific fulltext,
@@ -214,7 +214,7 @@ own.
> **Note**:
>
-> Riak Data Types can be used in conjunction with Riak Search,
+> Riak Data Types can be used in conjunction with Riak search,
meaning that the data stored in counters, sets, and maps can be indexed
and searched just like any other data in Riak. Documentation on Data
Types and Search is coming soon.
@@ -278,13 +278,13 @@ or you can write and run your own MapReduce jobs in
### When Not to Use MapReduce
* **When another Riak feature will do** --- Before even considering
- using MapReduce, you should thoroughly investigate [Riak Search][usage search] or [secondary indexes][usage 2i] as possible
+ using MapReduce, you should thoroughly investigate [Riak search][usage search] or [secondary indexes][usage 2i] as possible
solutions to your needs.
In general, you should not think of MapReduce as, for example, Hadoop
within Riak. While it can be useful for certain types of
non-primary-key-based queries, it is neither a "Big Data" processing
-tool nor an indexing mechanism nor a replacement for [Riak Search][usage search]. If you do need a tool like Hadoop or Apache Spark, you should
+tool nor an indexing mechanism nor a replacement for [Riak search][usage search]. If you do need a tool like Hadoop or Apache Spark, you should
consider using Riak in conjunction with a more suitable data processing
tool.
@@ -295,7 +295,7 @@ following problem: how do I know which keys I should look for? Secondary
indexes (2i) provide a solution to this problem, enabling you to tag
objects with either binary or integer metadata and then query Riak for
all of the keys that share specific tags. 2i is especially useful if
-you're storing binary data that is opaque to features like [Riak Search][usage search].
+you're storing binary data that is opaque to features like [Riak search][usage search].
* [Using Secondary Indexes][usage 2i] --- A general guide to using 2i, along
with code samples and information on 2i features like pagination,
@@ -323,7 +323,7 @@ you're storing binary data that is opaque to features like [Riak Search][usage s
One thing to always bear in mind is that Riak enables you to mix and
match a wide variety of approaches in a single cluster. You can use
basic CRUD operations for some of your data, index some of your data to
-be queried by Riak Search, use Riak Data Types for another subset, etc.
+be queried by Riak search, use Riak Data Types for another subset, etc.
You are always free to use a wide array of Riak features---or you can
use none at all and stick to key/value operations.
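The "Query it like Solr" model above amounts to sending ordinary Solr query syntax at a per-index HTTP endpoint. A minimal sketch of building such a request URL (stdlib only; the host, index name, field names, and the `/search/query` path shape are assumptions for illustration, not quoted from this page):

```python
from urllib.parse import urlencode

def solr_query_url(host: str, index: str, query: str) -> str:
    # wt=json asks Solr for JSON-formatted results.
    params = urlencode({"q": query, "wt": "json"})
    return f"{host}/search/query/{index}?{params}"

# Wildcards, ranges, and booleans are plain Solr syntax inside `q`:
print(solr_query_url("http://localhost:8098", "famous", "name_s:Lion* AND age_i:[25 TO *]"))
```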
diff --git a/content/riak/kv/2.0.0/developing/app-guide/advanced-mapreduce.md b/content/riak/kv/2.0.0/developing/app-guide/advanced-mapreduce.md
index ee2a1b89fc..7e2a36d209 100644
--- a/content/riak/kv/2.0.0/developing/app-guide/advanced-mapreduce.md
+++ b/content/riak/kv/2.0.0/developing/app-guide/advanced-mapreduce.md
@@ -74,7 +74,7 @@ MapReduce should generally be treated as a fallback rather than a
standard part of an application. There are often ways to model data
such that dynamic queries become single key retrievals, which are
dramatically faster and more reliable in Riak, and tools such as Riak
-Search and 2i are simpler to use and may place less strain on a
+search and 2i are simpler to use and may place less strain on a
cluster.
### R=1
diff --git a/content/riak/kv/2.0.0/developing/app-guide/strong-consistency.md b/content/riak/kv/2.0.0/developing/app-guide/strong-consistency.md
index 684d0a8518..cc9ea7656e 100644
--- a/content/riak/kv/2.0.0/developing/app-guide/strong-consistency.md
+++ b/content/riak/kv/2.0.0/developing/app-guide/strong-consistency.md
@@ -37,7 +37,7 @@ aliases:
> **Please Note:**
>
-> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
In versions 2.0 and later, Riak allows you to create buckets that
provide [strong consistency][use ref strong consistency] guarantees for the data stored within
diff --git a/content/riak/kv/2.0.0/developing/data-modeling.md b/content/riak/kv/2.0.0/developing/data-modeling.md
index 18dc36566a..5b18df2841 100644
--- a/content/riak/kv/2.0.0/developing/data-modeling.md
+++ b/content/riak/kv/2.0.0/developing/data-modeling.md
@@ -140,7 +140,7 @@ For storing log data from different systems, you could use unique
buckets for each system (e.g. `system1_log_data`, `system2_log_data`,
etc.) and write associated logs to the corresponding buckets. To
analyze that data, you could use Riak's MapReduce system for aggregation
-tasks, such as summing the counts of records for a date or Riak Search
+tasks, such as summing the counts of records for a date or Riak search
for more robust, text-based queries.
### Log Data Complex Case
@@ -224,7 +224,7 @@ For simple retrieval of a specific account, a user ID (plus perhaps a
secondary index on a username or email) is enough. If you foresee the
need to make queries on additional user attributes (e.g. creation time,
user type, or region), plan ahead and either set up additional secondary
-indexes or consider using [Riak Search](/riak/kv/2.0.0/developing/usage/search/) to index the JSON
+indexes or consider using [Riak search](/riak/kv/2.0.0/developing/usage/search/) to index the JSON
contents of the user account.
### User Accounts Community Examples
@@ -310,7 +310,7 @@ In Riak, you can store content of any kind, from HTML files to plain
text to JSON or XML or another document type entirely. Keep in mind that
data in Riak is opaque, with the exception of [Riak Data Types](/riak/kv/2.0.0/developing/data-types),
and so Riak won't "know" about the object unless it is indexed
-[using Riak Search](/riak/kv/2.0.0/developing/usage/search/) or [using secondary indexes](/riak/kv/2.0.0/developing/usage/secondary-indexes/).
+[using Riak search](/riak/kv/2.0.0/developing/usage/search/) or [using secondary indexes](/riak/kv/2.0.0/developing/usage/secondary-indexes/).
### Articles et al Complex Case
@@ -329,7 +329,7 @@ with comments would require your application to call from the posts
and comments buckets to assemble the view.
Other possible cases may involve performing operations on content beyond
-key/value pairs. [Riak Search](/riak/kv/2.0.0/developing/usage/search/) is recommended for use cases
+key/value pairs. [Riak search](/riak/kv/2.0.0/developing/usage/search/) is recommended for use cases
involving full-text search. For lighter-weight querying,
[using secondary indexes](/riak/kv/2.0.0/developing/usage/secondary-indexes/) \(2i) enables you to add metadata to objects to
either query for exact matches or to perform range queries. 2i also
diff --git a/content/riak/kv/2.0.0/developing/data-types.md b/content/riak/kv/2.0.0/developing/data-types.md
index 67cc1c2c16..679092ac28 100644
--- a/content/riak/kv/2.0.0/developing/data-types.md
+++ b/content/riak/kv/2.0.0/developing/data-types.md
@@ -259,7 +259,7 @@ All the examples use the bucket type names from above (`counters`, `sets`, and `
Riak data types can be searched like any other object, but with the
added benefit that your data type is indexed as a different type by Solr,
-the search platform behind Riak Search.
+the search platform behind Riak search.
In our Search documentation we offer a [full tutorial](../usage/searching-data-types) as well as a variety of [examples](../usage/search/#data-types-and-search-examples), including code
samples from each of our official client libraries.
diff --git a/content/riak/kv/2.0.0/developing/usage.md b/content/riak/kv/2.0.0/developing/usage.md
index 30d5d53077..320037844f 100644
--- a/content/riak/kv/2.0.0/developing/usage.md
+++ b/content/riak/kv/2.0.0/developing/usage.md
@@ -107,7 +107,7 @@ Tutorial on using Riak KV as a document store.
#### [Custom Extractors](./custom-extractors)
-Details on creating and registering custom extractors with Riak Search.
+Details on creating and registering custom extractors with Riak search.
[Learn More >>](./custom-extractors)
diff --git a/content/riak/kv/2.0.0/developing/usage/custom-extractors.md b/content/riak/kv/2.0.0/developing/usage/custom-extractors.md
index 97936b2ea8..371c6ca10e 100644
--- a/content/riak/kv/2.0.0/developing/usage/custom-extractors.md
+++ b/content/riak/kv/2.0.0/developing/usage/custom-extractors.md
@@ -15,8 +15,8 @@ aliases:
- /riak/kv/2.0.0/dev/search/custom-extractors
---
-Solr, and by extension Riak Search, has default extractors for a wide
-variety of data types, including JSON, XML, and plaintext. Riak Search
+Solr, and by extension Riak search, has default extractors for a wide
+variety of data types, including JSON, XML, and plaintext. Riak search
ships with the following extractors:
Content Type | Erlang Module
@@ -30,7 +30,7 @@ No specified type | `yz_noop_extractor`
There are also built-in extractors for [Riak Data Types](/riak/kv/2.0.0/developing/usage/searching-data-types).
If you're working with a data format that does not have a default Solr
-extractor, you can create your own and register it with Riak Search.
+extractor, you can create your own and register it with Riak search.
We'll show you how to do so by way of example.
## The Extractor Interface
@@ -195,7 +195,7 @@ extractor has been successfully registered.
## Verifying Our Custom Extractor
-Now that Riak Search knows how to decode and extract HTTP header packet
+Now that Riak search knows how to decode and extract HTTP header packet
data, let's store some in Riak and then query it. We'll put the example
packet data from above in a `google_packet.bin` file. Then, we'll `PUT`
that binary to Riak's `/search/extract` endpoint:
diff --git a/content/riak/kv/2.0.0/developing/usage/document-store.md b/content/riak/kv/2.0.0/developing/usage/document-store.md
index 65681cb64c..ae26ae9248 100644
--- a/content/riak/kv/2.0.0/developing/usage/document-store.md
+++ b/content/riak/kv/2.0.0/developing/usage/document-store.md
@@ -16,18 +16,18 @@ aliases:
---
Although Riak wasn't explicitly created as a document store, two
-features recently added to Riak---[Riak Search](/riak/kv/2.0.0/developing/usage/search/) and [Riak Data Types](/riak/kv/2.0.0/developing/data-types/)---make it possible to use Riak as a
+features recently added to Riak---[Riak search](/riak/kv/2.0.0/developing/usage/search/) and [Riak Data Types](/riak/kv/2.0.0/developing/data-types/)---make it possible to use Riak as a
highly scalable document store with rich querying capabilities. In this
tutorial, we'll build a basic implementation of a document store using
[Riak maps](/riak/kv/2.0.0/developing/data-types/maps).
## Basic Approach
-Riak Search enables you to implement a document store in Riak in a
+Riak search enables you to implement a document store in Riak in a
variety of ways. You could, for example, store and query JSON objects or
XML and then retrieve them later via Solr queries. In this tutorial,
however, we will store data in [Riak maps](/riak/kv/2.0.0/developing/data-types/maps),
-index that data using Riak Search, and then run Solr queries against
+index that data using Riak search, and then run Solr queries against
those stored objects.
You can think of these Search indexes as **collections**. Each indexed
@@ -65,7 +65,7 @@ Date posted | Register | Datetime
Whether the post is currently in draft form | Flag | Boolean
Before we start actually creating and storing blog posts, let's set up
-Riak Search with an appropriate index and schema.
+Riak search with an appropriate index and schema.
## Creating a Schema and Index
@@ -209,7 +209,7 @@ curl -XPUT $RIAK_HOST/search/index/blog_posts \
Collections are not a concept that is native to Riak but we can easily
mimic collections by thinking of a bucket type as a collection. When we
-associate a bucket type with a Riak Search index, all of the objects
+associate a bucket type with a Riak search index, all of the objects
stored in any bucket of that bucket type will be queryable on the basis
of that one index. For this tutorial, we'll create a bucket type called
`cms` and think of that as a collection. We could also restrict our
diff --git a/content/riak/kv/2.0.0/developing/usage/search-schemas.md b/content/riak/kv/2.0.0/developing/usage/search-schemas.md
index ca22f14adf..bce719e421 100644
--- a/content/riak/kv/2.0.0/developing/usage/search-schemas.md
+++ b/content/riak/kv/2.0.0/developing/usage/search-schemas.md
@@ -19,21 +19,21 @@ aliases:
> **Note on Search 2.0 vs. Legacy Search**
>
-> This document refers to the new Riak Search 2.0 with
+> This document refers to the new Riak search 2.0 with
[Solr](http://lucene.apache.org/solr/) integration (codenamed
-Yokozuna). For information about the deprecated Riak Search, visit [the old Using Riak Search docs](http://docs.basho.com/riak/1.4.10/dev/using/search/).
+Yokozuna). For information about the deprecated Riak search, visit [the old Using Riak search docs](http://docs.basho.com/riak/1.4.10/dev/using/search/).
-Riak Search is built for ease of use, allowing you to write values into
-Riak and query for values using Solr. Riak Search does a lot of work
+Riak search is built for ease of use, allowing you to write values into
+Riak and query for values using Solr. Riak search does a lot of work
under the hood to convert your values---plain text, JSON, XML, [Riak Data Types](/riak/kv/2.0.0/developing/data-types/), and [more](/riak/kv/2.0.0/developing/usage/custom-extractors)---into something that can be indexed and searched later.
Nonetheless, you must still instruct Riak/Solr how to index a value. Are
you providing an array of strings? An integer? A date? Is your text in
-English or Russian? You can provide such instructions to Riak Search by
+English or Russian? You can provide such instructions to Riak search by
defining a Solr **schema**.
## The Default Schema
-Riak Search comes bundled with a default schema named `_yz_default`. The
+Riak search comes bundled with a default schema named `_yz_default`. The
default schema covers a wide range of possible field types. You can find
the default schema [on GitHub](https://raw.github.com/basho/yokozuna/develop/priv/default_schema.xml).
While using the default schema provides an easy path to starting
@@ -47,7 +47,7 @@ amounts of disk space, so pay special attention to those indexes.
We'll show you how you can create custom schemas by way of example.
Let's say that you have already created a schema named `cartoons` in a
file named `cartoons.xml`. This would register the custom schema in Riak
-Search:
+search:
```java
import org.apache.commons.io.FileUtils;
@@ -123,11 +123,11 @@ curl -XPUT http://localhost:8098/search/schema/cartoons \
The first step in creating a custom schema is to define exactly what
fields you must index. Part of that step is understanding how Riak
-Search extractors function.
+search extractors function.
### Extractors
-In Riak Search, extractors are modules responsible for pulling out a
+In Riak search, extractors are modules responsible for pulling out a
list of fields and values from a Riak object. How this is achieved
depends on the object's content type, but the two common cases are JSON
and XML, which operate similarly. Our examples here will use JSON.
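As a rough illustration of what an extractor does, here is a hedged Python sketch (not Yokozuna's actual Erlang implementation) that flattens a nested JSON object into the kind of field/value pairs Solr can index, assuming a dot-separated naming scheme for nested keys:

```python
import json

def extract_fields(doc, prefix=""):
    """Flatten nested JSON into (field_name, value) pairs,
    joining nested keys with '.' (an assumed convention)."""
    fields = []
    for key, value in doc.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            fields.extend(extract_fields(value, prefix=f"{name}."))
        elif isinstance(value, list):
            # arrays become repeated values under the same field name
            for item in value:
                fields.append((name, item))
        else:
            fields.append((name, value))
    return fields

cat = json.loads('{"name": "Liono", "info": {"leader": true, "nicknames": ["lion-o"]}}')
print(extract_fields(cat))
```

The field names and separator here are illustrative; the authoritative behavior is defined by the extractor module registered for each content type.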
@@ -175,21 +175,21 @@ Solr schemas can be very complex, containing many types and analyzers.
Refer to the [Solr 4.7 reference
guide](http://archive.apache.org/dist/lucene/solr/ref-guide/apache-solr-ref-guide-4.7.pdf)
for a complete list. You should be aware, however, that there are a few
-fields that are required by Riak Search in order to properly distribute
+fields that are required by Riak search in order to properly distribute
an object across a [cluster][concept clusters]. These fields are all prefixed
with `_yz`, which stands for
[Yokozuna](https://github.com/basho/yokozuna), the original code name
-for Riak Search.
+for Riak search.
Below is a bare minimum skeleton Solr Schema. It won't do much for you
-other than allow Riak Search to properly manage your stored objects.
+other than allow Riak search to properly manage your stored objects.
```xml
-
+
@@ -210,14 +210,14 @@ other than allow Riak Search to properly manage your stored objects.
```
-If you're missing any of the above fields, Riak Search will reject your
+If you're missing any of the above fields, Riak search will reject your
custom schema. The value for `` _must_ be `_yz_id`.
In the table below, you'll find a description of the various required
fields. You'll rarely need to use any fields other than `_yz_rt` (bucket
type), `_yz_rb` (bucket) and `_yz_rk` (Riak key). On occasion, `_yz_err`
can be helpful if you suspect that your extractors are failing.
-Malformed JSON or XML will cause Riak Search to index a key and set
+Malformed JSON or XML will cause Riak search to index a key and set
`_yz_err` to 1, allowing you to reindex with proper values later.
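Because malformed JSON is what flips `_yz_err` to 1, a client can cheaply validate payloads before writing them. A minimal sketch, assuming the payload is intended for the JSON extractor:

```python
import json

def is_indexable_json(payload):
    """Return True if payload parses as JSON; a malformed payload
    would still be stored, but flagged with _yz_err = 1 at index time."""
    try:
        json.loads(payload)
        return True
    except ValueError:
        return False

print(is_indexable_json('{"name": "Snarf"}'))  # True
print(is_indexable_json('{"name": '))          # False
```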
Field | Name | Description
@@ -262,7 +262,7 @@ field, you also must set `multiValued` to `true`.
-
+
diff --git a/content/riak/kv/2.0.0/developing/usage/search.md b/content/riak/kv/2.0.0/developing/usage/search.md
index 6f141b188d..d84195a1a1 100644
--- a/content/riak/kv/2.0.0/developing/usage/search.md
+++ b/content/riak/kv/2.0.0/developing/usage/search.md
@@ -19,9 +19,9 @@ aliases:
## Setup
-Riak Search 2.0 is an integration of Solr (for indexing and querying)
+Riak search 2.0 is an integration of Solr (for indexing and querying)
and Riak (for storage and distribution). There are a few points of
-interest that a user of Riak Search will have to keep in mind in order
+interest that a user of Riak search will have to keep in mind in order
to properly store and later query for values.
1. **Schemas** explain to Solr how to index fields
@@ -29,7 +29,7 @@ to properly store and later query for values.
3. **Bucket-index association** signals to Riak *when* to index values
(this also includes bucket type-index association)
-Riak Search must first be configured with a Solr schema so that Solr
+Riak search must first be configured with a Solr schema so that Solr
knows how to index value fields. If you don't define one, you're
provided with a default schema named `_yz_default`, which can be found
[on
@@ -38,7 +38,7 @@ GitHub](https://raw.githubusercontent.com/basho/yokozuna/develop/priv/default_sc
The examples in this document will presume the default. You can read
more about creating custom schemas in [Search Schema][usage search schema], which you'll likely want to use in a production environment.
-Next, you must create a named Solr index through Riak Search. This index
+Next, you must create a named Solr index through Riak search. This index
represents a collection of similar data that you connect with to perform
queries. When creating an index, you can optionally provide a schema. If
you do not, the default schema will be used. Here we'll use `curl` to create an
@@ -246,7 +246,7 @@ More information can be found in the [Solr
documentation](http://wiki.apache.org/solr/SolrPerformanceFactors).
With a Solr schema, index, and association in place (and possibly a
-security setup as well), we're ready to start using Riak Search. First,
+security setup as well), we're ready to start using Riak search. First,
populate the `cat` bucket with values, in this case information about
four cats: Liono, Cheetara, Snarf, and Panthro.
@@ -495,12 +495,12 @@ curl -XPUT $RIAK_HOST/types/animals/buckets/cats/keys/panthro \
```
If you've used Riak before, you may have noticed that this is no
-different from storing values without Riak Search. That's because we
-designed Riak Search with the following design goal in mind:
+different from storing values without Riak search. That's because we
+designed Riak search with the following design goal in mind:
#### Write it like Riak, query it like Solr
-But how does Riak Search know how to index values, given that you can
+But how does Riak search know how to index values, given that you can
store opaque values in Riak? For that, we employ extractors.
## Extractors
@@ -510,7 +510,7 @@ content type and convert it into a list of fields that can be indexed by
Solr. This is done transparently and automatically as part of the
indexing process. You can even create your own [custom extractors](/riak/kv/2.0.0/developing/usage/custom-extractors).
-Our current example uses the JSON extractor, but Riak Search also
+Our current example uses the JSON extractor, but Riak search also
extracts indexable fields from the following content types:
* JSON (`application/json`)
@@ -560,7 +560,7 @@ one of the default types. A full tutorial can be found in [Custom Search Extract
### Automatic Fields
-When a Riak object is indexed, Riak Search automatically inserts a few
+When a Riak object is indexed, Riak search automatically inserts a few
extra fields as well. These are necessary for a variety of technical
reasons, and for the most part you don't need to think about them.
However, there are a few fields which you may find useful:
@@ -1248,7 +1248,7 @@ curl "$RIAK_HOST/search/query/famous?wt=json&q=*:*&start=$START&rows=$ROWS_PER_P
### Pagination Warning
-Distributed pagination in Riak Search cannot be used reliably when
+Distributed pagination in Riak search cannot be used reliably when
sorting on fields that can have different values per replica of the same
object, namely `score` and `_yz_id`. In the case of sorting by these
fields, you may receive redundant objects. In the case of `score`, the
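The `start`/`rows` arithmetic used in the query example above can be sketched as follows; the parameter names come from Solr's pagination interface, and the helper name is illustrative:

```python
def page_params(page, rows_per_page):
    """Solr-style pagination: 'start' is a zero-based offset into
    the result set, 'rows' is the page size."""
    if page < 1:
        raise ValueError("pages are numbered from 1")
    return {"start": rows_per_page * (page - 1), "rows": rows_per_page}

print(page_params(3, 10))  # {'start': 20, 'rows': 10}
```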
@@ -1272,14 +1272,14 @@ fix this shortcoming in a future version of Riak.
### MapReduce
-Riak Search allows for piping search results as inputs for
+Riak search allows for piping search results as inputs for
[MapReduce](/riak/kv/2.0.0/developing/usage/mapreduce/) jobs. This is a useful cross-section for
performing post-calculations of results or aggregations of ad-hoc
-queries. The Riak Search MapReduce integration works similarly to
+queries. The Riak search MapReduce integration works similarly to
regular MapReduce, with the notable exception that your input is not a
bucket, but rather index and query arguments to the `yokozuna` module
and `mapred_search` function (an Erlang `module:function` pair that adds
-the Riak Search hook to MapReduce).
+the Riak search hook to MapReduce).
```json
{
diff --git a/content/riak/kv/2.0.0/developing/usage/secondary-indexes.md b/content/riak/kv/2.0.0/developing/usage/secondary-indexes.md
index 1ddecc5ad9..106858f7ae 100644
--- a/content/riak/kv/2.0.0/developing/usage/secondary-indexes.md
+++ b/content/riak/kv/2.0.0/developing/usage/secondary-indexes.md
@@ -19,12 +19,12 @@ aliases:
[plan backend memory]: /riak/kv/2.0.0/setup/planning/backend/memory
[use ref strong consistency]: /riak/2.1.3/using/reference/strong-consistency
-> **Note: Riak Search preferred for querying**
+> **Note: Riak search preferred for querying**
>
> If you're interested in non-primary-key-based querying in Riak, i.e. if
you're looking to go beyond straightforward K/V operations, we now
-recommend [Riak Search](/riak/kv/2.0.0/developing/usage/search/) rather than secondary indexes for
-a variety of reasons. Most importantly, Riak Search has a far more
+recommend [Riak search](/riak/kv/2.0.0/developing/usage/search/) rather than secondary indexes for
+a variety of reasons. Most importantly, Riak search has a far more
capacious querying API and can be used with all of Riak's storage
backends.
@@ -37,7 +37,7 @@ Secondary indexes can be either a binary or string, such as
`sensor_1_data` or `admin_user` or `click_event`, or an integer, such as
`99` or `141121`.
-[Riak Search](/riak/kv/2.0.0/developing/usage/search/) serves analogous purposes but is quite
+[Riak search](/riak/kv/2.0.0/developing/usage/search/) serves analogous purposes but is quite
different because it parses key/value data itself and builds indexes on
the basis of Solr schemas.
@@ -75,7 +75,7 @@ you to discover them later. Indexing enables you to tag those objects
and find all objects with the same tag in a specified bucket later on.
2i is thus recommended when your use case requires an easy-to-use search
-mechanism that does not require a schema (as does [Riak Search](/riak/kv/2.0.0/using/reference/search/#schemas)) and a basic query interface, i.e. an interface that
+mechanism that does not require a schema (as does [Riak search](/riak/kv/2.0.0/using/reference/search/#schemas)) and a basic query interface, i.e. an interface that
enables an application to tell Riak things like "fetch all objects
tagged with the string `Milwaukee_Bucks`" or "fetch all objects tagged
with numbers between 1500 and 1509."
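A range query like that last one maps onto the 2i HTTP interface as a URL of the form `/buckets/<bucket>/index/<index>/<start>/<end>`. A hedged sketch that only builds the URL (host, bucket, and index names here are illustrative):

```python
def range_query_url(host, bucket, index, start, end):
    """Build a 2i range-query URL in the shape documented for
    Riak's HTTP API; all names here are placeholders."""
    return f"{host}/buckets/{bucket}/index/{index}/{start}/{end}"

print(range_query_url("http://localhost:8098", "clicks", "timestamp_int", 1500, 1509))
```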
@@ -89,7 +89,7 @@ piggybacks off of read-repair.
* If your ring size exceeds 512 partitions, 2i can cause performance
issues in large clusters.
* When you need more than the exact match and range searches that 2i
- supports. If that's the case, we recommend checking out [Riak Search](/riak/kv/2.0.0/developing/usage/search/).
+ supports. If that's the case, we recommend checking out [Riak search](/riak/kv/2.0.0/developing/usage/search/).
* When you want to use composite queries. A query like
`last_name=zezeski AND state=MD` would have to be split into two
queries and the results merged (or it would need to involve
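The client-side merge that a split composite query requires can be sketched as a simple intersection of the two result sets (a hypothetical helper, not part of any Riak client library):

```python
def merge_composite(results_a, results_b):
    """Client-side AND of two 2i result sets: intersect the key
    lists, preserving the order of the first query's results."""
    b_keys = set(results_b)
    return [k for k in results_a if k in b_keys]

print(merge_composite(["u1", "u2", "u3"], ["u3", "u1"]))  # ['u1', 'u3']
```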
diff --git a/content/riak/kv/2.0.0/introduction.md b/content/riak/kv/2.0.0/introduction.md
index 5fd99a3740..1987b2e43c 100644
--- a/content/riak/kv/2.0.0/introduction.md
+++ b/content/riak/kv/2.0.0/introduction.md
@@ -30,10 +30,10 @@ that all of the new features listed below are optional:
* **Riak Data Types** --- Riak's new CRDT-based [Data Types](/riak/kv/2.0.0/developing/data-types) can
simplify modeling data in Riak, but are only used in buckets
explicitly configured to use them.
-* **Strong Consistency, Riak Security, and the New Riak Search** ---
+* **Strong Consistency, Riak Security, and the New Riak search** ---
These are subsystems in Riak that must be explicitly turned on to
work. If not turned on, they will have no impact on performance.
- Furthermore, the older Riak Search will continue to be included with
+ Furthermore, the older Riak search will continue to be included with
Riak.
* **Security** --- [Authentication and authorization](/riak/kv/2.0.0/using/security/basics) can be enabled
or disabled at any time.
@@ -98,21 +98,21 @@ Brown](https://github.com/russelldb).
## Riak Search 2.0 (codename: Yokozuna)
-Riak Search 2.0 is a complete, top-to-bottom replacement for Riak
-Search, integrating Riak with [Apache Solr](https://lucene.apache.org/solr/)'s full-text search capabilities and supporting Solr's client query APIs.
+Riak search 2.0 is a complete, top-to-bottom replacement for Riak
+search, integrating Riak with [Apache Solr](https://lucene.apache.org/solr/)'s full-text search capabilities and supporting Solr's client query APIs.
#### Relevant Docs
* [Using Search](/riak/kv/2.0.0/developing/usage/search) provides an overview of how to use the new
- Riak Search.
+ Riak search.
* [Search Schema](/riak/kv/2.0.0/developing/usage/search-schemas) shows you how to create and manage custom search
schemas.
* [Search Details](/riak/kv/2.0.0/using/reference/search) provides an in-depth look at the design
- considerations that went into the new Riak Search.
+ considerations that went into the new Riak search.
#### Video
-[Riak Search 2.0](https://www.youtube.com/watch?v=-c1eynVLNMo) by Basho
+[Riak search 2.0](https://www.youtube.com/watch?v=-c1eynVLNMo) by Basho
engineer and documentarian [Eric Redmond](https://github.com/coderoshi).
## Strong Consistency
@@ -145,7 +145,7 @@ check out [part 2](https://www.youtube.com/watch?v=gXJxbhca5Xg).
Riak 2.0 enables you to manage:
* **Authorization** to perform specific tasks, from GETs and PUTs to
-running MapReduce jobs to administering Riak Search.
+running MapReduce jobs to administering Riak search.
* **Authentication** of Riak clients seeking access to Riak.
@@ -317,7 +317,7 @@ another. Incompatibilities are marked with a
-**†** The data indexed by Riak Search can be
+**†** The data indexed by Riak search can be
stored in a strongly consistent fashion, but indexes themselves are
eventually consistent
**‡** If secondary indexes are attached to an
diff --git a/content/riak/kv/2.0.0/learn/concepts/strong-consistency.md b/content/riak/kv/2.0.0/learn/concepts/strong-consistency.md
index 0bd9eedda2..1199cdb3cc 100644
--- a/content/riak/kv/2.0.0/learn/concepts/strong-consistency.md
+++ b/content/riak/kv/2.0.0/learn/concepts/strong-consistency.md
@@ -20,7 +20,7 @@ aliases:
Please Note:
-Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
Riak was originally designed as an [eventually consistent](/riak/kv/2.0.0/learn/concepts/eventual-consistency) system, fundamentally geared toward providing partition
diff --git a/content/riak/kv/2.0.0/learn/glossary.md b/content/riak/kv/2.0.0/learn/glossary.md
index 16e71bb1bd..63cd6605aa 100644
--- a/content/riak/kv/2.0.0/learn/glossary.md
+++ b/content/riak/kv/2.0.0/learn/glossary.md
@@ -275,7 +275,7 @@ best described as "UNIX pipes for Riak."
## Riak Search
-Riak Search is a distributed, scalable, failure-tolerant, realtime,
+Riak search is a distributed, scalable, failure-tolerant, realtime,
full-text search engine integrating [Apache
Solr](https://lucene.apache.org/solr/) with Riak KV.
diff --git a/content/riak/kv/2.0.0/learn/use-cases.md b/content/riak/kv/2.0.0/learn/use-cases.md
index 9ef8456458..fae7651e9c 100644
--- a/content/riak/kv/2.0.0/learn/use-cases.md
+++ b/content/riak/kv/2.0.0/learn/use-cases.md
@@ -153,7 +153,7 @@ For storing log data from different systems, you could use unique
buckets for each system (e.g. `system1_log_data`, `system2_log_data`,
etc.) and write associated logs to the corresponding buckets. To
analyze that data, you could use Riak's MapReduce system for aggregation
-tasks, such as summing the counts of records for a date or Riak Search
+tasks, such as summing the counts of records for a date or Riak search
for more robust, text-based queries.
### Log Data Complex Case
@@ -237,7 +237,7 @@ For simple retrieval of a specific account, a user ID (plus perhaps a
secondary index on a username or email) is enough. If you foresee the
need to make queries on additional user attributes (e.g. creation time,
user type, or region), plan ahead and either set up additional secondary
-indexes or consider using [Riak Search][usage search] to index the JSON
+indexes or consider using [Riak search][usage search] to index the JSON
contents of the user account.
### User Accounts Community Examples
@@ -323,7 +323,7 @@ In Riak, you can store content of any kind, from HTML files to plain
text to JSON or XML or another document type entirely. Keep in mind that
data in Riak is opaque, with the exception of [Riak Data Types](/riak/kv/2.0.0/developing/data-types),
and so Riak won't "know" about the object unless it is indexed
-[using Riak Search][usage search] or [using secondary indexes][usage secondary-indexes].
+[using Riak search][usage search] or [using secondary indexes][usage secondary-indexes].
### Articles et al Complex Case
@@ -342,7 +342,7 @@ with comments would require your application to call from the posts
and comments buckets to assemble the view.
Other possible cases may involve performing operations on content beyond
-key/value pairs. [Riak Search][usage search] is recommended for use cases
+key/value pairs. [Riak search][usage search] is recommended for use cases
involving full-text search. For lighter-weight querying,
[using secondary indexes][usage secondary-indexes] \(2i) enables you to add metadata to objects to
either query for exact matches or to perform range queries. 2i also
diff --git a/content/riak/kv/2.0.0/release-notes.md b/content/riak/kv/2.0.0/release-notes.md
index 642c1490b9..436377a7ea 100644
--- a/content/riak/kv/2.0.0/release-notes.md
+++ b/content/riak/kv/2.0.0/release-notes.md
@@ -154,7 +154,7 @@ document.
### Search 2 (Yokozuna)
-The brand new and completely re-architected Riak Search, codenamed
+The brand new and completely re-architected Riak search, codenamed
Yokozuna, [kept its own release
notes](https://github.com/basho/yokozuna/blob/develop/docs/RELEASE_NOTES.md)
while it was being developed. Please read there for the most relevant
@@ -311,8 +311,8 @@ be found in the **Termination Notices** section below.
* JavaScript MapReduce is deprecated; we have expanded our
[Erlang MapReduce](http://docs.basho.com/riak/2.0.0/dev/advanced/mapreduce/)
documentation to assist with the transition.
-* Riak Search 1.0 is being phased out in favor of the new Solr-based
- [Riak Search 2.0](http://docs.basho.com/riak/2.0.0/dev/advanced/search/).
+* Riak search 1.0 is being phased out in favor of the new Solr-based
+ [Riak search 2.0](http://docs.basho.com/riak/2.0.0/dev/advanced/search/).
Version 1.0 will not work if security is enabled.
* v2 replication (a component of Riak Enterprise) has been superseded
by v3 and will be removed in the future.
@@ -394,7 +394,7 @@ list below.
* [**riak_auth_mods** - Security authentication modules for Riak](https://github.com/basho/riak_auth_mods)
* [**riak_dt** - Convergent replicated datatypes (CRDTs) in Erlang](https://github.com/basho/riak_dt)
* [**riak_ensemble** - Multi-Paxos framework in Erlang](https://github.com/basho/riak_ensemble)
-* [**Yokozuna** - Riak Search 2, Riak + Solr](https://github.com/basho/yokozuna)
+* [**Yokozuna** - Riak search 2, Riak + Solr](https://github.com/basho/yokozuna)
#### Merged PRs
@@ -1063,7 +1063,7 @@ list below.
* riak_search/154: [Fix search tests](https://github.com/basho/riak_search/pull/154)
* riak_search/156: [Don't start riak_search is security is enabled](https://github.com/basho/riak_search/pull/156)
* riak_search/158: [Allow search to start if security is enabled, just disable its APIs](https://github.com/basho/riak_search/pull/158)
-* riak_search/160: [Add deprecation notice on Riak Search startup](https://github.com/basho/riak_search/pull/160)
+* riak_search/160: [Add deprecation notice on Riak search startup](https://github.com/basho/riak_search/pull/160)
* riak_snmp/10: [fix unit tests for Erlang R16B01](https://github.com/basho/riak_snmp/pull/10)
* riak_snmp/11: [look for mib_dir in riak_snmp not riak](https://github.com/basho/riak_snmp/pull/11)
* riak_snmp/12: [fix "make clean" and .gitignore](https://github.com/basho/riak_snmp/pull/12)
diff --git a/content/riak/kv/2.0.0/setup/installing/source/jvm.md b/content/riak/kv/2.0.0/setup/installing/source/jvm.md
index b74cda253d..1157ff8709 100644
--- a/content/riak/kv/2.0.0/setup/installing/source/jvm.md
+++ b/content/riak/kv/2.0.0/setup/installing/source/jvm.md
@@ -21,10 +21,10 @@ aliases:
[usage search]: /riak/kv/2.0.0/developing/usage/search
-If you are using [Riak Search 2.0][usage search], codename Yokozuna,
+If you are using [Riak search 2.0][usage search], codename Yokozuna,
you will need to install **Java 1.6 or later** to run [Apache
Solr](https://lucene.apache.org/solr/), the search platform that powers
-Riak Search.
+Riak search.
We recommend using Oracle's [JDK
7u25](http://www.oracle.com/technetwork/java/javase/7u25-relnotes-1955741.html).
@@ -35,7 +35,7 @@ page](http://www.oracle.com/technetwork/java/javase/documentation/index.html).
## Installing Solr on OS X
-If you're using Riak Search on Mac OS X, you may see the following
+If you're using Riak search on Mac OS X, you may see the following
error:
```java
diff --git a/content/riak/kv/2.0.0/setup/planning/backend/bitcask.md b/content/riak/kv/2.0.0/setup/planning/backend/bitcask.md
index 7e5cb7de0a..6917395ed8 100644
--- a/content/riak/kv/2.0.0/setup/planning/backend/bitcask.md
+++ b/content/riak/kv/2.0.0/setup/planning/backend/bitcask.md
@@ -751,7 +751,7 @@ bitcask.expiry.grace_time = 1h
#### Automatic expiration and Riak Search
-If you are using [Riak Search][usage search] in conjunction with
+If you are using [Riak search][usage search] in conjunction with
Bitcask, please be aware that automatic expiry does not apply to [Search Indexes](../../../../developing/usage/search). If objects are indexed using Search,
those objects can be expired by Bitcask yet still registered in Search
indexes, which means that Search queries may return keys that no longer
diff --git a/content/riak/kv/2.0.0/setup/upgrading/checklist.md b/content/riak/kv/2.0.0/setup/upgrading/checklist.md
index 225b4b64d2..384f8feb98 100644
--- a/content/riak/kv/2.0.0/setup/upgrading/checklist.md
+++ b/content/riak/kv/2.0.0/setup/upgrading/checklist.md
@@ -89,7 +89,7 @@ We have compiled these considerations and questions into separate categories for
place if `allow_mult` is set to `true`?
- Have you carefully weighed the [consistency trade-offs][concept eventual consistency] that must be made if `allow_mult` is set to `false`?
- Are all of your [apps replication properties][apps replication properties] configured correctly and uniformly across the cluster?
- - If you are using [Riak Search][usage search], is it enabled on all
+ - If you are using [Riak search][usage search], is it enabled on all
nodes? If you are not, has it been disabled on all nodes?
- If you are using [strong consistency][concept strong consistency] for some or all of your
data:
diff --git a/content/riak/kv/2.0.0/setup/upgrading/search.md b/content/riak/kv/2.0.0/setup/upgrading/search.md
index c53896c7b2..4fc23e5e5d 100644
--- a/content/riak/kv/2.0.0/setup/upgrading/search.md
+++ b/content/riak/kv/2.0.0/setup/upgrading/search.md
@@ -16,7 +16,7 @@ version_history:
If you're using Search in a version of Riak prior to 2.0 (1.3.0 to
1.4.x), you should follow these steps to migrate your search indexes
-from the legacy `merge_index` to the new Solr-backed ([Yokozuna](../../../using/reference/search) indexes. The legacy version of Riak Search is now deprecated
+from the legacy `merge_index` to the new Solr-backed ([Yokozuna](../../../using/reference/search)) indexes. The legacy version of Riak search is now deprecated
and does not support most new 2.0 features, i.e. no [Riak Data Types](../../../developing/data-types), [bucket types](../../../using/reference/bucket-types), [strong consistency](../../../using/reference/strong-consistency), or [security](../../../using/security/), so we highly recommend that you migrate.
And please note that the legacy `merge_index`-based search (aka legacy
@@ -32,7 +32,7 @@ these steps at a time when your cluster is relatively light on traffic,
i.e. _not_ the week before Christmas.
The main goal of a live migration is to stand up indexes in the new Riak
-Search that parallel the existing ones in legacy. New writes add entries
+search that parallel the existing ones in legacy. New writes add entries
to both indexes while AAE adds entries in the new indexes for existing
data.
@@ -72,7 +72,7 @@ algorithm is bad at getting rid of large index files.
## Steps to Upgrading
1. First, you'll perform a normal [rolling upgrade](../cluster).
- As you upgrade, enable `yokozuna` (the new Riak Search library) on
+ As you upgrade, enable `yokozuna` (the new Riak search library) on
each node. If you're still using `app.config` it's called `yokozuna`.
If you've chosen to upgrade to the new `riak.conf` config option, it's
called `search`.
@@ -220,7 +220,7 @@ have occurred on every node.
node you know that AAE has brought all new indexes up to date.
7. Next, call the following command that will give HTTP and PB query
-control to the new Riak Search.
+control to the new Riak search.
```curl
riak-admin search switch-to-new-search
@@ -244,7 +244,7 @@ buckets. This deactivates legacy Search.
-d '{"props":{"search": false}}'
```
-9. Disable the Riak Search process on each node by setting `riak_search`
+9. Disable the Riak search process on each node by setting `riak_search`
`enabled` to `false`.
```appconfig
diff --git a/content/riak/kv/2.0.0/setup/upgrading/version.md b/content/riak/kv/2.0.0/setup/upgrading/version.md
index fdc2abe348..ac2b44f375 100644
--- a/content/riak/kv/2.0.0/setup/upgrading/version.md
+++ b/content/riak/kv/2.0.0/setup/upgrading/version.md
@@ -36,7 +36,7 @@ was built with those features in mind. There are official
While we strongly recommend using the newest versions of these clients,
older versions will still work with Riak 2.0, with the drawback that
-those older clients will not able to take advantage of [new features](/riak/kv/2.0.0/introduction) like [data types](/riak/kv/2.0.0/developing/data-types) or the new [Riak Search](/riak/kv/2.0.0/using/reference/search).
+those older clients will not be able to take advantage of [new features](/riak/kv/2.0.0/introduction) like [data types](/riak/kv/2.0.0/developing/data-types) or the new [Riak search](/riak/kv/2.0.0/using/reference/search).
## Bucket Types
@@ -140,7 +140,7 @@ If you decide to upgrade to version 2.0, you can still downgrade your
cluster to an earlier version of Riak if you wish, _unless_ you perform
one of the following actions in your cluster:
-* Index data to be used in conjunction with the new [Riak Search](/riak/kv/2.0.0/using/reference/search).
+* Index data to be used in conjunction with the new [Riak search](/riak/kv/2.0.0/using/reference/search).
* Create _and_ activate one or more [bucket types](/riak/kv/2.0.0/using/reference/bucket-types/). By extension, you will not be able to downgrade your cluster if you have used the following features, both of which rely on bucket types:
- [Strong consistency](/riak/2.0.0/using/reference/strong-consistency)
- [Riak Data Types](/riak/kv/2.0.0/developing/data-types)
@@ -208,7 +208,7 @@ default to a value of `15`, which can cause problems in some clusters.
## Upgrading Search
-Information on upgrading Riak Search to 2.0 can be found in our
+Information on upgrading Riak search to 2.0 can be found in our
[Search upgrade guide](/riak/kv/2.0.0/setup/upgrading/search).
## Migrating from Short Names
diff --git a/content/riak/kv/2.0.0/using/admin/riak-admin.md b/content/riak/kv/2.0.0/using/admin/riak-admin.md
index f4c2a9cdd4..9a198c470e 100644
--- a/content/riak/kv/2.0.0/using/admin/riak-admin.md
+++ b/content/riak/kv/2.0.0/using/admin/riak-admin.md
@@ -590,7 +590,7 @@ riak-admin repair-2i kill
## search
The search command provides sub-commands for various administrative
-work related to the new Riak Search.
+work related to the new Riak search.
```bash
riak-admin search
@@ -639,7 +639,7 @@ riak-admin search switch-to-new-search
```
Switch handling of the HTTP `/solr//select` resource and
-protocol buffer query messages from legacy Riak Search to new Search
+protocol buffer query messages from legacy Riak search to new Search
(Yokozuna).
## services
diff --git a/content/riak/kv/2.0.0/using/cluster-operations/active-anti-entropy.md b/content/riak/kv/2.0.0/using/cluster-operations/active-anti-entropy.md
index 66334f4cfe..c12c4dba7d 100644
--- a/content/riak/kv/2.0.0/using/cluster-operations/active-anti-entropy.md
+++ b/content/riak/kv/2.0.0/using/cluster-operations/active-anti-entropy.md
@@ -262,9 +262,9 @@ AAE-related background tasks, analogous to [open files limit](../../performance/
## AAE and Riak Search
Riak's AAE subsystem works to repair object inconsistencies both
-for normal key/value objects as well as data related to [Riak Search](../../../developing/usage/search). In particular, AAE acts on indexes stored in
+for normal key/value objects as well as data related to [Riak search](../../../developing/usage/search). In particular, AAE acts on indexes stored in
[Solr](http://lucene.apache.org/solr/), the search platform that drives
-Riak Search. Implementation details for AAE and Search can be found in
+Riak search. Implementation details for AAE and Search can be found in
the [Search Details](../../reference/search/#active-anti-entropy-aae)
documentation.
diff --git a/content/riak/kv/2.0.0/using/cluster-operations/inspecting-node.md b/content/riak/kv/2.0.0/using/cluster-operations/inspecting-node.md
index b9895c8380..fd27bcf6a2 100644
--- a/content/riak/kv/2.0.0/using/cluster-operations/inspecting-node.md
+++ b/content/riak/kv/2.0.0/using/cluster-operations/inspecting-node.md
@@ -297,7 +297,7 @@ Stat | Description
`erlydtl_version` | [ErlyDTL](http://github.com/erlydtl/erlydtl)
`riak_control_version` | [Riak Control](http://github.com/basho/riak_control)
`cluster_info_version` | [Cluster Information](http://github.com/basho/cluster_info)
-`riak_search_version` | [Riak Search](http://github.com/basho/riak_search)
+`riak_search_version` | [Riak search](http://github.com/basho/riak_search)
`merge_index_version` | [Merge Index](http://github.com/basho/merge_index)
`riak_kv_version` | [Riak KV](http://github.com/basho/riak_kv)
`sidejob_version` | [Sidejob](http://github.com/basho/sidejob)
@@ -326,17 +326,17 @@ Stat | Description
### Riak Search Statistics
-The following statistics related to Riak Search message queues are
+The following statistics related to Riak search message queues are
available.
Stat | Description
-----------------------------|---------------------------------------------------
-`riak_search_vnodeq_max` | Maximum number of unprocessed messages all virtual node (vnode) message queues in the Riak Search subsystem have received on this node in the last minute
-`riak_search_vnodeq_mean` | Mean number of unprocessed messages all vnode message queues in the Riak Search subsystem have received on this node in the last minute
-`riak_search_vnodeq_median` | Median number of unprocessed messages all vnode message queues in the Riak Search subsystem have received on this node in the last minute
-`riak_search_vnodeq_min` | Minimum number of unprocessed messages all vnode message queues in the Riak Search subsystem have received on this node in the last minute
-`riak_search_vnodeq_total` | Total number of unprocessed messages all vnode message queues in the Riak Search subsystem have received on this node since it was started
-`riak_search_vnodes_running` | Total number of vnodes currently running in the Riak Search subsystem
+`riak_search_vnodeq_max` | Maximum number of unprocessed messages all virtual node (vnode) message queues in the Riak search subsystem have received on this node in the last minute
+`riak_search_vnodeq_mean` | Mean number of unprocessed messages all vnode message queues in the Riak search subsystem have received on this node in the last minute
+`riak_search_vnodeq_median` | Median number of unprocessed messages all vnode message queues in the Riak search subsystem have received on this node in the last minute
+`riak_search_vnodeq_min` | Minimum number of unprocessed messages all vnode message queues in the Riak search subsystem have received on this node in the last minute
+`riak_search_vnodeq_total` | Total number of unprocessed messages all vnode message queues in the Riak search subsystem have received on this node since it was started
+`riak_search_vnodes_running` | Total number of vnodes currently running in the Riak search subsystem
Note that under ideal operation and with the exception of
`riak_search_vnodes_running` these statistics should contain low values
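The queue statistics above lend themselves to simple threshold monitoring. A minimal sketch in Python, assuming the stats have already been fetched into a dict (for example from the node's `/stats` HTTP endpoint); the threshold value is an illustrative assumption, not a recommended default:

```python
# Sketch: flag a Solr indexing backlog from the riak_search_vnodeq_*
# statistics described above. The keys mirror the stat names; the
# threshold is illustrative only.
def search_backlog_warning(stats, max_threshold=100):
    """Return True if the per-minute max queue depth exceeds the threshold."""
    return stats.get("riak_search_vnodeq_max", 0) > max_threshold

sample = {"riak_search_vnodeq_max": 250, "riak_search_vnodeq_mean": 40}
print(search_backlog_warning(sample))  # a queue this deep warrants a look
```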
@@ -449,7 +449,7 @@ Check | Description
`ring_membership` | Cluster membership validity
`ring_preflists` | Check if the ring satisfies `n_val`
`ring_size` | Check if the ring size is valid
-`search` | Check whether Riak Search is enabled on all nodes
+`search` | Check whether Riak search is enabled on all nodes
The `--level` flag enables you to specify the log level and thus to
filter messages based on type. You can pass in any of the message types
diff --git a/content/riak/kv/2.0.0/using/cluster-operations/strong-consistency.md b/content/riak/kv/2.0.0/using/cluster-operations/strong-consistency.md
index ccc99d77b5..4df665dd11 100644
--- a/content/riak/kv/2.0.0/using/cluster-operations/strong-consistency.md
+++ b/content/riak/kv/2.0.0/using/cluster-operations/strong-consistency.md
@@ -14,7 +14,7 @@ toc: true
Please Note:
-Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
## Monitoring Strong Consistency
diff --git a/content/riak/kv/2.0.0/using/reference/handoff.md b/content/riak/kv/2.0.0/using/reference/handoff.md
index 82604835f3..0feb968913 100644
--- a/content/riak/kv/2.0.0/using/reference/handoff.md
+++ b/content/riak/kv/2.0.0/using/reference/handoff.md
@@ -121,7 +121,7 @@ handoff.use_background_manager = on
### Maximum Rejects
-If you're using Riak features such as [Riak Search](/riak/kv/2.0.0/developing/usage/search/),
+If you're using Riak features such as [Riak search](/riak/kv/2.0.0/developing/usage/search/),
those subsystems can block handoff of primary key/value data, i.e. data
that you interact with via normal reads and writes.
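The configuration reference notes that the approximate maximum blocking time is `handoff.max_rejects` multiplied by `vnode_management_timer`. A quick sketch of that arithmetic (the values shown are illustrative, not documented defaults):

```python
# Approximate upper bound on how long a secondary subsystem such as
# Riak search can block handoff of a vnode's primary data:
# handoff.max_rejects x vnode_management_timer.
def max_handoff_block_seconds(max_rejects, vnode_management_timer_s):
    return max_rejects * vnode_management_timer_s

# e.g. 6 rejects with a 10s management timer -> blocked for up to ~60s
print(max_handoff_block_seconds(6, 10))
```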
diff --git a/content/riak/kv/2.0.0/using/reference/search.md b/content/riak/kv/2.0.0/using/reference/search.md
index e5a284731a..b695b766fd 100644
--- a/content/riak/kv/2.0.0/using/reference/search.md
+++ b/content/riak/kv/2.0.0/using/reference/search.md
@@ -19,14 +19,14 @@ aliases:
> **Note on Search 2.0 vs. Legacy Search**
>
-> This document refers to the new Riak Search 2.0 with
+> This document refers to the new Riak search 2.0 with
[Solr](http://lucene.apache.org/solr/) integration (codenamed
-Yokozuna). For information about the deprecated Riak Search, visit [the old Using Riak Search docs](http://docs.basho.com/riak/1.4.10/dev/using/search/).
+Yokozuna). For information about the deprecated Riak search, visit [the old Using Riak search docs](http://docs.basho.com/riak/1.4.10/dev/using/search/).
-The project that implements Riak Search is codenamed Yokozuna. This is a
+The project that implements Riak search is codenamed Yokozuna. This is a
more detailed overview of the concepts and reasons behind the design of
Yokozuna, for those interested. If you're simply looking to use Riak
-Search, you should check out the [Using Search](/riak/kv/2.0.0/developing/usage/search) document.
+search, you should check out the [Using Search](/riak/kv/2.0.0/developing/usage/search) document.

@@ -35,30 +35,30 @@ Search, you should check out the [Using Search](/riak/kv/2.0.0/developing/usage/
In Erlang OTP, an "application" is a group of modules and Erlang
processes which together perform a specific task. The word application
is confusing because most people think of an application as an entire
-program such as Emacs or Photoshop. But Riak Search is just a sub-system
+program such as Emacs or Photoshop. But Riak search is just a sub-system
in Riak itself. Erlang applications are often stand-alone, but Riak
-Search is more like an appendage of Riak. It requires other subsystems
+search is more like an appendage of Riak. It requires other subsystems
like Riak Core and KV, but also extends their functionality by providing
search capabilities for KV data.
-The purpose of Riak Search is to bring more sophisticated and robust
+The purpose of Riak search is to bring more sophisticated and robust
query and search support to Riak. Many people consider Lucene and
programs built on top of it, such as Solr, as the standard for
open-source search. There are many successful applications built on
Lucene/Solr, and it sets the standard for the feature set that
developers and users expect. Meanwhile, Riak has a great story as a
-highly-available, distributed key/value store. Riak Search takes
+highly-available, distributed key/value store. Riak search takes
advantage of the fact that Riak already knows how to do the distributed
bits, combining its feature set with that of Solr, taking advantage of
the strengths of each.
-Riak Search is a mediator between Riak and Solr. There is nothing
+Riak search is a mediator between Riak and Solr. There is nothing
stopping a user from deploying these two programs separately, but this
would leave the user responsible for the glue between them. That glue
can be tricky to write. It requires dealing with monitoring, querying,
indexing, and dissemination of information.
-Unlike Solr by itself, Riak Search knows how to do all of the following:
+Unlike Solr by itself, Riak search knows how to do all of the following:
* Listen for changes in key/value (KV) data and to make the appropriate
changes to indexes that live in Solr. It also knows how to take a user
@@ -75,13 +75,13 @@ system (OS) process running a JVM which hosts Solr on the Jetty
application server. This OS process is a child of the Erlang OS process
running Riak.
-Riak Search has a `gen_server` process which monitors the JVM OS
+Riak search has a `gen_server` process which monitors the JVM OS
process. The code for this server is in `yz_solr_proc`. When the JVM
process crashes, this server crashes, causing its supervisor to restart
it.
If there is more than 1 restart in 45 seconds, the entire Riak node will
-be shut down. If Riak Search is enabled and Solr cannot function for
+be shut down. If Riak search is enabled and Solr cannot function for
some reason, the Riak node needs to go down so that the user will notice
and take corrective action.
@@ -91,7 +91,7 @@ This double monitoring along with the crash semantics means that neither
process may exist without the other. They are either both up or both
down.
-All other communication between Riak Search and Solr is performed via
+All other communication between Riak search and Solr is performed via
HTTP, including querying, indexing, and administration commands. The
ibrowse Erlang HTTP client is used to manage these communications as
both it and the Jetty container hosting Solr pool HTTP connections,
@@ -105,11 +105,11 @@ contains index entries for objects. Each such index maintains its own
set of files on disk---a critical difference from Riak KV, in which a
bucket is a purely logical entity and not physically disjoint at all. A
Solr index requires significantly less disk space than the corresponding
-legacy Riak Search index, depending on the Solr schema used.
+legacy Riak search index, depending on the Solr schema used.
Indexes may be associated with zero or more buckets. At creation time,
however, each index has no associated buckets---unlike the legacy Riak
-Search, indexes in the new Riak Search do not implicitly create bucket
+search, indexes in the new Riak search do not implicitly create bucket
associations, meaning that this must be done as a separate configuration
step.
@@ -141,7 +141,7 @@ flat collection of field-value pairs. "Flat" here means that a field's
value cannot be a nested structure of field-value pairs; the values are
treated as-is (non-composite is another way to say it).
-Because of this mismatch between KV and Solr, Riak Search must act as a
+Because of this mismatch between KV and Solr, Riak search must act as a
mediator between the two, meaning it must have a way to inspect a KV
object and create a structure which Solr can ingest for indexing. In
Solr this structure is called a **document**. This task of creating a
diff --git a/content/riak/kv/2.0.0/using/reference/secondary-indexes.md b/content/riak/kv/2.0.0/using/reference/secondary-indexes.md
index b064fc593d..3b87715ce6 100644
--- a/content/riak/kv/2.0.0/using/reference/secondary-indexes.md
+++ b/content/riak/kv/2.0.0/using/reference/secondary-indexes.md
@@ -18,11 +18,11 @@ aliases:
[usage bucket types]: /riak/kv/2.0.0/developing/usage/bucket-types
[use ref strong consistency]: /riak/2.1.3/using/reference/strong-consistency
-> **Note: Riak Search preferred for querying**
+> **Note: Riak search preferred for querying**
>
> If you're interested in non-primary-key-based querying in Riak, i.e. if
you're looking to go beyond straightforward K/V operations, we now
-recommend [Riak Search](/riak/kv/2.0.0/developing/usage/search/) rather than secondary indexes for a variety of reasons. Riak Search has a far more capacious querying API and can be used with all of Riak's storage backends.
+recommend [Riak search](/riak/kv/2.0.0/developing/usage/search/) rather than secondary indexes for a variety of reasons. Riak search has a far more capacious querying API and can be used with all of Riak's storage backends.
This document provides implementation and other details for Riak's
[secondary indexes](/riak/kv/2.0.0/developing/usage/secondary-indexes/) \(2i) feature.
diff --git a/content/riak/kv/2.0.0/using/reference/statistics-monitoring.md b/content/riak/kv/2.0.0/using/reference/statistics-monitoring.md
index 621b29b5c6..a4f71d8790 100644
--- a/content/riak/kv/2.0.0/using/reference/statistics-monitoring.md
+++ b/content/riak/kv/2.0.0/using/reference/statistics-monitoring.md
@@ -134,7 +134,7 @@ Metric | Also | Notes
:------|:-----|:------------------
```node_get_fsm_siblings_mean``` | ```_median```, ```_95```, ```_99```, ```_100``` | Number of siblings encountered during all GET operations by this node within the last minute. Watch for abnormally high sibling counts, especially max ones.
```node_get_fsm_objsize_mean``` | ```_median```, ```_95```, ```_99```, ```_100``` | Object size encountered by this node within the last minute. Abnormally large objects (especially paired with high sibling counts) can indicate sibling explosion.
-```riak_search_vnodeq_mean``` | ```_median```, ```_95```, ```_99```, ```_100``` | Number of unprocessed messages in the vnode message queues of the Riak Search subsystem on this node in the last minute. The queues give you an idea of how backed up Solr is getting.
+```riak_search_vnodeq_mean``` | ```_median```, ```_95```, ```_99```, ```_100``` | Number of unprocessed messages in the vnode message queues of the Riak search subsystem on this node in the last minute. The queues give you an idea of how backed up Solr is getting.
```search_index_fail_one``` | | Number of "Failed to index document" errors Search encountered for the last minute
```pbc_active``` | | Number of currently active protocol buffer connections
```pbc_connects``` | | Number of new protocol buffer connections established during the last minute
diff --git a/content/riak/kv/2.0.0/using/repair-recovery/errors.md b/content/riak/kv/2.0.0/using/repair-recovery/errors.md
index 704918f000..9de0866512 100644
--- a/content/riak/kv/2.0.0/using/repair-recovery/errors.md
+++ b/content/riak/kv/2.0.0/using/repair-recovery/errors.md
@@ -328,7 +328,7 @@ exit with reason bad return value: {error,eaddrinuse} in context start_error | A
exited with reason: eaddrnotavail in gen_server:init_it/6 line 320 | An error like this example can result when Riak cannot bind to the addresses specified in the configuration. In this case, you should verify HTTP and Protocol Buffers addresses in `app.config` and ensure that the ports being used are not in the privileged (1-1024) range as the `riak` user will not have access to such ports.
gen_server riak_core_capability terminated with reason: no function clause matching orddict:fetch('riak@192.168.2.2', []) line 72 | Error output like this example can indicate that a previously running Riak node with an original `-name` value in `vm.args` has been modified by simply changing the value in `vm.args` and not properly through `riak-admin cluster replace`.
** Configuration error: [FRAMEWORK-MIB]: missing context.conf file => generating a default file | This error is commonly encountered when starting Riak Enterprise without prior [SNMP](/riak/kv/2.0.0/using/reference/snmp) configuration.
-RPC to 'node@example.com' failed: {'EXIT', {badarg, [{ets,lookup, [schema_table,<<"search-example">>], []} {riak_search_config,get_schema,1, [{file,"src/riak_search_config.erl"}, {line,69}]} ...| This error can be caused when attempting to use Riak Search without first enabling it in each node's `app.config`. See the [configuration files][config reference] documentation for more information on enabling Riak Search.
+RPC to 'node@example.com' failed: {'EXIT', {badarg, [{ets,lookup, [schema_table,<<"search-example">>], []} {riak_search_config,get_schema,1, [{file,"src/riak_search_config.erl"}, {line,69}]} ...| This error can be caused when attempting to use Riak search without first enabling it in each node's `app.config`. See the [configuration files][config reference] documentation for more information on enabling Riak search.
### More
diff --git a/content/riak/kv/2.0.0/using/repair-recovery/repairs.md b/content/riak/kv/2.0.0/using/repair-recovery/repairs.md
index cbd1daad97..5efa633978 100644
--- a/content/riak/kv/2.0.0/using/repair-recovery/repairs.md
+++ b/content/riak/kv/2.0.0/using/repair-recovery/repairs.md
@@ -57,7 +57,7 @@ riak-admin repair-2i kill
## Repairing Search Indexes
-Riak Search indexes currently have no form of anti-entropy (such as read-repair). Furthermore, for performance and load balancing reasons, Search reads from one random node. This means that when a replica loss has occurred, inconsistent results may be returned.
+Riak search indexes currently have no form of anti-entropy (such as read-repair). Furthermore, for performance and load balancing reasons, Search reads from one random node. This means that when a replica loss has occurred, inconsistent results may be returned.
### Running a Repair
diff --git a/content/riak/kv/2.0.0/using/repair-recovery/secondary-indexes.md b/content/riak/kv/2.0.0/using/repair-recovery/secondary-indexes.md
index 9645588dd1..a4d49eb84e 100644
--- a/content/riak/kv/2.0.0/using/repair-recovery/secondary-indexes.md
+++ b/content/riak/kv/2.0.0/using/repair-recovery/secondary-indexes.md
@@ -51,7 +51,7 @@ riak-admin repair-2i kill
## Repairing Search Indexes
-Riak Search indexes currently have no form of anti-entropy (such as read-repair). Furthermore, for performance and load balancing reasons, Search reads from one random node. This means that when a replica loss has occurred, inconsistent results may be returned.
+Riak search indexes currently have no form of anti-entropy (such as read-repair). Furthermore, for performance and load balancing reasons, Search reads from one random node. This means that when a replica loss has occurred, inconsistent results may be returned.
### Running a Repair
diff --git a/content/riak/kv/2.0.0/using/security/basics.md b/content/riak/kv/2.0.0/using/security/basics.md
index d6b54b669a..0a84c0374d 100644
--- a/content/riak/kv/2.0.0/using/security/basics.md
+++ b/content/riak/kv/2.0.0/using/security/basics.md
@@ -44,7 +44,7 @@ when turning on Riak security. Missing one of these steps will almost
certainly break your application, so make sure that you have done each
of the following **before** enabling security:
-1. Make certain that the original Riak Search (version 1) and link
+1. Make certain that the original Riak search (version 1) and link
walking are not required. Enabling security will break this
functionality. If you wish to use security and Search together, you
will need to use the [new Search feature](/riak/kv/2.0.0/developing/usage/search/).
@@ -487,11 +487,11 @@ Permission | Operation
### Search Query Permission (Riak Search version 1)
Security is incompatible with the original (and now deprecated) Riak
-Search. Riak Search version 1 will stop working if security is enabled.
+search. Riak search version 1 will stop working if security is enabled.
### Search Query Permissions (Riak Search version 2, aka Yokozuna)
-If you are using the new Riak Search, i.e. the Solr-compatible search
+If you are using the new Riak search, i.e. the Solr-compatible search
capabilities included with Riak versions 2.0 and greater, the following
search-related permissions can be granted/revoked:
@@ -508,8 +508,8 @@ disabled, you will get the following error:
>
> `{error,{unknown_permission,"search.query"}}`
>
-> More information on Riak Search and how to enable it can be found in the
-[Riak Search Settings](/riak/kv/2.0.0/configuring/search/) document.
+> More information on Riak search and how to enable it can be found in the
+[Riak search Settings](/riak/kv/2.0.0/configuring/search/) document.
#### Usage Examples
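As a sketch of granting one of these permissions (the index name `famous` and the user `alice` are hypothetical):

```
riak-admin security grant search.query on index famous to alice
```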
diff --git a/content/riak/kv/2.0.1/configuring/reference.md b/content/riak/kv/2.0.1/configuring/reference.md
index 0f654cee76..ad1963fe4e 100644
--- a/content/riak/kv/2.0.1/configuring/reference.md
+++ b/content/riak/kv/2.0.1/configuring/reference.md
@@ -273,7 +273,7 @@ stored
search.root_dir |
-The root directory for Riak Search, under which index data and
+ | The root directory for Riak search, under which index data and
configuration is stored. |
./data/yz |
@@ -1386,7 +1386,7 @@ Configurable parameters for intra-cluster, i.e. inter-node, [handoff][cluster op
handoff.max_rejects |
The maximum number of times that a secondary system within Riak,
-such as Riak Search, can block handoff
+such as Riak search, can block handoff
of primary key/value data. The approximate maximum duration that a vnode
can be blocked can be determined by multiplying this setting by
vnode_management_timer . If you want to prevent handoff from
@@ -1630,7 +1630,7 @@ if the JMX server crashes. |
> **Please Note:**
>
-> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
Riak's strong consistency feature has a variety of tunable parameters
that allow you to enable and disable strong consistency, modify the
@@ -2053,7 +2053,7 @@ only in Riak Enterprise 2.0 and later.
#### Upgrading Riak Search with `advanced.config`
-If you are upgrading to Riak 2.x and wish to upgrade to the new [Riak Search][use ref search]\(codename Yokozuna), you will need to enable
+If you are upgrading to Riak 2.x and wish to upgrade to the new [Riak search][use ref search] \(codenamed Yokozuna), you will need to enable
legacy Search while the upgrade is underway. You can add the following
snippet to your `advanced.config` configuration to do so:
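A minimal sketch of such a snippet, assuming the legacy `riak_search` application flag (verify against your version's configuration reference):

```erlang
%% advanced.config (fragment) -- keeps legacy Search running during the upgrade
[
  {riak_search, [
    {enabled, true}
  ]}
].
```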
diff --git a/content/riak/kv/2.0.1/configuring/search.md b/content/riak/kv/2.0.1/configuring/search.md
index dc458a36ad..bdbbbb393f 100644
--- a/content/riak/kv/2.0.1/configuring/search.md
+++ b/content/riak/kv/2.0.1/configuring/search.md
@@ -26,9 +26,9 @@ aliases:
> **Note on Search 2.0 vs. Legacy Search**
>
-> This document refers to the new Riak Search 2.0 with
+> This document refers to the new Riak search 2.0 with
[Solr](http://lucene.apache.org/solr/) integration (codenamed
-Yokozuna). For information about the deprecated Riak Search, visit [the old Riak Search Settings](http://docs.basho.com/riak/1.4.8/ops/advanced/configs/search/).
+Yokozuna). For information about the deprecated Riak search, visit [the old Riak search Settings](http://docs.basho.com/riak/1.4.8/ops/advanced/configs/search/).
This document covers Riak's Search subsystem from an
operational perspective. If you are looking for more developer-focused
@@ -41,7 +41,7 @@ docs, we recommend the following:
## Enabling Riak Search
-Although Riak Search is integrated into Riak and requires no special
+Although Riak search is integrated into Riak and requires no special
installation, it is not enabled by default. You must enable it in every
node's [configuration files][config reference] as follows:
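A minimal sketch of the relevant `riak.conf` line, matching the `search` parameter listed in the table of settings:

```
search = on
```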
@@ -68,7 +68,7 @@ optional. A list of these parameters can also be found in our
Field | Default | Valid values | Description
:-----|:--------|:-------------|:-----------
`search` | `off` | `on` or `off` | Enable or disable Search
-`search.anti_entropy.data_dir` | `./data/yz_anti_entropy` | Directory | The directory in which Riak Search stores files related to [active anti-entropy][glossary aae]
+`search.anti_entropy.data_dir` | `./data/yz_anti_entropy` | Directory | The directory in which Riak search stores files related to [active anti-entropy][glossary aae]
`search.root_dir` | `./data/yz` | Directory | The root directory in which index data and configuration is stored
`search.solr.start_timeout` | `30s` | Integer with time units (eg. 2m) | How long Riak will wait for Solr to start (attempts twice before shutdown). Values lower than 1s will be rounded up to 1s.
`search.solr.port` | `8093` | Integer | The port number to which Solr binds (note: binds on every interface)
@@ -81,7 +81,7 @@ cause Solr to require more time to start.
## Solr JVM and Ports
-Riak Search runs one Solr process per node to manage its indexing and
+Riak search runs one Solr process per node to manage its indexing and
search functionality. While the underlying project, Yokozuna, manages
index distribution, node coverage for queries, active anti-entropy
(AAE), and JVM process management, you should provide plenty of RAM and disk space for running both Riak and the JVM running Solr. We recommend a minimum of 6GB of RAM per node.
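Heap sizing for that JVM is controlled through the Solr options in `riak.conf`; a sketch (the flags shown are illustrative, not authoritative defaults):

```
search.solr.jvm_options = -d64 -Xms1g -Xmx1g -XX:+UseCompressedOops
```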
diff --git a/content/riak/kv/2.0.1/configuring/strong-consistency.md b/content/riak/kv/2.0.1/configuring/strong-consistency.md
index a0afa009a6..90f009ef3d 100644
--- a/content/riak/kv/2.0.1/configuring/strong-consistency.md
+++ b/content/riak/kv/2.0.1/configuring/strong-consistency.md
@@ -38,7 +38,7 @@ toc: true
> **Please Note:**
>
-> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
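With that caveat in mind, enabling the subsystem is a single `riak.conf` setting; a sketch:

```
strong_consistency = on
```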
This document provides information on configuring and monitoring a Riak
cluster's optional strong consistency subsystem. Documentation for
diff --git a/content/riak/kv/2.0.1/developing/api/http/delete-search-index.md b/content/riak/kv/2.0.1/developing/api/http/delete-search-index.md
index 454621e290..dca2f95099 100644
--- a/content/riak/kv/2.0.1/developing/api/http/delete-search-index.md
+++ b/content/riak/kv/2.0.1/developing/api/http/delete-search-index.md
@@ -15,7 +15,7 @@ aliases:
- /riak/kv/2.0.1/dev/references/http/delete-search-index
---
-Deletes a Riak Search index.
+Deletes a Riak search index.
## Request
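The request takes the form below (a sketch; `<index_name>` is a placeholder):

```
DELETE /search/index/<index_name>
```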
diff --git a/content/riak/kv/2.0.1/developing/api/http/fetch-search-index.md b/content/riak/kv/2.0.1/developing/api/http/fetch-search-index.md
index d86c56fede..203106db9c 100644
--- a/content/riak/kv/2.0.1/developing/api/http/fetch-search-index.md
+++ b/content/riak/kv/2.0.1/developing/api/http/fetch-search-index.md
@@ -15,7 +15,7 @@ aliases:
- /riak/kv/2.0.1/dev/references/http/fetch-search-index
---
-Retrieves information about a Riak Search [index](/riak/kv/2.0.1/developing/usage/search/#simple-setup).
+Retrieves information about a Riak search [index](/riak/kv/2.0.1/developing/usage/search/#simple-setup).
## Request
diff --git a/content/riak/kv/2.0.1/developing/api/http/fetch-search-schema.md b/content/riak/kv/2.0.1/developing/api/http/fetch-search-schema.md
index 7a737ec5f7..d42c8a8328 100644
--- a/content/riak/kv/2.0.1/developing/api/http/fetch-search-schema.md
+++ b/content/riak/kv/2.0.1/developing/api/http/fetch-search-schema.md
@@ -35,4 +35,4 @@ GET /search/schema/
## Response
If the schema is found, Riak will return the contents of the schema as
-XML (all Riak Search schemas are XML).
+XML (all Riak search schemas are XML).
diff --git a/content/riak/kv/2.0.1/developing/api/http/search-index-info.md b/content/riak/kv/2.0.1/developing/api/http/search-index-info.md
index bbcfca6dc8..676c85bedb 100644
--- a/content/riak/kv/2.0.1/developing/api/http/search-index-info.md
+++ b/content/riak/kv/2.0.1/developing/api/http/search-index-info.md
@@ -47,6 +47,6 @@ Below is the example output if there is one Search index, called
#### Typical Error Codes
-* `404 Object Not Found` --- Typically returned if Riak Search is not
+* `404 Object Not Found` --- Typically returned if Riak search is not
currently enabled on the node
* `503 Service Unavailable` --- The request timed out internally
diff --git a/content/riak/kv/2.0.1/developing/api/http/store-search-index.md b/content/riak/kv/2.0.1/developing/api/http/store-search-index.md
index e0e1f2571c..2093da75e0 100644
--- a/content/riak/kv/2.0.1/developing/api/http/store-search-index.md
+++ b/content/riak/kv/2.0.1/developing/api/http/store-search-index.md
@@ -15,7 +15,7 @@ aliases:
- /riak/kv/2.0.1/dev/references/http/store-search-index
---
-Creates a new Riak Search [index](/riak/kv/2.0.1/developing/usage/search/#simple-setup).
+Creates a new Riak search [index](/riak/kv/2.0.1/developing/usage/search/#simple-setup).
## Request
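The request takes the form below (a sketch; `<index_name>` is a placeholder, and `_yz_default` is the stock schema):

```
PUT /search/index/<index_name>
Content-Type: application/json

{"schema": "_yz_default"}
```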
diff --git a/content/riak/kv/2.0.1/developing/api/http/store-search-schema.md b/content/riak/kv/2.0.1/developing/api/http/store-search-schema.md
index ae5d3ebdeb..50b90b3d36 100644
--- a/content/riak/kv/2.0.1/developing/api/http/store-search-schema.md
+++ b/content/riak/kv/2.0.1/developing/api/http/store-search-schema.md
@@ -44,7 +44,7 @@ curl -XPUT http://localhost:8098/search/schema/my_custom_schema \
* `400 Bad Request` --- The schema cannot be created because there is
something wrong with the schema itself, e.g. an XML formatting error
- that makes Riak Search unable to parse the schema
+ that makes Riak search unable to parse the schema
* `409 Conflict` --- The schema cannot be created because there is
already a schema with that name
* `503 Service Unavailable` --- The request timed out internally
diff --git a/content/riak/kv/2.0.1/developing/api/protocol-buffers/yz-index-get.md b/content/riak/kv/2.0.1/developing/api/protocol-buffers/yz-index-get.md
index 69f907e67d..d8c99499f5 100644
--- a/content/riak/kv/2.0.1/developing/api/protocol-buffers/yz-index-get.md
+++ b/content/riak/kv/2.0.1/developing/api/protocol-buffers/yz-index-get.md
@@ -15,7 +15,7 @@ aliases:
- /riak/kv/2.0.1/dev/references/protocol-buffers/yz-index-get
---
-Retrieve a search index from Riak Search.
+Retrieve a search index from Riak search.
## Request
diff --git a/content/riak/kv/2.0.1/developing/api/protocol-buffers/yz-schema-get.md b/content/riak/kv/2.0.1/developing/api/protocol-buffers/yz-schema-get.md
index f97bd43969..8b5aa80ccc 100644
--- a/content/riak/kv/2.0.1/developing/api/protocol-buffers/yz-schema-get.md
+++ b/content/riak/kv/2.0.1/developing/api/protocol-buffers/yz-schema-get.md
@@ -15,7 +15,7 @@ aliases:
- /riak/kv/2.0.1/dev/references/protocol-buffers/yz-schema-get
---
-Fetch a [search schema](/riak/kv/2.0.1/developing/usage/search-schemas) from Riak Search.
+Fetch a [search schema](/riak/kv/2.0.1/developing/usage/search-schemas) from Riak search.
## Request
diff --git a/content/riak/kv/2.0.1/developing/app-guide.md b/content/riak/kv/2.0.1/developing/app-guide.md
index e9ddd81831..6bdb19e365 100644
--- a/content/riak/kv/2.0.1/developing/app-guide.md
+++ b/content/riak/kv/2.0.1/developing/app-guide.md
@@ -147,22 +147,22 @@ well as relevant links to Basho documentation.
## Search
-Riak Search provides you with [Apache
+Riak search provides you with [Apache
Solr](http://lucene.apache.org/solr/)-powered full-text indexing and
querying on top of the scalability, fault tolerance, and operational
-simplicity of Riak. Our motto for Riak Search: **Write it like Riak.
+simplicity of Riak. Our motto for Riak search: **Write it like Riak.
Query it like Solr**. That is, you can store objects in Riak [like normal][usage create objects] and run full-text queries on those objects later on
using the Solr API.
-* [Using Search][usage search] --- Getting started with Riak Search
+* [Using Search][usage search] --- Getting started with Riak search
* [Search Details][use ref search] --- A detailed overview of the concepts and design
- consideration behind Riak Search
+  considerations behind Riak search
* [Search Schema][usage search schema] --- How to create custom schemas for extracting data
- from Riak Search
+ from Riak search
### When to Use Search
-* **When you need a rich querying API** --- Riak Search gives you access
+* **When you need a rich querying API** --- Riak search gives you access
to the entirety of [Solr](http://lucene.apache.org/solr/)'s extremely
broad API, which enables you to query on the basis of wildcards,
strings, booleans, geolocation, ranges, language-specific fulltext,
@@ -214,7 +214,7 @@ own.
> **Note**:
>
-> Riak Data Types can be used in conjunction with Riak Search,
+> Riak Data Types can be used in conjunction with Riak search,
meaning that the data stored in counters, sets, and maps can be indexed
and searched just like any other data in Riak. Documentation on Data
Types and Search is coming soon.
@@ -278,13 +278,13 @@ or you can write and run your own MapReduce jobs in
### When Not to Use MapReduce
* **When another Riak feature will do** --- Before even considering
- using MapReduce, you should thoroughly investigate [Riak Search][usage search] or [secondary indexes][usage 2i] as possible
+ using MapReduce, you should thoroughly investigate [Riak search][usage search] or [secondary indexes][usage 2i] as possible
solutions to your needs.
In general, you should not think of MapReduce as, for example, Hadoop
within Riak. While it can be useful for certain types of
non-primary-key-based queries, it is neither a "Big Data" processing
-tool nor an indexing mechanism nor a replacement for [Riak Search][usage search]. If you do need a tool like Hadoop or Apache Spark, you should
+tool nor an indexing mechanism nor a replacement for [Riak search][usage search]. If you do need a tool like Hadoop or Apache Spark, you should
consider using Riak in conjunction with a more suitable data processing
tool.
@@ -295,7 +295,7 @@ following problem: how do I know which keys I should look for? Secondary
indexes (2i) provide a solution to this problem, enabling you to tag
objects with either binary or integer metadata and then query Riak for
all of the keys that share specific tags. 2i is especially useful if
-you're storing binary data that is opaque to features like [Riak Search][usage search].
+you're storing binary data that is opaque to features like [Riak search][usage search].
* [Using Secondary Indexes][usage 2i] --- A general guide to using 2i, along
with code samples and information on 2i features like pagination,
@@ -323,7 +323,7 @@ you're storing binary data that is opaque to features like [Riak Search][usage s
One thing to always bear in mind is that Riak enables you to mix and
match a wide variety of approaches in a single cluster. You can use
basic CRUD operations for some of your data, index some of your data to
-be queried by Riak Search, use Riak Data Types for another subset, etc.
+be queried by Riak search, use Riak Data Types for another subset, etc.
You are always free to use a wide array of Riak features---or you can
use none at all and stick to key/value operations.
diff --git a/content/riak/kv/2.0.1/developing/app-guide/advanced-mapreduce.md b/content/riak/kv/2.0.1/developing/app-guide/advanced-mapreduce.md
index bb25b981f5..f5fa539c60 100644
--- a/content/riak/kv/2.0.1/developing/app-guide/advanced-mapreduce.md
+++ b/content/riak/kv/2.0.1/developing/app-guide/advanced-mapreduce.md
@@ -74,7 +74,7 @@ MapReduce should generally be treated as a fallback rather than a
standard part of an application. There are often ways to model data
such that dynamic queries become single key retrievals, which are
dramatically faster and more reliable in Riak, and tools such as Riak
-Search and 2i are simpler to use and may place less strain on a
+search and 2i are simpler to use and may place less strain on a
cluster.
### R=1
diff --git a/content/riak/kv/2.0.1/developing/app-guide/strong-consistency.md b/content/riak/kv/2.0.1/developing/app-guide/strong-consistency.md
index 8e9b77161b..04c4e6e7f5 100644
--- a/content/riak/kv/2.0.1/developing/app-guide/strong-consistency.md
+++ b/content/riak/kv/2.0.1/developing/app-guide/strong-consistency.md
@@ -37,7 +37,7 @@ aliases:
> **Please Note:**
>
-> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
In versions 2.0 and later, Riak allows you to create buckets that
provide [strong consistency][use ref strong consistency] guarantees for the data stored within
diff --git a/content/riak/kv/2.0.1/developing/data-modeling.md b/content/riak/kv/2.0.1/developing/data-modeling.md
index 57a4bd2ac9..699d80040a 100644
--- a/content/riak/kv/2.0.1/developing/data-modeling.md
+++ b/content/riak/kv/2.0.1/developing/data-modeling.md
@@ -140,7 +140,7 @@ For storing log data from different systems, you could use unique
buckets for each system (e.g. `system1_log_data`, `system2_log_data`,
etc.) and write associated logs to the corresponding buckets. To
analyze that data, you could use Riak's MapReduce system for aggregation
-tasks, such as summing the counts of records for a date or Riak Search
+tasks, such as summing the counts of records for a date or Riak search
for more robust, text-based queries.
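As a toy illustration of the aggregation described above (plain Python, not Riak's actual MapReduce API; the bucket name and record shapes are invented for the example), summing the counts of records per date looks like:

```python
from collections import Counter

# Hypothetical log entries as they might be stored in a system1_log_data bucket.
system1_log_data = [
    {"date": "2024-01-01", "msg": "boot"},
    {"date": "2024-01-01", "msg": "login"},
    {"date": "2024-01-02", "msg": "error"},
]

# Map phase: emit each record's date; reduce phase: sum the counts per date.
counts_by_date = Counter(entry["date"] for entry in system1_log_data)
print(counts_by_date["2024-01-01"])  # 2
```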
### Log Data Complex Case
@@ -224,7 +224,7 @@ For simple retrieval of a specific account, a user ID (plus perhaps a
secondary index on a username or email) is enough. If you foresee the
need to make queries on additional user attributes (e.g. creation time,
user type, or region), plan ahead and either set up additional secondary
-indexes or consider using [Riak Search](/riak/kv/2.0.1/developing/usage/search/) to index the JSON
+indexes or consider using [Riak search](/riak/kv/2.0.1/developing/usage/search/) to index the JSON
contents of the user account.
### User Accounts Community Examples
@@ -310,7 +310,7 @@ In Riak, you can store content of any kind, from HTML files to plain
text to JSON or XML or another document type entirely. Keep in mind that
data in Riak is opaque, with the exception of [Riak Data Types](/riak/kv/2.0.1/developing/data-types),
and so Riak won't "know" about the object unless it is indexed
-[using Riak Search](/riak/kv/2.0.1/developing/usage/search/) or [using secondary indexes](/riak/kv/2.0.1/developing/usage/secondary-indexes/).
+[using Riak search](/riak/kv/2.0.1/developing/usage/search/) or [using secondary indexes](/riak/kv/2.0.1/developing/usage/secondary-indexes/).
### Articles et al Complex Case
@@ -329,7 +329,7 @@ with comments would require your application to call from the posts
and comments buckets to assemble the view.
Other possible cases may involve performing operations on content beyond
-key/value pairs. [Riak Search](/riak/kv/2.0.1/developing/usage/search/) is recommended for use cases
+key/value pairs. [Riak search](/riak/kv/2.0.1/developing/usage/search/) is recommended for use cases
involving full-text search. For lighter-weight querying,
[using secondary indexes](/riak/kv/2.0.1/developing/usage/secondary-indexes/) \(2i) enables you to add metadata to objects to
either query for exact matches or to perform range queries. 2i also
diff --git a/content/riak/kv/2.0.1/developing/data-types.md b/content/riak/kv/2.0.1/developing/data-types.md
index 6367a6b328..504a9c7985 100644
--- a/content/riak/kv/2.0.1/developing/data-types.md
+++ b/content/riak/kv/2.0.1/developing/data-types.md
@@ -259,7 +259,7 @@ All the examples use the bucket type names from above (`counters`, `sets`, and `
Riak data types can be searched like any other object, but with the
added benefit that your data type is indexed as a different type by Solr,
-the search platform behind Riak Search.
+the search platform behind Riak search.
In our Search documentation we offer a [full tutorial](../usage/searching-data-types) as well as a variety of [examples](../usage/search/#data-types-and-search-examples), including code
samples from each of our official client libraries.
diff --git a/content/riak/kv/2.0.1/developing/usage.md b/content/riak/kv/2.0.1/developing/usage.md
index 9bf59310f9..3779c097a6 100644
--- a/content/riak/kv/2.0.1/developing/usage.md
+++ b/content/riak/kv/2.0.1/developing/usage.md
@@ -107,7 +107,7 @@ Tutorial on using Riak KV as a document store.
#### [Custom Extractors](./custom-extractors)
-Details on creating and registering custom extractors with Riak Search.
+Details on creating and registering custom extractors with Riak search.
[Learn More >>](./custom-extractors)
diff --git a/content/riak/kv/2.0.1/developing/usage/custom-extractors.md b/content/riak/kv/2.0.1/developing/usage/custom-extractors.md
index 9bcf326157..bda3c4cec9 100644
--- a/content/riak/kv/2.0.1/developing/usage/custom-extractors.md
+++ b/content/riak/kv/2.0.1/developing/usage/custom-extractors.md
@@ -15,8 +15,8 @@ aliases:
- /riak/kv/2.0.1/dev/search/custom-extractors
---
-Solr, and by extension Riak Search, has default extractors for a wide
-variety of data types, including JSON, XML, and plaintext. Riak Search
+Solr, and by extension Riak search, has default extractors for a wide
+variety of data types, including JSON, XML, and plaintext. Riak search
ships with the following extractors:
Content Type | Erlang Module
@@ -30,7 +30,7 @@ No specified type | `yz_noop_extractor`
There are also built-in extractors for [Riak Data Types](/riak/kv/2.0.1/developing/usage/searching-data-types).
If you're working with a data format that does not have a default Solr
-extractor, you can create your own and register it with Riak Search.
+extractor, you can create your own and register it with Riak search.
We'll show you how to do so by way of example.
## The Extractor Interface
@@ -195,7 +195,7 @@ extractor has been successfully registered.
## Verifying Our Custom Extractor
-Now that Riak Search knows how to decode and extract HTTP header packet
+Now that Riak search knows how to decode and extract HTTP header packet
data, let's store some in Riak and then query it. We'll put the example
packet data from above in a `google_packet.bin` file. Then, we'll `PUT`
that binary to Riak's `/search/extract` endpoint:
diff --git a/content/riak/kv/2.0.1/developing/usage/document-store.md b/content/riak/kv/2.0.1/developing/usage/document-store.md
index 23b83e7be8..80aa8ab207 100644
--- a/content/riak/kv/2.0.1/developing/usage/document-store.md
+++ b/content/riak/kv/2.0.1/developing/usage/document-store.md
@@ -16,18 +16,18 @@ aliases:
---
Although Riak wasn't explicitly created as a document store, two
-features recently added to Riak---[Riak Search](/riak/kv/2.0.1/developing/usage/search/) and [Riak Data Types](/riak/kv/2.0.1/developing/data-types/)---make it possible to use Riak as a
+features recently added to Riak---[Riak search](/riak/kv/2.0.1/developing/usage/search/) and [Riak Data Types](/riak/kv/2.0.1/developing/data-types/)---make it possible to use Riak as a
highly scalable document store with rich querying capabilities. In this
tutorial, we'll build a basic implementation of a document store using
[Riak maps](/riak/kv/2.0.1/developing/data-types/maps).
## Basic Approach
-Riak Search enables you to implement a document store in Riak in a
+Riak search enables you to implement a document store in Riak in a
variety of ways. You could, for example, store and query JSON objects or
XML and then retrieve them later via Solr queries. In this tutorial,
however, we will store data in [Riak maps](/riak/kv/2.0.1/developing/data-types/maps),
-index that data using Riak Search, and then run Solr queries against
+index that data using Riak search, and then run Solr queries against
those stored objects.
You can think of these Search indexes as **collections**. Each indexed
@@ -65,7 +65,7 @@ Date posted | Register | Datetime
Whether the post is currently in draft form | Flag | Boolean
Before we start actually creating and storing blog posts, let's set up
-Riak Search with an appropriate index and schema.
+Riak search with an appropriate index and schema.
## Creating a Schema and Index
@@ -209,7 +209,7 @@ curl -XPUT $RIAK_HOST/search/index/blog_posts \
Collections are not a concept that is native to Riak but we can easily
mimic collections by thinking of a bucket type as a collection. When we
-associate a bucket type with a Riak Search index, all of the objects
+associate a bucket type with a Riak search index, all of the objects
stored in any bucket of that bucket type will be queryable on the basis
of that one index. For this tutorial, we'll create a bucket type called
`cms` and think of that as a collection. We could also restrict our
diff --git a/content/riak/kv/2.0.1/developing/usage/search-schemas.md b/content/riak/kv/2.0.1/developing/usage/search-schemas.md
index 650df65c30..05d8f1f239 100644
--- a/content/riak/kv/2.0.1/developing/usage/search-schemas.md
+++ b/content/riak/kv/2.0.1/developing/usage/search-schemas.md
@@ -19,21 +19,21 @@ aliases:
> **Note on Search 2.0 vs. Legacy Search**
>
-> This document refers to the new Riak Search 2.0 with
+> This document refers to the new Riak search 2.0 with
[Solr](http://lucene.apache.org/solr/) integration (codenamed
-Yokozuna). For information about the deprecated Riak Search, visit [the old Using Riak Search docs](http://docs.basho.com/riak/1.4.10/dev/using/search/).
+Yokozuna). For information about the deprecated Riak search, visit [the old Using Riak search docs](http://docs.basho.com/riak/1.4.10/dev/using/search/).
-Riak Search is built for ease of use, allowing you to write values into
-Riak and query for values using Solr. Riak Search does a lot of work
+Riak search is built for ease of use, allowing you to write values into
+Riak and query for values using Solr. Riak search does a lot of work
under the hood to convert your values---plain text, JSON, XML, [Riak Data Types](/riak/kv/2.0.1/developing/data-types/), and [more](/riak/kv/2.0.1/developing/usage/custom-extractors)---into something that can be indexed and searched later.
Nonetheless, you must still instruct Riak/Solr how to index a value. Are
you providing an array of strings? An integer? A date? Is your text in
-English or Russian? You can provide such instructions to Riak Search by
+English or Russian? You can provide such instructions to Riak search by
defining a Solr **schema**.
## The Default Schema
-Riak Search comes bundled with a default schema named `_yz_default`. The
+Riak search comes bundled with a default schema named `_yz_default`. The
default schema covers a wide range of possible field types. You can find
the default schema [on GitHub](https://raw.github.com/basho/yokozuna/develop/priv/default_schema.xml).
While using the default schema provides an easy path to starting
@@ -47,7 +47,7 @@ amounts of disk space, so pay special attention to those indexes.
We'll show you how you can create custom schemas by way of example.
Let's say that you have already created a schema named `cartoons` in a
file named `cartoons.xml`. This would register the custom schema in Riak
-Search:
+search:
```java
import org.apache.commons.io.FileUtils;
@@ -123,11 +123,11 @@ curl -XPUT http://localhost:8098/search/schema/cartoons \
The first step in creating a custom schema is to define exactly what
fields you must index. Part of that step is understanding how Riak
-Search extractors function.
+search extractors function.
### Extractors
-In Riak Search, extractors are modules responsible for pulling out a
+In Riak search, extractors are modules responsible for pulling out a
list of fields and values from a Riak object. How this is achieved
depends on the object's content type, but the two common cases are JSON
and XML, which operate similarly. Our examples here will use JSON.
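To see what "pulling out a list of fields and values" means for the JSON case, here is a hedged sketch of the idea in plain Python. The dot-separated flattening of nested keys is an assumption for illustration; the real extractor is an Erlang module inside Riak search:

```python
def flatten_json(obj, prefix=""):
    """Flatten a nested JSON-like dict into (field, value) pairs,
    joining nested keys with a dot (illustrative separator)."""
    pairs = []
    for key, value in obj.items():
        name = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            pairs.extend(flatten_json(value, name))
        else:
            pairs.append((name, value))
    return pairs

pairs = flatten_json({"name": {"first": "Liono"}, "age": 30})
print(pairs)  # [('name.first', 'Liono'), ('age', 30)]
```

Each resulting pair becomes a candidate Solr field, which is why the schema must declare how fields like `name.first` are to be indexed.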
@@ -175,21 +175,21 @@ Solr schemas can be very complex, containing many types and analyzers.
Refer to the [Solr 4.7 reference
guide](http://archive.apache.org/dist/lucene/solr/ref-guide/apache-solr-ref-guide-4.7.pdf)
for a complete list. You should be aware, however, that there are a few
-fields that are required by Riak Search in order to properly distribute
+fields that are required by Riak search in order to properly distribute
an object across a [cluster][concept clusters]. These fields are all prefixed
with `_yz`, which stands for
[Yokozuna](https://github.com/basho/yokozuna), the original code name
-for Riak Search.
+for Riak search.
Below is a bare minimum skeleton Solr Schema. It won't do much for you
-other than allow Riak Search to properly manage your stored objects.
+other than allow Riak search to properly manage your stored objects.
```xml
-
+
@@ -210,14 +210,14 @@ other than allow Riak Search to properly manage your stored objects.
```
-If you're missing any of the above fields, Riak Search will reject your
+If you're missing any of the above fields, Riak search will reject your
custom schema. The value for `<uniqueKey>` _must_ be `_yz_id`.
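The required `_yz_*` fields can be sketched as follows. This is a hedged reconstruction from memory, not the authoritative skeleton; the exact field types and attributes should be taken from the default schema linked above:

```xml
<!-- Hedged sketch: attribute values are assumptions; consult the
     default schema on GitHub for the authoritative definitions. -->
<field name="_yz_id"   type="_yz_str" indexed="true" stored="true" required="true"/>
<field name="_yz_ed"   type="_yz_str" indexed="true" stored="false"/>
<field name="_yz_pn"   type="_yz_str" indexed="true" stored="false"/>
<field name="_yz_fpn"  type="_yz_str" indexed="true" stored="false"/>
<field name="_yz_vtag" type="_yz_str" indexed="true" stored="false"/>
<field name="_yz_rk"   type="_yz_str" indexed="true" stored="true"/>
<field name="_yz_rb"   type="_yz_str" indexed="true" stored="true"/>
<field name="_yz_rt"   type="_yz_str" indexed="true" stored="true"/>
<field name="_yz_err"  type="_yz_str" indexed="true" stored="false"/>
<uniqueKey>_yz_id</uniqueKey>
```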
In the table below, you'll find a description of the various required
fields. You'll rarely need to use any fields other than `_yz_rt` (bucket
type), `_yz_rb` (bucket) and `_yz_rk` (Riak key). On occasion, `_yz_err`
can be helpful if you suspect that your extractors are failing.
-Malformed JSON or XML will cause Riak Search to index a key and set
+Malformed JSON or XML will cause Riak search to index a key and set
`_yz_err` to 1, allowing you to reindex with proper values later.
Field | Name | Description
@@ -262,7 +262,7 @@ field, you also must set `multiValued` to `true`.
-
+
diff --git a/content/riak/kv/2.0.1/developing/usage/search.md b/content/riak/kv/2.0.1/developing/usage/search.md
index 8e5f761d01..ed78672edd 100644
--- a/content/riak/kv/2.0.1/developing/usage/search.md
+++ b/content/riak/kv/2.0.1/developing/usage/search.md
@@ -19,9 +19,9 @@ aliases:
## Setup
-Riak Search 2.0 is an integration of Solr (for indexing and querying)
+Riak search 2.0 is an integration of Solr (for indexing and querying)
and Riak (for storage and distribution). There are a few points of
-interest that a user of Riak Search will have to keep in mind in order
+interest that a user of Riak search will have to keep in mind in order
to properly store and later query for values.
1. **Schemas** explain to Solr how to index fields
@@ -29,7 +29,7 @@ to properly store and later query for values.
3. **Bucket-index association** signals to Riak *when* to index values
(this also includes bucket type-index association)
-Riak Search must first be configured with a Solr schema so that Solr
+Riak search must first be configured with a Solr schema so that Solr
knows how to index value fields. If you don't define one, you're
provided with a default schema named `_yz_default`, which can be found
[on
@@ -38,7 +38,7 @@ GitHub](https://raw.githubusercontent.com/basho/yokozuna/develop/priv/default_sc
The examples in this document will presume the default. You can read
more about creating custom schemas in [Search Schema][usage search schema], which you'll likely want to use in a production environment.
-Next, you must create a named Solr index through Riak Search. This index
+Next, you must create a named Solr index through Riak search. This index
represents a collection of similar data that you connect with to perform
queries. When creating an index, you can optionally provide a schema. If
you do not, the default schema will be used. Here we'll `curl` create an
@@ -246,7 +246,7 @@ More information can be found in the [Solr
documentation](http://wiki.apache.org/solr/SolrPerformanceFactors).
With a Solr schema, index, and association in place (and possibly a
-security setup as well), we're ready to start using Riak Search. First,
+security setup as well), we're ready to start using Riak search. First,
populate the `cat` bucket with values, in this case information about
four cats: Liono, Cheetara, Snarf, and Panthro.
@@ -495,12 +495,12 @@ curl -XPUT $RIAK_HOST/types/animals/buckets/cats/keys/panthro \
```
If you've used Riak before, you may have noticed that this is no
-different from storing values without Riak Search. That's because we
-designed Riak Search with the following design goal in mind:
+different from storing values without Riak search. That's because we
+designed Riak search with the following design goal in mind:
#### Write it like Riak, query it like Solr
-But how does Riak Search know how to index values, given that you can
+But how does Riak search know how to index values, given that you can
store opaque values in Riak? For that, we employ extractors.
## Extractors
@@ -510,7 +510,7 @@ content type and convert it into a list of fields that can be indexed by
Solr. This is done transparently and automatically as part of the
indexing process. You can even create your own [custom extractors](/riak/kv/2.0.1/developing/usage/custom-extractors).
-Our current example uses the JSON extractor, but Riak Search also
+Our current example uses the JSON extractor, but Riak search also
extracts indexable fields from the following content types:
* JSON (`application/json`)
@@ -560,7 +560,7 @@ one of the default types. A full tutorial can be found in [Custom Search Extract
### Automatic Fields
-When a Riak object is indexed, Riak Search automatically inserts a few
+When a Riak object is indexed, Riak search automatically inserts a few
extra fields as well. These are necessary for a variety of technical
reasons, and for the most part you don't need to think about them.
However, there are a few fields which you may find useful:
@@ -1248,7 +1248,7 @@ curl "$RIAK_HOST/search/query/famous?wt=json&q=*:*&start=$START&rows=$ROWS_PER_P
### Pagination Warning
-Distributed pagination in Riak Search cannot be used reliably when
+Distributed pagination in Riak search cannot be used reliably when
sorting on fields that can have different values per replica of the same
object, namely `score` and `_yz_id`. In the case of sorting by these
fields, you may receive redundant objects. In the case of `score`, the
@@ -1272,14 +1272,14 @@ fix this shortcoming in a future version of Riak.
### MapReduce
-Riak Search allows for piping search results as inputs for
+Riak search allows for piping search results as inputs for
[MapReduce](/riak/kv/2.0.1/developing/usage/mapreduce/) jobs. This is a useful cross-section for
performing post-calculations of results or aggregations of ad-hoc
-queries. The Riak Search MapReduce integration works similarly to
+queries. The Riak search MapReduce integration works similarly to
regular MapReduce, with the notable exception that your input is not a
bucket, but rather index and query arguments to the `yokozuna` module
and `mapred_search` function (an Erlang `module:function` pair that adds
-the Riak Search hook to MapReduce).
+the Riak search hook to MapReduce).
```json
{
diff --git a/content/riak/kv/2.0.1/developing/usage/secondary-indexes.md b/content/riak/kv/2.0.1/developing/usage/secondary-indexes.md
index 0e17415886..8bc4f0fd60 100644
--- a/content/riak/kv/2.0.1/developing/usage/secondary-indexes.md
+++ b/content/riak/kv/2.0.1/developing/usage/secondary-indexes.md
@@ -19,12 +19,12 @@ aliases:
[plan backend memory]: /riak/kv/2.0.1/setup/planning/backend/memory
[use ref strong consistency]: /riak/2.1.3/using/reference/strong-consistency
-> **Note: Riak Search preferred for querying**
+> **Note: Riak search preferred for querying**
>
> If you're interested in non-primary-key-based querying in Riak, i.e. if
you're looking to go beyond straightforward K/V operations, we now
-recommend [Riak Search](/riak/kv/2.0.1/developing/usage/search/) rather than secondary indexes for
-a variety of reasons. Most importantly, Riak Search has a far more
+recommend [Riak search](/riak/kv/2.0.1/developing/usage/search/) rather than secondary indexes for
+a variety of reasons. Most importantly, Riak search has a far more
capacious querying API and can be used with all of Riak's storage
backends.
@@ -37,7 +37,7 @@ Secondary indexes can be either a binary or string, such as
`sensor_1_data` or `admin_user` or `click_event`, or an integer, such as
`99` or `141121`.
-[Riak Search](/riak/kv/2.0.1/developing/usage/search/) serves analogous purposes but is quite
+[Riak search](/riak/kv/2.0.1/developing/usage/search/) serves analogous purposes but is quite
different because it parses key/value data itself and builds indexes on
the basis of Solr schemas.
@@ -75,7 +75,7 @@ you to discover them later. Indexing enables you to tag those objects
and find all objects with the same tag in a specified bucket later on.
2i is thus recommended when your use case requires an easy-to-use search
-mechanism that does not require a schema (as does [Riak Search](/riak/kv/2.0.1/using/reference/search/#schemas)) and a basic query interface, i.e. an interface that
+mechanism that does not require a schema (as does [Riak search](/riak/kv/2.0.1/using/reference/search/#schemas)) and a basic query interface, i.e. an interface that
enables an application to tell Riak things like "fetch all objects
tagged with the string `Milwaukee_Bucks`" or "fetch all objects tagged
with numbers between 1500 and 1509."
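The tag-and-query model can be sketched in plain Python. This is an in-memory stand-in, not the Riak client API; the `_bin`/`_int` suffixes follow 2i's index-naming convention, while the object keys and tag values are invented:

```python
# In-memory stand-in for a bucket whose objects carry 2i metadata.
objects = {
    "obj1": {"team_bin": "Milwaukee_Bucks", "score_int": 1503},
    "obj2": {"team_bin": "Milwaukee_Bucks", "score_int": 1512},
    "obj3": {"team_bin": "Boston_Celtics",  "score_int": 1507},
}

def exact_match(index, value):
    """All keys whose object carries the given tag value (2i exact match)."""
    return sorted(k for k, tags in objects.items() if tags.get(index) == value)

def range_query(index, lo, hi):
    """All keys whose integer tag falls in [lo, hi] (2i range query)."""
    return sorted(k for k, tags in objects.items()
                  if isinstance(tags.get(index), int) and lo <= tags[index] <= hi)

print(exact_match("team_bin", "Milwaukee_Bucks"))  # ['obj1', 'obj2']
print(range_query("score_int", 1500, 1509))        # ['obj1', 'obj3']
```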
@@ -89,7 +89,7 @@ piggybacks off of read-repair.
* If your ring size exceeds 512 partitions, 2i can cause performance
issues in large clusters.
* When you need more than the exact match and range searches that 2i
- supports. If that's the case, we recommend checking out [Riak Search](/riak/kv/2.0.1/developing/usage/search/).
+ supports. If that's the case, we recommend checking out [Riak search](/riak/kv/2.0.1/developing/usage/search/).
* When you want to use composite queries. A query like
`last_name=zezeski AND state=MD` would have to be split into two
queries and the results merged (or it would need to involve
diff --git a/content/riak/kv/2.0.1/introduction.md b/content/riak/kv/2.0.1/introduction.md
index a788d0519b..905cfab16e 100644
--- a/content/riak/kv/2.0.1/introduction.md
+++ b/content/riak/kv/2.0.1/introduction.md
@@ -30,10 +30,10 @@ that all of the new features listed below are optional:
* **Riak Data Types** --- Riak's new CRDT-based [Data Types](/riak/kv/2.0.1/developing/data-types) can
simplify modeling data in Riak, but are only used in buckets
explicitly configured to use them.
-* **Strong Consistency, Riak Security, and the New Riak Search** ---
+* **Strong Consistency, Riak Security, and the New Riak search** ---
These are subsystems in Riak that must be explicitly turned on to
work. If not turned on, they will have no impact on performance.
- Furthermore, the older Riak Search will continue to be included with
+ Furthermore, the older Riak search will continue to be included with
Riak.
* **Security** --- [Authentication and authorization](/riak/kv/2.0.1/using/security/basics) can be enabled
or disabled at any time.
@@ -98,21 +98,21 @@ Brown](https://github.com/russelldb).
## Riak Search 2.0 (codename: Yokozuna)
-Riak Search 2.0 is a complete, top-to-bottom replacement for Riak
-Search, integrating Riak with [Apache Solr](https://lucene.apache.org/solr/)'s full-text search capabilities and supporting Solr's client query APIs.
+Riak search 2.0 is a complete, top-to-bottom replacement for Riak
+search, integrating Riak with [Apache Solr](https://lucene.apache.org/solr/)'s full-text search capabilities and supporting Solr's client query APIs.
#### Relevant Docs
* [Using Search](/riak/kv/2.0.1/developing/usage/search) provides an overview of how to use the new
- Riak Search.
+ Riak search.
* [Search Schema](/riak/kv/2.0.1/developing/usage/search-schemas) shows you how to create and manage custom search
schemas.
* [Search Details](/riak/kv/2.0.1/using/reference/search) provides an in-depth look at the design
- considerations that went into the new Riak Search.
+ considerations that went into the new Riak search.
#### Video
-[Riak Search 2.0](https://www.youtube.com/watch?v=-c1eynVLNMo) by Basho
+[Riak search 2.0](https://www.youtube.com/watch?v=-c1eynVLNMo) by Basho
engineer and documentarian [Eric Redmond](https://github.com/coderoshi).
## Strong Consistency
@@ -145,7 +145,7 @@ check out [part 2](https://www.youtube.com/watch?v=gXJxbhca5Xg).
Riak 2.0 enables you to manage:
* **Authorization** to perform specific tasks, from GETs and PUTs to
-running MapReduce jobs to administering Riak Search.
+running MapReduce jobs to administering Riak search.
* **Authentication** of Riak clients seeking access to Riak.
@@ -317,7 +317,7 @@ another. Incompatibilities are marked with a
-**†** The data indexed by Riak Search can be
+**†** The data indexed by Riak search can be
stored in a strongly consistent fashion, but indexes themselves are
eventually consistent
**‡** If secondary indexes are attached to an
diff --git a/content/riak/kv/2.0.1/learn/concepts/strong-consistency.md b/content/riak/kv/2.0.1/learn/concepts/strong-consistency.md
index 1aa6b09324..5770a0c559 100644
--- a/content/riak/kv/2.0.1/learn/concepts/strong-consistency.md
+++ b/content/riak/kv/2.0.1/learn/concepts/strong-consistency.md
@@ -20,7 +20,7 @@ aliases:
Please Note:
-Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
Riak was originally designed as an [eventually consistent](/riak/kv/2.0.1/learn/concepts/eventual-consistency) system, fundamentally geared toward providing partition
diff --git a/content/riak/kv/2.0.1/learn/glossary.md b/content/riak/kv/2.0.1/learn/glossary.md
index 85b873435b..36d55d3d4b 100644
--- a/content/riak/kv/2.0.1/learn/glossary.md
+++ b/content/riak/kv/2.0.1/learn/glossary.md
@@ -275,7 +275,7 @@ best described as "UNIX pipes for Riak."
## Riak Search
-Riak Search is a distributed, scalable, failure-tolerant, realtime,
+Riak search is a distributed, scalable, failure-tolerant, realtime,
full-text search engine integrating [Apache
Solr](https://lucene.apache.org/solr/) with Riak KV.
diff --git a/content/riak/kv/2.0.1/learn/use-cases.md b/content/riak/kv/2.0.1/learn/use-cases.md
index 118d5883d5..0b52eb4d00 100644
--- a/content/riak/kv/2.0.1/learn/use-cases.md
+++ b/content/riak/kv/2.0.1/learn/use-cases.md
@@ -153,7 +153,7 @@ For storing log data from different systems, you could use unique
buckets for each system (e.g. `system1_log_data`, `system2_log_data`,
etc.) and write associated logs to the corresponding buckets. To
analyze that data, you could use Riak's MapReduce system for aggregation
-tasks, such as summing the counts of records for a date or Riak Search
+tasks, such as summing the counts of records for a date or Riak search
for more robust, text-based queries.
### Log Data Complex Case
@@ -237,7 +237,7 @@ For simple retrieval of a specific account, a user ID (plus perhaps a
secondary index on a username or email) is enough. If you foresee the
need to make queries on additional user attributes (e.g. creation time,
user type, or region), plan ahead and either set up additional secondary
-indexes or consider using [Riak Search][usage search] to index the JSON
+indexes or consider using [Riak search][usage search] to index the JSON
contents of the user account.
### User Accounts Community Examples
@@ -323,7 +323,7 @@ In Riak, you can store content of any kind, from HTML files to plain
text to JSON or XML or another document type entirely. Keep in mind that
data in Riak is opaque, with the exception of [Riak Data Types](/riak/kv/2.0.1/developing/data-types),
and so Riak won't "know" about the object unless it is indexed
-[using Riak Search][usage search] or [using secondary indexes][usage secondary-indexes].
+[using Riak search][usage search] or [using secondary indexes][usage secondary-indexes].
### Articles et al Complex Case
@@ -342,7 +342,7 @@ with comments would require your application to call from the posts
and comments buckets to assemble the view.
Other possible cases may involve performing operations on content beyond
-key/value pairs. [Riak Search][usage search] is recommended for use cases
+key/value pairs. [Riak search][usage search] is recommended for use cases
involving full-text search. For lighter-weight querying,
[using secondary indexes][usage secondary-indexes] \(2i) enables you to add metadata to objects to
either query for exact matches or to perform range queries. 2i also
diff --git a/content/riak/kv/2.0.1/setup/installing/source/jvm.md b/content/riak/kv/2.0.1/setup/installing/source/jvm.md
index dc43e01c42..f14ac182fb 100644
--- a/content/riak/kv/2.0.1/setup/installing/source/jvm.md
+++ b/content/riak/kv/2.0.1/setup/installing/source/jvm.md
@@ -21,10 +21,10 @@ aliases:
[usage search]: /riak/kv/2.0.1/developing/usage/search
-If you are using [Riak Search 2.0][usage search], codename Yokozuna,
+If you are using [Riak search 2.0][usage search], codename Yokozuna,
you will need to install **Java 1.6 or later** to run [Apache
Solr](https://lucene.apache.org/solr/), the search platform that powers
-Riak Search.
+Riak search.
We recommend using Oracle's [JDK
7u25](http://www.oracle.com/technetwork/java/javase/7u25-relnotes-1955741.html).
@@ -35,7 +35,7 @@ page](http://www.oracle.com/technetwork/java/javase/documentation/index.html).
## Installing Solr on OS X
-If you're using Riak Search on Mac OS X, you may see the following
+If you're using Riak search on Mac OS X, you may see the following
error:
```java
diff --git a/content/riak/kv/2.0.1/setup/planning/backend/bitcask.md b/content/riak/kv/2.0.1/setup/planning/backend/bitcask.md
index 773a5af7a3..0c20243890 100644
--- a/content/riak/kv/2.0.1/setup/planning/backend/bitcask.md
+++ b/content/riak/kv/2.0.1/setup/planning/backend/bitcask.md
@@ -751,7 +751,7 @@ bitcask.expiry.grace_time = 1h
#### Automatic expiration and Riak Search
-If you are using [Riak Search][usage search] in conjunction with
+If you are using [Riak search][usage search] in conjunction with
Bitcask, please be aware that automatic expiry does not apply to [Search Indexes](../../../../developing/usage/search). If objects are indexed using Search,
those objects can be expired by Bitcask yet still registered in Search
indexes, which means that Search queries may return keys that no longer
diff --git a/content/riak/kv/2.0.1/setup/upgrading/checklist.md b/content/riak/kv/2.0.1/setup/upgrading/checklist.md
index f9fefc375f..d02d4ed995 100644
--- a/content/riak/kv/2.0.1/setup/upgrading/checklist.md
+++ b/content/riak/kv/2.0.1/setup/upgrading/checklist.md
@@ -89,7 +89,7 @@ We have compiled these considerations and questions into separate categories for
place if `allow_mult` is set to `true`?
- Have you carefully weighed the [consistency trade-offs][concept eventual consistency] that must be made if `allow_mult` is set to `false`?
- Are all of your [apps replication properties][apps replication properties] configured correctly and uniformly across the cluster?
- - If you are using [Riak Search][usage search], is it enabled on all
+ - If you are using [Riak search][usage search], is it enabled on all
nodes? If you are not, has it been disabled on all nodes?
- If you are using [strong consistency][concept strong consistency] for some or all of your
data:
diff --git a/content/riak/kv/2.0.1/setup/upgrading/search.md b/content/riak/kv/2.0.1/setup/upgrading/search.md
index a11ec465e3..7a310a2887 100644
--- a/content/riak/kv/2.0.1/setup/upgrading/search.md
+++ b/content/riak/kv/2.0.1/setup/upgrading/search.md
@@ -16,7 +16,7 @@ version_history:
If you're using Search in a version of Riak prior to 2.0 (1.3.0 to
1.4.x), you should follow these steps to migrate your search indexes
-from the legacy `merge_index` to the new Solr-backed ([Yokozuna](../../../using/reference/search) indexes. The legacy version of Riak Search is now deprecated
+from the legacy `merge_index` to the new Solr-backed [Yokozuna](../../../using/reference/search) indexes. The legacy version of Riak search is now deprecated
and does not support most new 2.0 features, i.e. no [Riak Data Types](../../../developing/data-types), [bucket types](../../../using/reference/bucket-types), [strong consistency](../../../using/reference/strong-consistency), or [security](../../../using/security/), so we highly recommend that you migrate.
And please note that the legacy `merge_index`-based search (aka legacy
@@ -32,7 +32,7 @@ these steps at a time when your cluster is relatively light on traffic,
i.e. _not_ the week before Christmas.
The main goal of a live migration is to stand up indexes in the new Riak
-Search that parallel the existing ones in legacy. New writes add entries
+search that parallel the existing ones in legacy. New writes add entries
to both indexes while AAE adds entries in the new indexes for existing
data.
@@ -72,7 +72,7 @@ algorithm is bad at getting rid of large index files.
## Steps to Upgrading
1. First, you'll perform a normal [rolling upgrade](../cluster).
- As you upgrade, enable `yokozuna` (the new Riak Search library) on
+ As you upgrade, enable `yokozuna` (the new Riak search library) on
each node. If you're still using `app.config` it's called `yokozuna`.
If you've chosen to upgrade to the new `riak.conf` config option, it's
called `search`.
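As a sketch, the two equivalent ways of enabling the subsystem described above (assuming a stock install) would look like:

```riakconf
## riak.conf (new-style configuration)
search = on
```

```appconfig
%% app.config (legacy-style configuration)
{yokozuna, [
    {enabled, true}
]}
```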
@@ -220,7 +220,7 @@ have occurred on every node.
node you know that AAE has brought all new indexes up to date.
7. Next, call the following command that will give HTTP and PB query
-control to the new Riak Search.
+control to the new Riak search.
```curl
riak-admin search switch-to-new-search
@@ -244,7 +244,7 @@ buckets. This deactivates legacy Search.
-d '{"props":{"search": false}}'
```
-9. Disable the Riak Search process on each node by setting `riak_search`
+9. Disable the Riak search process on each node by setting `riak_search`
`enabled` to `false`.
```appconfig
diff --git a/content/riak/kv/2.0.1/setup/upgrading/version.md b/content/riak/kv/2.0.1/setup/upgrading/version.md
index 5c0105df48..ea978ad29d 100644
--- a/content/riak/kv/2.0.1/setup/upgrading/version.md
+++ b/content/riak/kv/2.0.1/setup/upgrading/version.md
@@ -36,7 +36,7 @@ was built with those features in mind. There are official
While we strongly recommend using the newest versions of these clients,
older versions will still work with Riak 2.0, with the drawback that
-those older clients will not able to take advantage of [new features](/riak/kv/2.0.1/introduction) like [data types](/riak/kv/2.0.1/developing/data-types) or the new [Riak Search](/riak/kv/2.0.1/using/reference/search).
+those older clients will not be able to take advantage of [new features](/riak/kv/2.0.1/introduction) like [data types](/riak/kv/2.0.1/developing/data-types) or the new [Riak search](/riak/kv/2.0.1/using/reference/search).
## Bucket Types
@@ -140,7 +140,7 @@ If you decide to upgrade to version 2.0, you can still downgrade your
cluster to an earlier version of Riak if you wish, _unless_ you perform
one of the following actions in your cluster:
-* Index data to be used in conjunction with the new [Riak Search](/riak/kv/2.0.1/using/reference/search).
+* Index data to be used in conjunction with the new [Riak search](/riak/kv/2.0.1/using/reference/search).
* Create _and_ activate one or more [bucket types](/riak/kv/2.0.1/using/reference/bucket-types/). By extension, you will not be able to downgrade your cluster if you have used the following features, both of which rely on bucket types:
- [Strong consistency](/riak/2.0.1/using/reference/strong-consistency)
- [Riak Data Types](/riak/kv/2.0.1/developing/data-types)
@@ -208,7 +208,7 @@ default to a value of `15`, which can cause problems in some clusters.
## Upgrading Search
-Information on upgrading Riak Search to 2.0 can be found in our
+Information on upgrading Riak search to 2.0 can be found in our
[Search upgrade guide](/riak/kv/2.0.1/setup/upgrading/search).
## Migrating from Short Names
diff --git a/content/riak/kv/2.0.1/using/admin/riak-admin.md b/content/riak/kv/2.0.1/using/admin/riak-admin.md
index a7c8ba42b5..5a63a66721 100644
--- a/content/riak/kv/2.0.1/using/admin/riak-admin.md
+++ b/content/riak/kv/2.0.1/using/admin/riak-admin.md
@@ -590,7 +590,7 @@ riak-admin repair-2i kill
## search
The search command provides sub-commands for various administrative
-work related to the new Riak Search.
+work related to the new Riak search.
```bash
riak-admin search
@@ -639,7 +639,7 @@ riak-admin search switch-to-new-search
```
Switch handling of the HTTP `/solr//select` resource and
-protocol buffer query messages from legacy Riak Search to new Search
+protocol buffer query messages from legacy Riak search to new Search
(Yokozuna).
## services
diff --git a/content/riak/kv/2.0.1/using/cluster-operations/active-anti-entropy.md b/content/riak/kv/2.0.1/using/cluster-operations/active-anti-entropy.md
index 4e1942bf4f..a19aa83860 100644
--- a/content/riak/kv/2.0.1/using/cluster-operations/active-anti-entropy.md
+++ b/content/riak/kv/2.0.1/using/cluster-operations/active-anti-entropy.md
@@ -262,9 +262,9 @@ AAE-related background tasks, analogous to [open files limit](../../performance/
## AAE and Riak Search
Riak's AAE subsystem works to repair object inconsistencies both
-for normal key/value objects as well as data related to [Riak Search](../../../developing/usage/search). In particular, AAE acts on indexes stored in
+for normal key/value objects as well as data related to [Riak search](../../../developing/usage/search). In particular, AAE acts on indexes stored in
[Solr](http://lucene.apache.org/solr/), the search platform that drives
-Riak Search. Implementation details for AAE and Search can be found in
+Riak search. Implementation details for AAE and Search can be found in
the [Search Details](../../reference/search/#active-anti-entropy-aae)
documentation.
diff --git a/content/riak/kv/2.0.1/using/cluster-operations/inspecting-node.md b/content/riak/kv/2.0.1/using/cluster-operations/inspecting-node.md
index 3c86847ee7..b8c446e24a 100644
--- a/content/riak/kv/2.0.1/using/cluster-operations/inspecting-node.md
+++ b/content/riak/kv/2.0.1/using/cluster-operations/inspecting-node.md
@@ -297,7 +297,7 @@ Stat | Description
`erlydtl_version` | [ErlyDTL](http://github.com/erlydtl/erlydtl)
`riak_control_version` | [Riak Control](http://github.com/basho/riak_control)
`cluster_info_version` | [Cluster Information](http://github.com/basho/cluster_info)
-`riak_search_version` | [Riak Search](http://github.com/basho/riak_search)
+`riak_search_version` | [Riak search](http://github.com/basho/riak_search)
`merge_index_version` | [Merge Index](http://github.com/basho/merge_index)
`riak_kv_version` | [Riak KV](http://github.com/basho/riak_kv)
`sidejob_version` | [Sidejob](http://github.com/basho/sidejob)
@@ -326,17 +326,17 @@ Stat | Description
### Riak Search Statistics
-The following statistics related to Riak Search message queues are
+The following statistics related to Riak search message queues are
available.
Stat | Description
-----------------------------|---------------------------------------------------
-`riak_search_vnodeq_max` | Maximum number of unprocessed messages all virtual node (vnode) message queues in the Riak Search subsystem have received on this node in the last minute
-`riak_search_vnodeq_mean` | Mean number of unprocessed messages all vnode message queues in the Riak Search subsystem have received on this node in the last minute
-`riak_search_vnodeq_median` | Median number of unprocessed messages all vnode message queues in the Riak Search subsystem have received on this node in the last minute
-`riak_search_vnodeq_min` | Minimum number of unprocessed messages all vnode message queues in the Riak Search subsystem have received on this node in the last minute
-`riak_search_vnodeq_total` | Total number of unprocessed messages all vnode message queues in the Riak Search subsystem have received on this node since it was started
-`riak_search_vnodes_running` | Total number of vnodes currently running in the Riak Search subsystem
+`riak_search_vnodeq_max` | Maximum number of unprocessed messages all virtual node (vnode) message queues in the Riak search subsystem have received on this node in the last minute
+`riak_search_vnodeq_mean` | Mean number of unprocessed messages all vnode message queues in the Riak search subsystem have received on this node in the last minute
+`riak_search_vnodeq_median` | Median number of unprocessed messages all vnode message queues in the Riak search subsystem have received on this node in the last minute
+`riak_search_vnodeq_min` | Minimum number of unprocessed messages all vnode message queues in the Riak search subsystem have received on this node in the last minute
+`riak_search_vnodeq_total` | Total number of unprocessed messages all vnode message queues in the Riak search subsystem have received on this node since it was started
+`riak_search_vnodes_running` | Total number of vnodes currently running in the Riak search subsystem
Note that under ideal operation and with the exception of
`riak_search_vnodes_running` these statistics should contain low values
@@ -449,7 +449,7 @@ Check | Description
`ring_membership` | Cluster membership validity
`ring_preflists` | Check if the ring satisfies `n_val`
`ring_size` | Check if the ring size is valid
-`search` | Check whether Riak Search is enabled on all nodes
+`search` | Check whether Riak search is enabled on all nodes
The `--level` flag enables you to specify the log level and thus to
filter messages based on type. You can pass in any of the message types
diff --git a/content/riak/kv/2.0.1/using/cluster-operations/strong-consistency.md b/content/riak/kv/2.0.1/using/cluster-operations/strong-consistency.md
index d022ad44cc..4fa13ce52f 100644
--- a/content/riak/kv/2.0.1/using/cluster-operations/strong-consistency.md
+++ b/content/riak/kv/2.0.1/using/cluster-operations/strong-consistency.md
@@ -14,7 +14,7 @@ toc: true
Please Note:
-Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
## Monitoring Strong Consistency
diff --git a/content/riak/kv/2.0.1/using/reference/handoff.md b/content/riak/kv/2.0.1/using/reference/handoff.md
index 40cb2bba39..986897d8e8 100644
--- a/content/riak/kv/2.0.1/using/reference/handoff.md
+++ b/content/riak/kv/2.0.1/using/reference/handoff.md
@@ -121,7 +121,7 @@ handoff.use_background_manager = on
### Maximum Rejects
-If you're using Riak features such as [Riak Search](/riak/kv/2.0.1/developing/usage/search/),
+If you're using Riak features such as [Riak search](/riak/kv/2.0.1/developing/usage/search/),
those subsystems can block handoff of primary key/value data, i.e. data
that you interact with via normal reads and writes.
diff --git a/content/riak/kv/2.0.1/using/reference/search.md b/content/riak/kv/2.0.1/using/reference/search.md
index 2aec5b8a33..e7142d7bd4 100644
--- a/content/riak/kv/2.0.1/using/reference/search.md
+++ b/content/riak/kv/2.0.1/using/reference/search.md
@@ -19,14 +19,14 @@ aliases:
> **Note on Search 2.0 vs. Legacy Search**
>
-> This document refers to the new Riak Search 2.0 with
+> This document refers to the new Riak search 2.0 with
[Solr](http://lucene.apache.org/solr/) integration (codenamed
-Yokozuna). For information about the deprecated Riak Search, visit [the old Using Riak Search docs](http://docs.basho.com/riak/1.4.10/dev/using/search/).
+Yokozuna). For information about the deprecated Riak search, visit [the old Using Riak search docs](http://docs.basho.com/riak/1.4.10/dev/using/search/).
-The project that implements Riak Search is codenamed Yokozuna. This is a
+The project that implements Riak search is codenamed Yokozuna. This is a
more detailed overview of the concepts and reasons behind the design of
Yokozuna, for those interested. If you're simply looking to use Riak
-Search, you should check out the [Using Search](/riak/kv/2.0.1/developing/usage/search) document.
+search, you should check out the [Using Search](/riak/kv/2.0.1/developing/usage/search) document.

@@ -35,30 +35,30 @@ Search, you should check out the [Using Search](/riak/kv/2.0.1/developing/usage/
In Erlang OTP, an "application" is a group of modules and Erlang
processes which together perform a specific task. The word application
is confusing because most people think of an application as an entire
-program such as Emacs or Photoshop. But Riak Search is just a sub-system
+program such as Emacs or Photoshop. But Riak search is just a sub-system
in Riak itself. Erlang applications are often stand-alone, but Riak
-Search is more like an appendage of Riak. It requires other subsystems
+search is more like an appendage of Riak. It requires other subsystems
like Riak Core and KV, but also extends their functionality by providing
search capabilities for KV data.
-The purpose of Riak Search is to bring more sophisticated and robust
+The purpose of Riak search is to bring more sophisticated and robust
query and search support to Riak. Many people consider Lucene and
programs built on top of it, such as Solr, as the standard for
open-source search. There are many successful applications built on
Lucene/Solr, and it sets the standard for the feature set that
developers and users expect. Meanwhile, Riak has a great story as a
-highly-available, distributed key/value store. Riak Search takes
+highly-available, distributed key/value store. Riak search takes
advantage of the fact that Riak already knows how to do the distributed
bits, combining its feature set with that of Solr, taking advantage of
the strengths of each.
-Riak Search is a mediator between Riak and Solr. There is nothing
+Riak search is a mediator between Riak and Solr. There is nothing
stopping a user from deploying these two programs separately, but this
would leave the user responsible for the glue between them. That glue
can be tricky to write. It requires dealing with monitoring, querying,
indexing, and dissemination of information.
-Unlike Solr by itself, Riak Search knows how to do all of the following:
+Unlike Solr by itself, Riak search knows how to do all of the following:
* Listen for changes in key/value (KV) data and to make the appropriate
changes to indexes that live in Solr. It also knows how to take a user
@@ -75,13 +75,13 @@ system (OS) process running a JVM which hosts Solr on the Jetty
application server. This OS process is a child of the Erlang OS process
running Riak.
-Riak Search has a `gen_server` process which monitors the JVM OS
+Riak search has a `gen_server` process which monitors the JVM OS
process. The code for this server is in `yz_solr_proc`. When the JVM
process crashes, this server crashes, causing its supervisor to restart
it.
If there is more than 1 restart in 45 seconds, the entire Riak node will
-be shut down. If Riak Search is enabled and Solr cannot function for
+be shut down. If Riak search is enabled and Solr cannot function for
some reason, the Riak node needs to go down so that the user will notice
and take corrective action.
@@ -91,7 +91,7 @@ This double monitoring along with the crash semantics means that neither
process may exist without the other. They are either both up or both
down.
-All other communication between Riak Search and Solr is performed via
+All other communication between Riak search and Solr is performed via
HTTP, including querying, indexing, and administration commands. The
ibrowse Erlang HTTP client is used to manage these communications as
both it and the Jetty container hosting Solr pool HTTP connections,
@@ -105,11 +105,11 @@ contains index entries for objects. Each such index maintains its own
set of files on disk---a critical difference from Riak KV, in which a
bucket is a purely logical entity and not physically disjoint at all. A
Solr index requires significantly less disk space than the corresponding
-legacy Riak Search index, depending on the Solr schema used.
+legacy Riak search index, depending on the Solr schema used.
Indexes may be associated with zero or more buckets. At creation time,
however, each index has no associated buckets---unlike the legacy Riak
-Search, indexes in the new Riak Search do not implicitly create bucket
+search, indexes in the new Riak search do not implicitly create bucket
associations, meaning that this must be done as a separate configuration
step.
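That separate configuration step can be sketched as follows, using a hypothetical index name `famous` and bucket `cats`, and assuming the default HTTP listener on port 8098:

```curl
# Create a Solr-backed index (uses the default schema)
curl -XPUT http://localhost:8098/search/index/famous

# Associate the index with a bucket via its search_index property
curl -XPUT http://localhost:8098/buckets/cats/props \
  -H 'Content-Type: application/json' \
  -d '{"props":{"search_index":"famous"}}'
```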
@@ -141,7 +141,7 @@ flat collection of field-value pairs. "Flat" here means that a field's
value cannot be a nested structure of field-value pairs; the values are
treated as-is (non-composite is another way to say it).
-Because of this mismatch between KV and Solr, Riak Search must act as a
+Because of this mismatch between KV and Solr, Riak search must act as a
mediator between the two, meaning it must have a way to inspect a KV
object and create a structure which Solr can ingest for indexing. In
Solr this structure is called a **document**. This task of creating a
diff --git a/content/riak/kv/2.0.1/using/reference/secondary-indexes.md b/content/riak/kv/2.0.1/using/reference/secondary-indexes.md
index a0b053ec5b..2aa8c79f93 100644
--- a/content/riak/kv/2.0.1/using/reference/secondary-indexes.md
+++ b/content/riak/kv/2.0.1/using/reference/secondary-indexes.md
@@ -18,11 +18,11 @@ aliases:
[usage bucket types]: /riak/kv/2.0.1/developing/usage/bucket-types
[use ref strong consistency]: /riak/2.1.3/using/reference/strong-consistency
-> **Note: Riak Search preferred for querying**
+> **Note: Riak search preferred for querying**
>
> If you're interested in non-primary-key-based querying in Riak, i.e. if
you're looking to go beyond straightforward K/V operations, we now
-recommend [Riak Search](/riak/kv/2.0.1/developing/usage/search/) rather than secondary indexes for a variety of reasons. Riak Search has a far more capacious querying API and can be used with all of Riak's storage backends.
+recommend [Riak search](/riak/kv/2.0.1/developing/usage/search/) rather than secondary indexes for a variety of reasons. Riak search has a far more capacious querying API and can be used with all of Riak's storage backends.
This document provides implementation and other details for Riak's
[secondary indexes](/riak/kv/2.0.1/developing/usage/secondary-indexes/) \(2i) feature.
diff --git a/content/riak/kv/2.0.1/using/reference/statistics-monitoring.md b/content/riak/kv/2.0.1/using/reference/statistics-monitoring.md
index 8733319d45..205aeda269 100644
--- a/content/riak/kv/2.0.1/using/reference/statistics-monitoring.md
+++ b/content/riak/kv/2.0.1/using/reference/statistics-monitoring.md
@@ -134,7 +134,7 @@ Metric | Also | Notes
:------|:-----|:------------------
```node_get_fsm_siblings_mean``` | ```_median```, ```_95```, ```_99```, ```_100``` | Number of siblings encountered during all GET operations by this node within the last minute. Watch for abnormally high sibling counts, especially max ones.
```node_get_fsm_objsize_mean``` | ```_median```, ```_95```, ```_99```, ```_100``` | Object size encountered by this node within the last minute. Abnormally large objects (especially paired with high sibling counts) can indicate sibling explosion.
-```riak_search_vnodeq_mean``` | ```_median```, ```_95```, ```_99```, ```_100``` | Number of unprocessed messages in the vnode message queues of the Riak Search subsystem on this node in the last minute. The queues give you an idea of how backed up Solr is getting.
+```riak_search_vnodeq_mean``` | ```_median```, ```_95```, ```_99```, ```_100``` | Number of unprocessed messages in the vnode message queues of the Riak search subsystem on this node in the last minute. The queues give you an idea of how backed up Solr is getting.
```search_index_fail_one``` | | Number of "Failed to index document" errors Search encountered for the last minute
```pbc_active``` | | Number of currently active protocol buffer connections
```pbc_connects``` | | Number of new protocol buffer connections established during the last minute
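As a minimal sketch of how one of these metrics might be scraped for monitoring, assuming stats are served as JSON from the default `/stats` HTTP endpoint (a canned payload stands in for the live call here):

```shell
# A canned payload standing in for: curl -s http://localhost:8098/stats
stats='{"riak_search_vnodeq_mean":0,"riak_search_vnodes_running":8}'

# Extract the mean search vnode queue depth from the JSON payload
echo "$stats" | grep -o '"riak_search_vnodeq_mean":[0-9]*' | cut -d: -f2
# prints 0
```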
diff --git a/content/riak/kv/2.0.1/using/repair-recovery/errors.md b/content/riak/kv/2.0.1/using/repair-recovery/errors.md
index 69a24b8aef..c015c8000f 100644
--- a/content/riak/kv/2.0.1/using/repair-recovery/errors.md
+++ b/content/riak/kv/2.0.1/using/repair-recovery/errors.md
@@ -328,7 +328,7 @@ exit with reason bad return value: {error,eaddrinuse} in context start_error | A
exited with reason: eaddrnotavail in gen_server:init_it/6 line 320 | An error like this example can result when Riak cannot bind to the addresses specified in the configuration. In this case, you should verify HTTP and Protocol Buffers addresses in `app.config` and ensure that the ports being used are not in the privileged (1-1024) range as the `riak` user will not have access to such ports.
gen_server riak_core_capability terminated with reason: no function clause matching orddict:fetch('riak@192.168.2.2', []) line 72 | Error output like this example can indicate that a previously running Riak node with an original `-name` value in `vm.args` has been modified by simply changing the value in `vm.args` and not properly through `riak-admin cluster replace`.
** Configuration error: [FRAMEWORK-MIB]: missing context.conf file => generating a default file | This error is commonly encountered when starting Riak Enterprise without prior [SNMP](/riak/kv/2.0.1/using/reference/snmp) configuration.
-RPC to 'node@example.com' failed: {'EXIT', {badarg, [{ets,lookup, [schema_table,<<"search-example">>], []} {riak_search_config,get_schema,1, [{file,"src/riak_search_config.erl"}, {line,69}]} ...| This error can be caused when attempting to use Riak Search without first enabling it in each node's `app.config`. See the [configuration files][config reference] documentation for more information on enabling Riak Search.
+RPC to 'node@example.com' failed: {'EXIT', {badarg, [{ets,lookup, [schema_table,<<"search-example">>], []} {riak_search_config,get_schema,1, [{file,"src/riak_search_config.erl"}, {line,69}]} ...| This error can be caused when attempting to use Riak search without first enabling it in each node's `app.config`. See the [configuration files][config reference] documentation for more information on enabling Riak search.
### More
diff --git a/content/riak/kv/2.0.1/using/repair-recovery/repairs.md b/content/riak/kv/2.0.1/using/repair-recovery/repairs.md
index dcb68a9097..ac39fbcf6b 100644
--- a/content/riak/kv/2.0.1/using/repair-recovery/repairs.md
+++ b/content/riak/kv/2.0.1/using/repair-recovery/repairs.md
@@ -57,7 +57,7 @@ riak-admin repair-2i kill
## Repairing Search Indexes
-Riak Search indexes currently have no form of anti-entropy (such as read-repair). Furthermore, for performance and load balancing reasons, Search reads from one random node. This means that when a replica loss has occurred, inconsistent results may be returned.
+Riak search indexes currently have no form of anti-entropy (such as read-repair). Furthermore, for performance and load balancing reasons, Search reads from one random node. This means that when a replica loss has occurred, inconsistent results may be returned.
### Running a Repair
diff --git a/content/riak/kv/2.0.1/using/repair-recovery/secondary-indexes.md b/content/riak/kv/2.0.1/using/repair-recovery/secondary-indexes.md
index eb0b9e9515..7f5537f8db 100644
--- a/content/riak/kv/2.0.1/using/repair-recovery/secondary-indexes.md
+++ b/content/riak/kv/2.0.1/using/repair-recovery/secondary-indexes.md
@@ -51,7 +51,7 @@ riak-admin repair-2i kill
## Repairing Search Indexes
-Riak Search indexes currently have no form of anti-entropy (such as read-repair). Furthermore, for performance and load balancing reasons, Search reads from one random node. This means that when a replica loss has occurred, inconsistent results may be returned.
+Riak search indexes currently have no form of anti-entropy (such as read-repair). Furthermore, for performance and load balancing reasons, Search reads from one random node. This means that when a replica loss has occurred, inconsistent results may be returned.
### Running a Repair
diff --git a/content/riak/kv/2.0.1/using/security/basics.md b/content/riak/kv/2.0.1/using/security/basics.md
index 516e7ab5b4..f54e3a0b54 100644
--- a/content/riak/kv/2.0.1/using/security/basics.md
+++ b/content/riak/kv/2.0.1/using/security/basics.md
@@ -44,7 +44,7 @@ when turning on Riak security. Missing one of these steps will almost
certainly break your application, so make sure that you have done each
of the following **before** enabling security:
-1. Make certain that the original Riak Search (version 1) and link
+1. Make certain that the original Riak search (version 1) and link
walking are not required. Enabling security will break this
functionality. If you wish to use security and Search together, you
will need to use the [new Search feature](/riak/kv/2.0.1/developing/usage/search/).
@@ -487,11 +487,11 @@ Permission | Operation
### Search Query Permission (Riak Search version 1)
Security is incompatible with the original (and now deprecated) Riak
-Search. Riak Search version 1 will stop working if security is enabled.
+search. Riak search version 1 will stop working if security is enabled.
### Search Query Permissions (Riak Search version 2, aka Yokozuna)
-If you are using the new Riak Search, i.e. the Solr-compatible search
+If you are using the new Riak search, i.e. the Solr-compatible search
capabilities included with Riak versions 2.0 and greater, the following
search-related permissions can be granted/revoked:
@@ -508,8 +508,8 @@ disabled, you will get the following error:
>
> `{error,{unknown_permission,"search.query"}}`
>
-> More information on Riak Search and how to enable it can be found in the
-[Riak Search Settings](/riak/kv/2.0.1/configuring/search/) document.
+> More information on Riak search and how to enable it can be found in the
+[Riak search Settings](/riak/kv/2.0.1/configuring/search/) document.
#### Usage Examples
diff --git a/content/riak/kv/2.0.2/configuring/reference.md b/content/riak/kv/2.0.2/configuring/reference.md
index 6ab0e762e8..2b5798f2de 100644
--- a/content/riak/kv/2.0.2/configuring/reference.md
+++ b/content/riak/kv/2.0.2/configuring/reference.md
@@ -272,7 +272,7 @@ stored
search.root_dir |
-The root directory for Riak Search, under which index data and
+ | The root directory for Riak search, under which index data and
configuration is stored. |
./data/yz |
@@ -1385,7 +1385,7 @@ Configurable parameters for intra-cluster, i.e. inter-node, [handoff][cluster op
handoff.max_rejects |
The maximum number of times that a secondary system within Riak,
-such as Riak Search, can block handoff
+such as Riak search, can block handoff
of primary key/value data. The approximate maximum duration that a vnode
can be blocked can be determined by multiplying this setting by
vnode_management_timer . If you want to prevent handoff from
@@ -1629,7 +1629,7 @@ if the JMX server crashes. |
> **Please Note:**
>
-> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
Riak's strong consistency feature has a variety of tunable parameters
that allow you to enable and disable strong consistency, modify the
@@ -2052,7 +2052,7 @@ only in Riak Enterprise 2.0 and later.
#### Upgrading Riak Search with `advanced.config`
-If you are upgrading to Riak 2.x and wish to upgrade to the new [Riak Search][use ref search]\(codename Yokozuna), you will need to enable
+If you are upgrading to Riak 2.x and wish to upgrade to the new [Riak search][use ref search]\(codename Yokozuna), you will need to enable
legacy Search while the upgrade is underway. You can add the following
snippet to your `advanced.config` configuration to do so:
diff --git a/content/riak/kv/2.0.2/configuring/search.md b/content/riak/kv/2.0.2/configuring/search.md
index d32cd584b9..c4b9b67578 100644
--- a/content/riak/kv/2.0.2/configuring/search.md
+++ b/content/riak/kv/2.0.2/configuring/search.md
@@ -25,9 +25,9 @@ aliases:
> **Note on Search 2.0 vs. Legacy Search**
>
-> This document refers to the new Riak Search 2.0 with
+> This document refers to the new Riak search 2.0 with
[Solr](http://lucene.apache.org/solr/) integration (codenamed
-Yokozuna). For information about the deprecated Riak Search, visit [the old Riak Search Settings](http://docs.basho.com/riak/1.4.8/ops/advanced/configs/search/).
+Yokozuna). For information about the deprecated Riak search, visit [the old Riak search Settings](http://docs.basho.com/riak/1.4.8/ops/advanced/configs/search/).
This document covers Riak's Search subsystem from an
operational perspective. If you are looking for more developer-focused
@@ -40,7 +40,7 @@ docs, we recommend the following:
## Enabling Riak Search
-Although Riak Search is integrated into Riak and requires no special
+Although Riak search is integrated into Riak and requires no special
installation, it is not enabled by default. You must enable it in every
node's [configuration files][config reference] as follows:
@@ -67,7 +67,7 @@ optional. A list of these parameters can also be found in our
Field | Default | Valid values | Description
:-----|:--------|:-------------|:-----------
`search` | `off` | `on` or `off` | Enable or disable Search
-`search.anti_entropy.data_dir` | `./data/yz_anti_entropy` | Directory | The directory in which Riak Search stores files related to [active anti-entropy][glossary aae]
+`search.anti_entropy.data_dir` | `./data/yz_anti_entropy` | Directory | The directory in which Riak search stores files related to [active anti-entropy][glossary aae]
`search.root_dir` | `./data/yz` | Directory | The root directory in which index data and configuration is stored
`search.solr.start_timeout` | `30s` | Integer with time units (eg. 2m) | How long Riak will wait for Solr to start (attempts twice before shutdown). Values lower than 1s will be rounded up to 1s.
`search.solr.port` | `8093` | Integer | The port number to which Solr binds (note: binds on every interface)
@@ -80,7 +80,7 @@ cause Solr to require more time to start.
## Solr JVM and Ports
-Riak Search runs one Solr process per node to manage its indexing and
+Riak search runs one Solr process per node to manage its indexing and
search functionality. While the underlying project, Yokozuna, manages
index distribution, node coverage for queries, active anti-entropy
(AAE), and JVM process management, you should provide plenty of RAM and disk space for running both Riak and the JVM running Solr. We recommend a minimum of 6GB of RAM per node.
diff --git a/content/riak/kv/2.0.2/configuring/strong-consistency.md b/content/riak/kv/2.0.2/configuring/strong-consistency.md
index ae49819640..8fe7f36f55 100644
--- a/content/riak/kv/2.0.2/configuring/strong-consistency.md
+++ b/content/riak/kv/2.0.2/configuring/strong-consistency.md
@@ -38,7 +38,7 @@ toc: true
> **Please Note:**
>
-> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
This document provides information on configuring and monitoring a Riak
cluster's optional strong consistency subsystem. Documentation for
diff --git a/content/riak/kv/2.0.2/developing/api/http/delete-search-index.md b/content/riak/kv/2.0.2/developing/api/http/delete-search-index.md
index 037ecd1300..02c5a2d387 100644
--- a/content/riak/kv/2.0.2/developing/api/http/delete-search-index.md
+++ b/content/riak/kv/2.0.2/developing/api/http/delete-search-index.md
@@ -14,7 +14,7 @@ aliases:
- /riak/2.0.2/dev/references/http/delete-search-index
---
-Deletes a Riak Search index.
+Deletes a Riak search index.
## Request
diff --git a/content/riak/kv/2.0.2/developing/api/http/fetch-search-index.md b/content/riak/kv/2.0.2/developing/api/http/fetch-search-index.md
index f736038ec4..29636e6fd8 100644
--- a/content/riak/kv/2.0.2/developing/api/http/fetch-search-index.md
+++ b/content/riak/kv/2.0.2/developing/api/http/fetch-search-index.md
@@ -14,7 +14,7 @@ aliases:
- /riak/2.0.2/dev/references/http/fetch-search-index
---
-Retrieves information about a Riak Search [index](/riak/kv/2.0.2/developing/usage/search/#simple-setup).
+Retrieves information about a Riak search [index](/riak/kv/2.0.2/developing/usage/search/#simple-setup).
## Request
diff --git a/content/riak/kv/2.0.2/developing/api/http/fetch-search-schema.md b/content/riak/kv/2.0.2/developing/api/http/fetch-search-schema.md
index 8d5c51ef7e..cf03cdc677 100644
--- a/content/riak/kv/2.0.2/developing/api/http/fetch-search-schema.md
+++ b/content/riak/kv/2.0.2/developing/api/http/fetch-search-schema.md
@@ -34,4 +34,4 @@ GET /search/schema/
## Response
If the schema is found, Riak will return the contents of the schema as
-XML (all Riak Search schemas are XML).
+XML (all Riak search schemas are XML).
diff --git a/content/riak/kv/2.0.2/developing/api/http/search-index-info.md b/content/riak/kv/2.0.2/developing/api/http/search-index-info.md
index 6c61bab508..7c3750845b 100644
--- a/content/riak/kv/2.0.2/developing/api/http/search-index-info.md
+++ b/content/riak/kv/2.0.2/developing/api/http/search-index-info.md
@@ -46,6 +46,6 @@ Below is the example output if there is one Search index, called
#### Typical Error Codes
-* `404 Object Not Found` --- Typically returned if Riak Search is not
+* `404 Object Not Found` --- Typically returned if Riak search is not
currently enabled on the node
* `503 Service Unavailable` --- The request timed out internally
diff --git a/content/riak/kv/2.0.2/developing/api/http/store-search-index.md b/content/riak/kv/2.0.2/developing/api/http/store-search-index.md
index bd721a0ae2..500df601de 100644
--- a/content/riak/kv/2.0.2/developing/api/http/store-search-index.md
+++ b/content/riak/kv/2.0.2/developing/api/http/store-search-index.md
@@ -14,7 +14,7 @@ aliases:
- /riak/2.0.2/dev/references/http/store-search-index
---
-Creates a new Riak Search [index](/riak/kv/2.0.2/developing/usage/search/#simple-setup).
+Creates a new Riak search [index](/riak/kv/2.0.2/developing/usage/search/#simple-setup).
## Request
diff --git a/content/riak/kv/2.0.2/developing/api/http/store-search-schema.md b/content/riak/kv/2.0.2/developing/api/http/store-search-schema.md
index 0577fd5172..8d149f2c5e 100644
--- a/content/riak/kv/2.0.2/developing/api/http/store-search-schema.md
+++ b/content/riak/kv/2.0.2/developing/api/http/store-search-schema.md
@@ -43,7 +43,7 @@ curl -XPUT http://localhost:8098/search/schema/my_custom_schema \
* `400 Bad Request` --- The schema cannot be created because there is
something wrong with the schema itself, e.g. an XML formatting error
- that makes Riak Search unable to parse the schema
+ that makes Riak search unable to parse the schema
* `409 Conflict` --- The schema cannot be created because there is
already a schema with that name
* `503 Service Unavailable` --- The request timed out internally
diff --git a/content/riak/kv/2.0.2/developing/api/protocol-buffers/yz-index-get.md b/content/riak/kv/2.0.2/developing/api/protocol-buffers/yz-index-get.md
index 46c6c25cc9..61ef8e5ab4 100644
--- a/content/riak/kv/2.0.2/developing/api/protocol-buffers/yz-index-get.md
+++ b/content/riak/kv/2.0.2/developing/api/protocol-buffers/yz-index-get.md
@@ -14,7 +14,7 @@ aliases:
- /riak/2.0.2/dev/references/protocol-buffers/yz-index-get
---
-Retrieve a search index from Riak Search.
+Retrieve a search index from Riak search.
## Request
diff --git a/content/riak/kv/2.0.2/developing/api/protocol-buffers/yz-schema-get.md b/content/riak/kv/2.0.2/developing/api/protocol-buffers/yz-schema-get.md
index ac06b5726c..034c873e0b 100644
--- a/content/riak/kv/2.0.2/developing/api/protocol-buffers/yz-schema-get.md
+++ b/content/riak/kv/2.0.2/developing/api/protocol-buffers/yz-schema-get.md
@@ -14,7 +14,7 @@ aliases:
- /riak/2.0.2/dev/references/protocol-buffers/yz-schema-get
---
-Fetch a [search schema](/riak/kv/2.0.2/developing/usage/search-schemas) from Riak Search.
+Fetch a [search schema](/riak/kv/2.0.2/developing/usage/search-schemas) from Riak search.
## Request
diff --git a/content/riak/kv/2.0.2/developing/app-guide.md b/content/riak/kv/2.0.2/developing/app-guide.md
index 31a3c3af35..dd98ec093e 100644
--- a/content/riak/kv/2.0.2/developing/app-guide.md
+++ b/content/riak/kv/2.0.2/developing/app-guide.md
@@ -146,22 +146,22 @@ well as relevant links to Basho documentation.
## Search
-Riak Search provides you with [Apache
+Riak search provides you with [Apache
Solr](http://lucene.apache.org/solr/)-powered full-text indexing and
querying on top of the scalability, fault tolerance, and operational
-simplicity of Riak. Our motto for Riak Search: **Write it like Riak.
+simplicity of Riak. Our motto for Riak search: **Write it like Riak.
Query it like Solr**. That is, you can store objects in Riak [like normal][usage create objects] and run full-text queries on those objects later on
using the Solr API.
-* [Using Search][usage search] --- Getting started with Riak Search
+* [Using Search][usage search] --- Getting started with Riak search
* [Search Details][use ref search] --- A detailed overview of the concepts and design
- consideration behind Riak Search
+ considerations behind Riak search
* [Search Schema][usage search schema] --- How to create custom schemas for extracting data
- from Riak Search
+ from Riak search
### When to Use Search
-* **When you need a rich querying API** --- Riak Search gives you access
+* **When you need a rich querying API** --- Riak search gives you access
to the entirety of [Solr](http://lucene.apache.org/solr/)'s extremely
broad API, which enables you to query on the basis of wildcards,
strings, booleans, geolocation, ranges, language-specific fulltext,
@@ -213,7 +213,7 @@ own.
> **Note**:
>
-> Riak Data Types can be used in conjunction with Riak Search,
+> Riak Data Types can be used in conjunction with Riak search,
meaning that the data stored in counters, sets, and maps can be indexed
and searched just like any other data in Riak. Documentation on Data
Types and Search is coming soon.
@@ -277,13 +277,13 @@ or you can write and run your own MapReduce jobs in
### When Not to Use MapReduce
* **When another Riak feature will do** --- Before even considering
- using MapReduce, you should thoroughly investigate [Riak Search][usage search] or [secondary indexes][usage 2i] as possible
+ using MapReduce, you should thoroughly investigate [Riak search][usage search] or [secondary indexes][usage 2i] as possible
solutions to your needs.
In general, you should not think of MapReduce as, for example, Hadoop
within Riak. While it can be useful for certain types of
non-primary-key-based queries, it is neither a "Big Data" processing
-tool nor an indexing mechanism nor a replacement for [Riak Search][usage search]. If you do need a tool like Hadoop or Apache Spark, you should
+tool nor an indexing mechanism nor a replacement for [Riak search][usage search]. If you do need a tool like Hadoop or Apache Spark, you should
consider using Riak in conjunction with a more suitable data processing
tool.
@@ -294,7 +294,7 @@ following problem: how do I know which keys I should look for? Secondary
indexes (2i) provide a solution to this problem, enabling you to tag
objects with either binary or integer metadata and then query Riak for
all of the keys that share specific tags. 2i is especially useful if
-you're storing binary data that is opaque to features like [Riak Search][usage search].
+you're storing binary data that is opaque to features like [Riak search][usage search].
* [Using Secondary Indexes][usage 2i] --- A general guide to using 2i, along
with code samples and information on 2i features like pagination,
@@ -322,7 +322,7 @@ you're storing binary data that is opaque to features like [Riak Search][usage s
One thing to always bear in mind is that Riak enables you to mix and
match a wide variety of approaches in a single cluster. You can use
basic CRUD operations for some of your data, index some of your data to
-be queried by Riak Search, use Riak Data Types for another subset, etc.
+be queried by Riak search, use Riak Data Types for another subset, etc.
You are always free to use a wide array of Riak features---or you can
use none at all and stick to key/value operations.
diff --git a/content/riak/kv/2.0.2/developing/app-guide/advanced-mapreduce.md b/content/riak/kv/2.0.2/developing/app-guide/advanced-mapreduce.md
index dee43fc668..d98b7c59bb 100644
--- a/content/riak/kv/2.0.2/developing/app-guide/advanced-mapreduce.md
+++ b/content/riak/kv/2.0.2/developing/app-guide/advanced-mapreduce.md
@@ -73,7 +73,7 @@ MapReduce should generally be treated as a fallback rather than a
standard part of an application. There are often ways to model data
such that dynamic queries become single key retrievals, which are
dramatically faster and more reliable in Riak, and tools such as Riak
-Search and 2i are simpler to use and may place less strain on a
+search and 2i are simpler to use and may place less strain on a
cluster.
### R=1
diff --git a/content/riak/kv/2.0.2/developing/app-guide/strong-consistency.md b/content/riak/kv/2.0.2/developing/app-guide/strong-consistency.md
index 5f1a298c45..9294bb309a 100644
--- a/content/riak/kv/2.0.2/developing/app-guide/strong-consistency.md
+++ b/content/riak/kv/2.0.2/developing/app-guide/strong-consistency.md
@@ -36,7 +36,7 @@ aliases:
> **Please Note:**
>
-> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
In versions 2.0 and later, Riak allows you to create buckets that
provide [strong consistency][use ref strong consistency] guarantees for the data stored within
diff --git a/content/riak/kv/2.0.2/developing/data-modeling.md b/content/riak/kv/2.0.2/developing/data-modeling.md
index bed5734dba..4eb5c6ab8d 100644
--- a/content/riak/kv/2.0.2/developing/data-modeling.md
+++ b/content/riak/kv/2.0.2/developing/data-modeling.md
@@ -139,7 +139,7 @@ For storing log data from different systems, you could use unique
buckets for each system (e.g. `system1_log_data`, `system2_log_data`,
etc.) and write associated logs to the corresponding buckets. To
analyze that data, you could use Riak's MapReduce system for aggregation
-tasks, such as summing the counts of records for a date or Riak Search
+tasks, such as summing the counts of records for a date or Riak search
for more robust, text-based queries.
### Log Data Complex Case
@@ -223,7 +223,7 @@ For simple retrieval of a specific account, a user ID (plus perhaps a
secondary index on a username or email) is enough. If you foresee the
need to make queries on additional user attributes (e.g. creation time,
user type, or region), plan ahead and either set up additional secondary
-indexes or consider using [Riak Search](/riak/kv/2.0.2/developing/usage/search/) to index the JSON
+indexes or consider using [Riak search](/riak/kv/2.0.2/developing/usage/search/) to index the JSON
contents of the user account.
### User Accounts Community Examples
@@ -309,7 +309,7 @@ In Riak, you can store content of any kind, from HTML files to plain
text to JSON or XML or another document type entirely. Keep in mind that
data in Riak is opaque, with the exception of [Riak Data Types](/riak/kv/2.0.2/developing/data-types),
and so Riak won't "know" about the object unless it is indexed
-[using Riak Search](/riak/kv/2.0.2/developing/usage/search/) or [using secondary indexes](/riak/kv/2.0.2/developing/usage/secondary-indexes/).
+[using Riak search](/riak/kv/2.0.2/developing/usage/search/) or [using secondary indexes](/riak/kv/2.0.2/developing/usage/secondary-indexes/).
### Articles et al Complex Case
@@ -328,7 +328,7 @@ with comments would require your application to call from the posts
and comments buckets to assemble the view.
Other possible cases may involve performing operations on content beyond
-key/value pairs. [Riak Search](/riak/kv/2.0.2/developing/usage/search/) is recommended for use cases
+key/value pairs. [Riak search](/riak/kv/2.0.2/developing/usage/search/) is recommended for use cases
involving full-text search. For lighter-weight querying,
[using secondary indexes](/riak/kv/2.0.2/developing/usage/secondary-indexes/) \(2i) enables you to add metadata to objects to
either query for exact matches or to perform range queries. 2i also
diff --git a/content/riak/kv/2.0.2/developing/data-types.md b/content/riak/kv/2.0.2/developing/data-types.md
index 81cfbbf68d..7712f1c547 100644
--- a/content/riak/kv/2.0.2/developing/data-types.md
+++ b/content/riak/kv/2.0.2/developing/data-types.md
@@ -259,7 +259,7 @@ All the examples use the bucket type names from above (`counters`, `sets`, and `
Riak data types can be searched like any other object, but with the
added benefit that your data type is indexed as a different type by Solr,
-the search platform behind Riak Search.
+the search platform behind Riak search.
In our Search documentation we offer a [full tutorial](../usage/searching-data-types) as well as a variety of [examples](../usage/search/#data-types-and-search-examples), including code
samples from each of our official client libraries.
diff --git a/content/riak/kv/2.0.2/developing/usage.md b/content/riak/kv/2.0.2/developing/usage.md
index 08264b7528..91c4e2f28f 100644
--- a/content/riak/kv/2.0.2/developing/usage.md
+++ b/content/riak/kv/2.0.2/developing/usage.md
@@ -107,7 +107,7 @@ Tutorial on using Riak KV as a document store.
#### [Custom Extractors](./custom-extractors)
-Details on creating and registering custom extractors with Riak Search.
+Details on creating and registering custom extractors with Riak search.
[Learn More >>](./custom-extractors)
diff --git a/content/riak/kv/2.0.2/developing/usage/custom-extractors.md b/content/riak/kv/2.0.2/developing/usage/custom-extractors.md
index 1051d0ac56..023f3090e3 100644
--- a/content/riak/kv/2.0.2/developing/usage/custom-extractors.md
+++ b/content/riak/kv/2.0.2/developing/usage/custom-extractors.md
@@ -14,8 +14,8 @@ aliases:
- /riak/2.0.2/dev/search/custom-extractors
---
-Solr, and by extension Riak Search, has default extractors for a wide
-variety of data types, including JSON, XML, and plaintext. Riak Search
+Solr, and by extension Riak search, has default extractors for a wide
+variety of data types, including JSON, XML, and plaintext. Riak search
ships with the following extractors:
Content Type | Erlang Module
@@ -29,7 +29,7 @@ No specified type | `yz_noop_extractor`
There are also built-in extractors for [Riak Data Types](/riak/kv/2.0.2/developing/usage/searching-data-types).
If you're working with a data format that does not have a default Solr
-extractor, you can create your own and register it with Riak Search.
+extractor, you can create your own and register it with Riak search.
We'll show you how to do so by way of example.
## The Extractor Interface
@@ -194,7 +194,7 @@ extractor has been successfully registered.
## Verifying Our Custom Extractor
-Now that Riak Search knows how to decode and extract HTTP header packet
+Now that Riak search knows how to decode and extract HTTP header packet
data, let's store some in Riak and then query it. We'll put the example
packet data from above in a `google_packet.bin` file. Then, we'll `PUT`
that binary to Riak's `/search/extract` endpoint:
diff --git a/content/riak/kv/2.0.2/developing/usage/document-store.md b/content/riak/kv/2.0.2/developing/usage/document-store.md
index aae6037332..3a79fe5f1c 100644
--- a/content/riak/kv/2.0.2/developing/usage/document-store.md
+++ b/content/riak/kv/2.0.2/developing/usage/document-store.md
@@ -15,18 +15,18 @@ aliases:
---
Although Riak wasn't explicitly created as a document store, two
-features recently added to Riak---[Riak Search](/riak/kv/2.0.2/developing/usage/search/) and [Riak Data Types](/riak/kv/2.0.2/developing/data-types/)---make it possible to use Riak as a
+features recently added to Riak---[Riak search](/riak/kv/2.0.2/developing/usage/search/) and [Riak Data Types](/riak/kv/2.0.2/developing/data-types/)---make it possible to use Riak as a
highly scalable document store with rich querying capabilities. In this
tutorial, we'll build a basic implementation of a document store using
[Riak maps](/riak/kv/2.0.2/developing/data-types/maps).
## Basic Approach
-Riak Search enables you to implement a document store in Riak in a
+Riak search enables you to implement a document store in Riak in a
variety of ways. You could, for example, store and query JSON objects or
XML and then retrieve them later via Solr queries. In this tutorial,
however, we will store data in [Riak maps](/riak/kv/2.0.2/developing/data-types/maps),
-index that data using Riak Search, and then run Solr queries against
+index that data using Riak search, and then run Solr queries against
those stored objects.
You can think of these Search indexes as **collections**. Each indexed
@@ -64,7 +64,7 @@ Date posted | Register | Datetime
Whether the post is currently in draft form | Flag | Boolean
Before we start actually creating and storing blog posts, let's set up
-Riak Search with an appropriate index and schema.
+Riak search with an appropriate index and schema.
## Creating a Schema and Index
@@ -208,7 +208,7 @@ curl -XPUT $RIAK_HOST/search/index/blog_posts \
Collections are not a concept that is native to Riak but we can easily
mimic collections by thinking of a bucket type as a collection. When we
-associate a bucket type with a Riak Search index, all of the objects
+associate a bucket type with a Riak search index, all of the objects
stored in any bucket of that bucket type will be queryable on the basis
of that one index. For this tutorial, we'll create a bucket type called
`cms` and think of that as a collection. We could also restrict our
diff --git a/content/riak/kv/2.0.2/developing/usage/search-schemas.md b/content/riak/kv/2.0.2/developing/usage/search-schemas.md
index 56c84fa4f2..96ca35fb74 100644
--- a/content/riak/kv/2.0.2/developing/usage/search-schemas.md
+++ b/content/riak/kv/2.0.2/developing/usage/search-schemas.md
@@ -18,21 +18,21 @@ aliases:
> **Note on Search 2.0 vs. Legacy Search**
>
-> This document refers to the new Riak Search 2.0 with
+> This document refers to the new Riak search 2.0 with
[Solr](http://lucene.apache.org/solr/) integration (codenamed
-Yokozuna). For information about the deprecated Riak Search, visit [the old Using Riak Search docs](http://docs.basho.com/riak/1.4.10/dev/using/search/).
+Yokozuna). For information about the deprecated Riak search, visit [the old Using Riak search docs](http://docs.basho.com/riak/1.4.10/dev/using/search/).
-Riak Search is built for ease of use, allowing you to write values into
-Riak and query for values using Solr. Riak Search does a lot of work
+Riak search is built for ease of use, allowing you to write values into
+Riak and query for values using Solr. Riak search does a lot of work
under the hood to convert your values---plain text, JSON, XML, [Riak Data Types](/riak/kv/2.0.2/developing/data-types/), and [more](/riak/kv/2.0.2/developing/usage/custom-extractors)---into something that can be indexed and searched later.
Nonetheless, you must still instruct Riak/Solr how to index a value. Are
you providing an array of strings? An integer? A date? Is your text in
-English or Russian? You can provide such instructions to Riak Search by
+English or Russian? You can provide such instructions to Riak search by
defining a Solr **schema**.
## The Default Schema
-Riak Search comes bundled with a default schema named `_yz_default`. The
+Riak search comes bundled with a default schema named `_yz_default`. The
default schema covers a wide range of possible field types. You can find
the default schema [on GitHub](https://raw.github.com/basho/yokozuna/develop/priv/default_schema.xml).
While using the default schema provides an easy path to starting
@@ -46,7 +46,7 @@ amounts of disk space, so pay special attention to those indexes.
We'll show you how you can create custom schemas by way of example.
Let's say that you have already created a schema named `cartoons` in a
file named `cartoons.xml`. This would register the custom schema in Riak
-Search:
+search:
```java
import org.apache.commons.io.FileUtils;
@@ -122,11 +122,11 @@ curl -XPUT http://localhost:8098/search/schema/cartoons \
The first step in creating a custom schema is to define exactly what
fields you must index. Part of that step is understanding how Riak
-Search extractors function.
+search extractors function.
### Extractors
-In Riak Search, extractors are modules responsible for pulling out a
+In Riak search, extractors are modules responsible for pulling out a
list of fields and values from a Riak object. How this is achieved
depends on the object's content type, but the two common cases are JSON
and XML, which operate similarly. Our examples here will use JSON.
@@ -174,21 +174,21 @@ Solr schemas can be very complex, containing many types and analyzers.
Refer to the [Solr 4.7 reference
guide](http://archive.apache.org/dist/lucene/solr/ref-guide/apache-solr-ref-guide-4.7.pdf)
for a complete list. You should be aware, however, that there are a few
-fields that are required by Riak Search in order to properly distribute
+fields that are required by Riak search in order to properly distribute
an object across a [cluster][concept clusters]. These fields are all prefixed
with `_yz`, which stands for
[Yokozuna](https://github.com/basho/yokozuna), the original code name
-for Riak Search.
+for Riak search.
Below is a bare minimum skeleton Solr Schema. It won't do much for you
-other than allow Riak Search to properly manage your stored objects.
+other than allow Riak search to properly manage your stored objects.
```xml
-
+
@@ -209,14 +209,14 @@ other than allow Riak Search to properly manage your stored objects.
```
-If you're missing any of the above fields, Riak Search will reject your
+If you're missing any of the above fields, Riak search will reject your
custom schema. The value for `<uniqueKey>` _must_ be `_yz_id`.
In the table below, you'll find a description of the various required
fields. You'll rarely need to use any fields other than `_yz_rt` (bucket
type), `_yz_rb` (bucket) and `_yz_rk` (Riak key). On occasion, `_yz_err`
can be helpful if you suspect that your extractors are failing.
-Malformed JSON or XML will cause Riak Search to index a key and set
+Malformed JSON or XML will cause Riak search to index a key and set
`_yz_err` to 1, allowing you to reindex with proper values later.
Field | Name | Description
@@ -261,7 +261,7 @@ field, you also must set `multiValued` to `true`.
-
+
diff --git a/content/riak/kv/2.0.2/developing/usage/search.md b/content/riak/kv/2.0.2/developing/usage/search.md
index 44c3ed270b..66b6e06e23 100644
--- a/content/riak/kv/2.0.2/developing/usage/search.md
+++ b/content/riak/kv/2.0.2/developing/usage/search.md
@@ -19,9 +19,9 @@ aliases:
## Setup
-Riak Search 2.0 is an integration of Solr (for indexing and querying)
+Riak search 2.0 is an integration of Solr (for indexing and querying)
and Riak (for storage and distribution). There are a few points of
-interest that a user of Riak Search will have to keep in mind in order
+interest that a user of Riak search will have to keep in mind in order
to properly store and later query for values.
1. **Schemas** explain to Solr how to index fields
@@ -29,7 +29,7 @@ to properly store and later query for values.
3. **Bucket-index association** signals to Riak *when* to index values
(this also includes bucket type-index association)
-Riak Search must first be configured with a Solr schema so that Solr
+Riak search must first be configured with a Solr schema so that Solr
knows how to index value fields. If you don't define one, you're
provided with a default schema named `_yz_default`, which can be found
[on
@@ -38,7 +38,7 @@ GitHub](https://raw.githubusercontent.com/basho/yokozuna/develop/priv/default_sc
The examples in this document will presume the default. You can read
more about creating custom schemas in [Search Schema][usage search schema], which you'll likely want to use in a production environment.
-Next, you must create a named Solr index through Riak Search. This index
+Next, you must create a named Solr index through Riak search. This index
represents a collection of similar data that you connect with to perform
queries. When creating an index, you can optionally provide a schema. If
you do not, the default schema will be used. Here we'll `curl` create an
@@ -246,7 +246,7 @@ More information can be found in the [Solr
documentation](http://wiki.apache.org/solr/SolrPerformanceFactors).
With a Solr schema, index, and association in place (and possibly a
-security setup as well), we're ready to start using Riak Search. First,
+security setup as well), we're ready to start using Riak search. First,
populate the `cat` bucket with values, in this case information about
four cats: Liono, Cheetara, Snarf, and Panthro.
@@ -495,12 +495,12 @@ curl -XPUT $RIAK_HOST/types/animals/buckets/cats/keys/panthro \
```
If you've used Riak before, you may have noticed that this is no
-different from storing values without Riak Search. That's because we
-designed Riak Search with the following design goal in mind:
+different from storing values without Riak search. That's because we
+designed Riak search with the following design goal in mind:
#### Write it like Riak, query it like Solr
-But how does Riak Search know how to index values, given that you can
+But how does Riak search know how to index values, given that you can
store opaque values in Riak? For that, we employ extractors.
## Extractors
@@ -510,7 +510,7 @@ content type and convert it into a list of fields that can be indexed by
Solr. This is done transparently and automatically as part of the
indexing process. You can even create your own [custom extractors](/riak/kv/2.0.2/developing/usage/custom-extractors).
-Our current example uses the JSON extractor, but Riak Search also
+Our current example uses the JSON extractor, but Riak search also
extracts indexable fields from the following content types:
* JSON (`application/json`)
@@ -560,7 +560,7 @@ one of the default types. A full tutorial can be found in [Custom Search Extract
### Automatic Fields
-When a Riak object is indexed, Riak Search automatically inserts a few
+When a Riak object is indexed, Riak search automatically inserts a few
extra fields as well. These are necessary for a variety of technical
reasons, and for the most part you don't need to think about them.
However, there are a few fields which you may find useful:
@@ -1248,7 +1248,7 @@ curl "$RIAK_HOST/search/query/famous?wt=json&q=*:*&start=$START&rows=$ROWS_PER_P
### Pagination Warning
-Distributed pagination in Riak Search cannot be used reliably when
+Distributed pagination in Riak search cannot be used reliably when
sorting on fields that can have different values per replica of the same
object, namely `score` and `_yz_id`. In the case of sorting by these
fields, you may receive redundant objects. In the case of `score`, the
@@ -1272,14 +1272,14 @@ fix this shortcoming in a future version of Riak.
### MapReduce
-Riak Search allows for piping search results as inputs for
+Riak search allows for piping search results as inputs for
[MapReduce](/riak/kv/2.0.2/developing/usage/mapreduce/) jobs. This is a useful cross-section for
performing post-calculations of results or aggregations of ad-hoc
-queries. The Riak Search MapReduce integration works similarly to
+queries. The Riak search MapReduce integration works similarly to
regular MapReduce, with the notable exception that your input is not a
bucket, but rather index and query arguments to the `yokozuna` module
and `mapred_search` function (an Erlang `module:function` pair that adds
-the Riak Search hook to MapReduce).
+the Riak search hook to MapReduce).
```json
{
diff --git a/content/riak/kv/2.0.2/developing/usage/secondary-indexes.md b/content/riak/kv/2.0.2/developing/usage/secondary-indexes.md
index 914d652665..b729bcbcd8 100644
--- a/content/riak/kv/2.0.2/developing/usage/secondary-indexes.md
+++ b/content/riak/kv/2.0.2/developing/usage/secondary-indexes.md
@@ -18,12 +18,12 @@ aliases:
[plan backend memory]: /riak/kv/2.0.2/setup/planning/backend/memory
[use ref strong consistency]: /riak/2.0.2/using/reference/strong-consistency
-> **Note: Riak Search preferred for querying**
+> **Note: Riak search preferred for querying**
>
> If you're interested in non-primary-key-based querying in Riak, i.e. if
you're looking to go beyond straightforward K/V operations, we now
-recommend [Riak Search](/riak/kv/2.0.2/developing/usage/search/) rather than secondary indexes for
-a variety of reasons. Most importantly, Riak Search has a far more
+recommend [Riak search](/riak/kv/2.0.2/developing/usage/search/) rather than secondary indexes for
+a variety of reasons. Most importantly, Riak search has a far more
capacious querying API and can be used with all of Riak's storage
backends.
@@ -36,7 +36,7 @@ Secondary indexes can be either a binary or string, such as
`sensor_1_data` or `admin_user` or `click_event`, or an integer, such as
`99` or `141121`.
-[Riak Search](/riak/kv/2.0.2/developing/usage/search/) serves analogous purposes but is quite
+[Riak search](/riak/kv/2.0.2/developing/usage/search/) serves analogous purposes but is quite
different because it parses key/value data itself and builds indexes on
the basis of Solr schemas.
@@ -74,7 +74,7 @@ you to discover them later. Indexing enables you to tag those objects
and find all objects with the same tag in a specified bucket later on.
2i is thus recommended when your use case requires an easy-to-use search
-mechanism that does not require a schema (as does [Riak Search](/riak/kv/2.0.2/using/reference/search/#schemas)) and a basic query interface, i.e. an interface that
+mechanism that does not require a schema (as does [Riak search](/riak/kv/2.0.2/using/reference/search/#schemas)) and a basic query interface, i.e. an interface that
enables an application to tell Riak things like "fetch all objects
tagged with the string `Milwaukee_Bucks`" or "fetch all objects tagged
with numbers between 1500 and 1509."
@@ -88,7 +88,7 @@ piggybacks off of read-repair.
* If your ring size exceeds 512 partitions, 2i can cause performance
issues in large clusters.
* When you need more than the exact match and range searches that 2i
- supports. If that's the case, we recommend checking out [Riak Search](/riak/kv/2.0.2/developing/usage/search/).
+ supports. If that's the case, we recommend checking out [Riak search](/riak/kv/2.0.2/developing/usage/search/).
* When you want to use composite queries. A query like
`last_name=zezeski AND state=MD` would have to be split into two
queries and the results merged (or it would need to involve
diff --git a/content/riak/kv/2.0.2/introduction.md b/content/riak/kv/2.0.2/introduction.md
index 386d7e1c10..97e2c6d906 100644
--- a/content/riak/kv/2.0.2/introduction.md
+++ b/content/riak/kv/2.0.2/introduction.md
@@ -30,10 +30,10 @@ that all of the new features listed below are optional:
* **Riak Data Types** --- Riak's new CRDT-based [Data Types](/riak/kv/2.0.2/developing/data-types) can
simplify modeling data in Riak, but are only used in buckets
explicitly configured to use them.
-* **Strong Consistency, Riak Security, and the New Riak Search** ---
+* **Strong Consistency, Riak Security, and the New Riak search** ---
These are subsystems in Riak that must be explicitly turned on to
work. If not turned on, they will have no impact on performance.
- Furthermore, the older Riak Search will continue to be included with
+ Furthermore, the older Riak search will continue to be included with
Riak.
* **Security** --- [Authentication and authorization](/riak/kv/2.0.2/using/security/basics) can be enabled
or disabled at any time.
@@ -98,21 +98,21 @@ Brown](https://github.com/russelldb).
## Riak Search 2.0 (codename: Yokozuna)
-Riak Search 2.0 is a complete, top-to-bottom replacement for Riak
-Search, integrating Riak with [Apache Solr](https://lucene.apache.org/solr/)'s full-text search capabilities and supporting Solr's client query APIs.
+Riak search 2.0 is a complete, top-to-bottom replacement for Riak
+search, integrating Riak with [Apache Solr](https://lucene.apache.org/solr/)'s full-text search capabilities and supporting Solr's client query APIs.
#### Relevant Docs
* [Using Search](/riak/kv/2.0.2/developing/usage/search) provides an overview of how to use the new
- Riak Search.
+ Riak search.
* [Search Schema](/riak/kv/2.0.2/developing/usage/search-schemas) shows you how to create and manage custom search
schemas.
* [Search Details](/riak/kv/2.0.2/using/reference/search) provides an in-depth look at the design
- considerations that went into the new Riak Search.
+ considerations that went into the new Riak search.
#### Video
-[Riak Search 2.0](https://www.youtube.com/watch?v=-c1eynVLNMo) by Basho
+[Riak search 2.0](https://www.youtube.com/watch?v=-c1eynVLNMo) by Basho
engineer and documentarian [Eric Redmond](https://github.com/coderoshi).
## Strong Consistency
@@ -145,7 +145,7 @@ check out [part 2](https://www.youtube.com/watch?v=gXJxbhca5Xg).
Riak 2.0 enables you to manage:
* **Authorization** to perform specific tasks, from GETs and PUTs to
-running MapReduce jobs to administering Riak Search.
+running MapReduce jobs to administering Riak search.
* **Authentication** of Riak clients seeking access to Riak.
@@ -317,7 +317,7 @@ another. Incompatibilities are marked with a
-**†** The data indexed by Riak Search can be
+**†** The data indexed by Riak search can be
stored in a strongly consistent fashion, but indexes themselves are
eventually consistent
**‡** If secondary indexes are attached to an
diff --git a/content/riak/kv/2.0.2/learn/concepts/strong-consistency.md b/content/riak/kv/2.0.2/learn/concepts/strong-consistency.md
index f47ba43e8d..df2f167195 100644
--- a/content/riak/kv/2.0.2/learn/concepts/strong-consistency.md
+++ b/content/riak/kv/2.0.2/learn/concepts/strong-consistency.md
@@ -20,7 +20,7 @@ aliases:
Please Note:
-Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
Riak was originally designed as an [eventually consistent](/riak/kv/2.0.2/learn/concepts/eventual-consistency) system, fundamentally geared toward providing partition
diff --git a/content/riak/kv/2.0.2/learn/glossary.md b/content/riak/kv/2.0.2/learn/glossary.md
index 9d0829e1e9..678bfda6db 100644
--- a/content/riak/kv/2.0.2/learn/glossary.md
+++ b/content/riak/kv/2.0.2/learn/glossary.md
@@ -275,7 +275,7 @@ best described as "UNIX pipes for Riak."
## Riak Search
-Riak Search is a distributed, scalable, failure-tolerant, realtime,
+Riak search is a distributed, scalable, failure-tolerant, realtime,
full-text search engine integrating [Apache
Solr](https://lucene.apache.org/solr/) with Riak KV.
diff --git a/content/riak/kv/2.0.2/learn/use-cases.md b/content/riak/kv/2.0.2/learn/use-cases.md
index 0a22098756..f6e8f174f0 100644
--- a/content/riak/kv/2.0.2/learn/use-cases.md
+++ b/content/riak/kv/2.0.2/learn/use-cases.md
@@ -152,7 +152,7 @@ For storing log data from different systems, you could use unique
buckets for each system (e.g. `system1_log_data`, `system2_log_data`,
etc.) and write associated logs to the corresponding buckets. To
analyze that data, you could use Riak's MapReduce system for aggregation
-tasks, such as summing the counts of records for a date or Riak Search
+tasks, such as summing the counts of records for a date or Riak search
for more robust, text-based queries.
### Log Data Complex Case
@@ -236,7 +236,7 @@ For simple retrieval of a specific account, a user ID (plus perhaps a
secondary index on a username or email) is enough. If you foresee the
need to make queries on additional user attributes (e.g. creation time,
user type, or region), plan ahead and either set up additional secondary
-indexes or consider using [Riak Search][usage search] to index the JSON
+indexes or consider using [Riak search][usage search] to index the JSON
contents of the user account.
### User Accounts Community Examples
@@ -322,7 +322,7 @@ In Riak, you can store content of any kind, from HTML files to plain
text to JSON or XML or another document type entirely. Keep in mind that
data in Riak is opaque, with the exception of [Riak Data Types](/riak/kv/2.0.2/developing/data-types),
and so Riak won't "know" about the object unless it is indexed
-[using Riak Search][usage search] or [using secondary indexes][usage secondary-indexes].
+[using Riak search][usage search] or [using secondary indexes][usage secondary-indexes].
### Articles et al Complex Case
@@ -341,7 +341,7 @@ with comments would require your application to call from the posts
and comments buckets to assemble the view.
Other possible cases may involve performing operations on content beyond
-key/value pairs. [Riak Search][usage search] is recommended for use cases
+key/value pairs. [Riak search][usage search] is recommended for use cases
involving full-text search. For lighter-weight querying,
[using secondary indexes][usage secondary-indexes] \(2i) enables you to add metadata to objects to
either query for exact matches or to perform range queries. 2i also
diff --git a/content/riak/kv/2.0.2/setup/installing/source/jvm.md b/content/riak/kv/2.0.2/setup/installing/source/jvm.md
index eb3b8ce3b1..9b7525d5ce 100644
--- a/content/riak/kv/2.0.2/setup/installing/source/jvm.md
+++ b/content/riak/kv/2.0.2/setup/installing/source/jvm.md
@@ -21,10 +21,10 @@ aliases:
[usage search]: /riak/kv/2.0.2/developing/usage/search
-If you are using [Riak Search 2.0][usage search], codename Yokozuna,
+If you are using [Riak search 2.0][usage search], codename Yokozuna,
you will need to install **Java 1.6 or later** to run [Apache
Solr](https://lucene.apache.org/solr/), the search platform that powers
-Riak Search.
+Riak search.
We recommend using Oracle's [JDK
7u25](http://www.oracle.com/technetwork/java/javase/7u25-relnotes-1955741.html).
@@ -35,7 +35,7 @@ page](http://www.oracle.com/technetwork/java/javase/documentation/index.html).
## Installing Solr on OS X
-If you're using Riak Search on Mac OS X, you may see the following
+If you're using Riak search on Mac OS X, you may see the following
error:
```java
diff --git a/content/riak/kv/2.0.2/setup/planning/backend/bitcask.md b/content/riak/kv/2.0.2/setup/planning/backend/bitcask.md
index cdcd5f4be4..52009ea2bc 100644
--- a/content/riak/kv/2.0.2/setup/planning/backend/bitcask.md
+++ b/content/riak/kv/2.0.2/setup/planning/backend/bitcask.md
@@ -750,7 +750,7 @@ bitcask.expiry.grace_time = 1h
#### Automatic expiration and Riak Search
-If you are using [Riak Search][usage search] in conjunction with
+If you are using [Riak search][usage search] in conjunction with
Bitcask, please be aware that automatic expiry does not apply to [Search Indexes](../../../../developing/usage/search). If objects are indexed using Search,
those objects can be expired by Bitcask yet still registered in Search
indexes, which means that Search queries may return keys that no longer
diff --git a/content/riak/kv/2.0.2/setup/upgrading/checklist.md b/content/riak/kv/2.0.2/setup/upgrading/checklist.md
index 0a005dd067..0af9c2b2ee 100644
--- a/content/riak/kv/2.0.2/setup/upgrading/checklist.md
+++ b/content/riak/kv/2.0.2/setup/upgrading/checklist.md
@@ -89,7 +89,7 @@ We have compiled these considerations and questions into separate categories for
place if `allow_mult` is set to `true`?
- Have you carefully weighed the [consistency trade-offs][concept eventual consistency] that must be made if `allow_mult` is set to `false`?
- Are all of your [apps replication properties][apps replication properties] configured correctly and uniformly across the cluster?
- - If you are using [Riak Search][usage search], is it enabled on all
+ - If you are using [Riak search][usage search], is it enabled on all
nodes? If you are not, has it been disabled on all nodes?
- If you are using [strong consistency][concept strong consistency] for some or all of your
data:
diff --git a/content/riak/kv/2.0.2/setup/upgrading/search.md b/content/riak/kv/2.0.2/setup/upgrading/search.md
index 31b049886d..030133bd09 100644
--- a/content/riak/kv/2.0.2/setup/upgrading/search.md
+++ b/content/riak/kv/2.0.2/setup/upgrading/search.md
@@ -16,7 +16,7 @@ version_history:
If you're using Search in a version of Riak prior to 2.0 (1.3.0 to
1.4.x), you should follow these steps to migrate your search indexes
-from the legacy `merge_index` to the new Solr-backed ([Yokozuna](../../../using/reference/search) indexes. The legacy version of Riak Search is now deprecated
+from the legacy `merge_index` to the new Solr-backed [Yokozuna](../../../using/reference/search) indexes. The legacy version of Riak search is now deprecated
and does not support most new 2.0 features, i.e. no [Riak Data Types](../../../developing/data-types), [bucket types](../../../using/reference/bucket-types), [strong consistency](../../../using/reference/strong-consistency), or [security](../../../using/security/), so we highly recommend that you migrate.
And please note that the legacy `merge_index`-based search (aka legacy
@@ -32,7 +32,7 @@ these steps at a time when your cluster is relatively light on traffic,
i.e. _not_ the week before Christmas.
The main goal of a live migration is to stand up indexes in the new Riak
-Search that parallel the existing ones in legacy. New writes add entries
+search that parallel the existing ones in legacy. New writes add entries
to both indexes while AAE adds entries in the new indexes for existing
data.
@@ -72,7 +72,7 @@ algorithm is bad at getting rid of large index files.
## Steps to Upgrading
1. First, you'll perform a normal [rolling upgrade](../cluster).
- As you upgrade, enable `yokozuna` (the new Riak Search library) on
+ As you upgrade, enable `yokozuna` (the new Riak search library) on
  each node. If you're still using `app.config`, it's called `yokozuna`.
If you've chosen to upgrade to the new `riak.conf` config option, it's
called `search`.
@@ -220,7 +220,7 @@ have occurred on every node.
node you know that AAE has brought all new indexes up to date.
7. Next, call the following command that will give HTTP and PB query
-control to the new Riak Search.
+control to the new Riak search.
```curl
riak-admin search switch-to-new-search
@@ -244,7 +244,7 @@ buckets. This deactivates legacy Search.
-d '{"props":{"search": false}}'
```
-9. Disable the Riak Search process on each node by setting `riak_search`
+9. Disable the Riak search process on each node by setting `riak_search`
`enabled` to `false`.
```appconfig
diff --git a/content/riak/kv/2.0.2/setup/upgrading/version.md b/content/riak/kv/2.0.2/setup/upgrading/version.md
index 7f3714565b..03fa44737a 100644
--- a/content/riak/kv/2.0.2/setup/upgrading/version.md
+++ b/content/riak/kv/2.0.2/setup/upgrading/version.md
@@ -36,7 +36,7 @@ was built with those features in mind. There are official
While we strongly recommend using the newest versions of these clients,
older versions will still work with Riak 2.0, with the drawback that
-those older clients will not able to take advantage of [new features](/riak/kv/2.0.2/introduction) like [data types](/riak/kv/2.0.2/developing/data-types) or the new [Riak Search](/riak/kv/2.0.2/using/reference/search).
+those older clients will not be able to take advantage of [new features](/riak/kv/2.0.2/introduction) like [data types](/riak/kv/2.0.2/developing/data-types) or the new [Riak search](/riak/kv/2.0.2/using/reference/search).
## Bucket Types
@@ -140,7 +140,7 @@ If you decide to upgrade to version 2.0, you can still downgrade your
cluster to an earlier version of Riak if you wish, _unless_ you perform
one of the following actions in your cluster:
-* Index data to be used in conjunction with the new [Riak Search](/riak/kv/2.0.2/using/reference/search).
+* Index data to be used in conjunction with the new [Riak search](/riak/kv/2.0.2/using/reference/search).
* Create _and_ activate one or more [bucket types](/riak/kv/2.0.2/using/reference/bucket-types/). By extension, you will not be able to downgrade your cluster if you have used the following features, both of which rely on bucket types:
- [Strong consistency](/riak/2.0.2/using/reference/strong-consistency)
- [Riak Data Types](/riak/kv/2.0.2/developing/data-types)
@@ -208,7 +208,7 @@ default to a value of `15`, which can cause problems in some clusters.
## Upgrading Search
-Information on upgrading Riak Search to 2.0 can be found in our
+Information on upgrading Riak search to 2.0 can be found in our
[Search upgrade guide](/riak/kv/2.0.2/setup/upgrading/search).
## Migrating from Short Names
diff --git a/content/riak/kv/2.0.2/using/admin/riak-admin.md b/content/riak/kv/2.0.2/using/admin/riak-admin.md
index 966070df5d..1a848612ca 100644
--- a/content/riak/kv/2.0.2/using/admin/riak-admin.md
+++ b/content/riak/kv/2.0.2/using/admin/riak-admin.md
@@ -589,7 +589,7 @@ riak-admin repair-2i kill
## search
The search command provides sub-commands for various administrative
-work related to the new Riak Search.
+work related to the new Riak search.
```bash
riak-admin search
@@ -638,7 +638,7 @@ riak-admin search switch-to-new-search
```
Switch handling of the HTTP `/solr//select` resource and
-protocol buffer query messages from legacy Riak Search to new Search
+protocol buffer query messages from legacy Riak search to new Search
(Yokozuna).
## services
diff --git a/content/riak/kv/2.0.2/using/cluster-operations/active-anti-entropy.md b/content/riak/kv/2.0.2/using/cluster-operations/active-anti-entropy.md
index 6f8d471752..8ca837c067 100644
--- a/content/riak/kv/2.0.2/using/cluster-operations/active-anti-entropy.md
+++ b/content/riak/kv/2.0.2/using/cluster-operations/active-anti-entropy.md
@@ -262,9 +262,9 @@ AAE-related background tasks, analogous to [open files limit](../../performance/
## AAE and Riak Search
Riak's AAE subsystem works to repair object inconsistencies both with
-for normal key/value objects as well as data related to [Riak Search](../../../developing/usage/search). In particular, AAE acts on indexes stored in
+normal key/value objects as well as data related to [Riak search](../../../developing/usage/search). In particular, AAE acts on indexes stored in
[Solr](http://lucene.apache.org/solr/), the search platform that drives
-Riak Search. Implementation details for AAE and Search can be found in
+Riak search. Implementation details for AAE and Search can be found in
the [Search Details](../../reference/search/#active-anti-entropy-aae)
documentation.
diff --git a/content/riak/kv/2.0.2/using/cluster-operations/inspecting-node.md b/content/riak/kv/2.0.2/using/cluster-operations/inspecting-node.md
index 8d42233291..379430cf77 100644
--- a/content/riak/kv/2.0.2/using/cluster-operations/inspecting-node.md
+++ b/content/riak/kv/2.0.2/using/cluster-operations/inspecting-node.md
@@ -297,7 +297,7 @@ Stat | Description
`erlydtl_version` | [ErlyDTL](http://github.com/erlydtl/erlydtl)
`riak_control_version` | [Riak Control](http://github.com/basho/riak_control)
`cluster_info_version` | [Cluster Information](http://github.com/basho/cluster_info)
-`riak_search_version` | [Riak Search](http://github.com/basho/riak_search)
+`riak_search_version` | [Riak search](http://github.com/basho/riak_search)
`merge_index_version` | [Merge Index](http://github.com/basho/merge_index)
`riak_kv_version` | [Riak KV](http://github.com/basho/riak_kv)
`sidejob_version` | [Sidejob](http://github.com/basho/sidejob)
@@ -326,17 +326,17 @@ Stat | Description
### Riak Search Statistics
-The following statistics related to Riak Search message queues are
+The following statistics related to Riak search message queues are
available.
Stat | Description
-----------------------------|---------------------------------------------------
-`riak_search_vnodeq_max` | Maximum number of unprocessed messages all virtual node (vnode) message queues in the Riak Search subsystem have received on this node in the last minute
-`riak_search_vnodeq_mean` | Mean number of unprocessed messages all vnode message queues in the Riak Search subsystem have received on this node in the last minute
-`riak_search_vnodeq_median` | Median number of unprocessed messages all vnode message queues in the Riak Search subsystem have received on this node in the last minute
-`riak_search_vnodeq_min` | Minimum number of unprocessed messages all vnode message queues in the Riak Search subsystem have received on this node in the last minute
-`riak_search_vnodeq_total` | Total number of unprocessed messages all vnode message queues in the Riak Search subsystem have received on this node since it was started
-`riak_search_vnodes_running` | Total number of vnodes currently running in the Riak Search subsystem
+`riak_search_vnodeq_max` | Maximum number of unprocessed messages all virtual node (vnode) message queues in the Riak search subsystem have received on this node in the last minute
+`riak_search_vnodeq_mean` | Mean number of unprocessed messages all vnode message queues in the Riak search subsystem have received on this node in the last minute
+`riak_search_vnodeq_median` | Median number of unprocessed messages all vnode message queues in the Riak search subsystem have received on this node in the last minute
+`riak_search_vnodeq_min` | Minimum number of unprocessed messages all vnode message queues in the Riak search subsystem have received on this node in the last minute
+`riak_search_vnodeq_total` | Total number of unprocessed messages all vnode message queues in the Riak search subsystem have received on this node since it was started
+`riak_search_vnodes_running` | Total number of vnodes currently running in the Riak search subsystem
Note that under ideal operation and with the exception of
`riak_search_vnodes_running` these statistics should contain low values
@@ -449,7 +449,7 @@ Check | Description
`ring_membership` | Cluster membership validity
`ring_preflists` | Check if the ring satisfies `n_val`
`ring_size` | Check if the ring size is valid
-`search` | Check whether Riak Search is enabled on all nodes
+`search` | Check whether Riak search is enabled on all nodes
The `--level` flag enables you to specify the log level and thus to
filter messages based on type. You can pass in any of the message types
diff --git a/content/riak/kv/2.0.2/using/cluster-operations/strong-consistency.md b/content/riak/kv/2.0.2/using/cluster-operations/strong-consistency.md
index 44aea9675f..0fa20a47f4 100644
--- a/content/riak/kv/2.0.2/using/cluster-operations/strong-consistency.md
+++ b/content/riak/kv/2.0.2/using/cluster-operations/strong-consistency.md
@@ -14,7 +14,7 @@ toc: true
Please Note:
-Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
## Monitoring Strong Consistency
diff --git a/content/riak/kv/2.0.2/using/reference/handoff.md b/content/riak/kv/2.0.2/using/reference/handoff.md
index cee5fe08ad..d2ae81f4bb 100644
--- a/content/riak/kv/2.0.2/using/reference/handoff.md
+++ b/content/riak/kv/2.0.2/using/reference/handoff.md
@@ -120,7 +120,7 @@ handoff.use_background_manager = on
### Maximum Rejects
-If you're using Riak features such as [Riak Search](/riak/kv/2.0.2/developing/usage/search/),
+If you're using Riak features such as [Riak search](/riak/kv/2.0.2/developing/usage/search/),
those subsystems can block handoff of primary key/value data, i.e. data
that you interact with via normal reads and writes.
diff --git a/content/riak/kv/2.0.2/using/reference/search.md b/content/riak/kv/2.0.2/using/reference/search.md
index 614cad7f30..50833d30e2 100644
--- a/content/riak/kv/2.0.2/using/reference/search.md
+++ b/content/riak/kv/2.0.2/using/reference/search.md
@@ -18,14 +18,14 @@ aliases:
> **Note on Search 2.0 vs. Legacy Search**
>
-> This document refers to the new Riak Search 2.0 with
+> This document refers to the new Riak search 2.0 with
[Solr](http://lucene.apache.org/solr/) integration (codenamed
-Yokozuna). For information about the deprecated Riak Search, visit [the old Using Riak Search docs](http://docs.basho.com/riak/1.4.10/dev/using/search/).
+Yokozuna). For information about the deprecated Riak search, visit [the old Using Riak search docs](http://docs.basho.com/riak/1.4.10/dev/using/search/).
-The project that implements Riak Search is codenamed Yokozuna. This is a
+The project that implements Riak search is codenamed Yokozuna. This is a
more detailed overview of the concepts and reasons behind the design of
Yokozuna, for those interested. If you're simply looking to use Riak
-Search, you should check out the [Using Search](/riak/kv/2.0.2/developing/usage/search) document.
+search, you should check out the [Using Search](/riak/kv/2.0.2/developing/usage/search) document.
@@ -34,30 +34,30 @@ Search, you should check out the [Using Search](/riak/kv/2.0.2/developing/usage/
In Erlang OTP, an "application" is a group of modules and Erlang
processes which together perform a specific task. The word application
is confusing because most people think of an application as an entire
-program such as Emacs or Photoshop. But Riak Search is just a sub-system
+program such as Emacs or Photoshop. But Riak search is just a sub-system
in Riak itself. Erlang applications are often stand-alone, but Riak
-Search is more like an appendage of Riak. It requires other subsystems
+search is more like an appendage of Riak. It requires other subsystems
like Riak Core and KV, but also extends their functionality by providing
search capabilities for KV data.
-The purpose of Riak Search is to bring more sophisticated and robust
+The purpose of Riak search is to bring more sophisticated and robust
query and search support to Riak. Many people consider Lucene and
programs built on top of it, such as Solr, as the standard for
open-source search. There are many successful applications built on
Lucene/Solr, and it sets the standard for the feature set that
developers and users expect. Meanwhile, Riak has a great story as a
-highly-available, distributed key/value store. Riak Search takes
+highly-available, distributed key/value store. Riak search takes
advantage of the fact that Riak already knows how to do the distributed
bits, combining its feature set with that of Solr, taking advantage of
the strengths of each.
-Riak Search is a mediator between Riak and Solr. There is nothing
+Riak search is a mediator between Riak and Solr. There is nothing
stopping a user from deploying these two programs separately, but this
would leave the user responsible for the glue between them. That glue
can be tricky to write. It requires dealing with monitoring, querying,
indexing, and dissemination of information.
-Unlike Solr by itself, Riak Search knows how to do all of the following:
+Unlike Solr by itself, Riak search knows how to do all of the following:
* Listen for changes in key/value (KV) data and to make the appropriate
changes to indexes that live in Solr. It also knows how to take a user
@@ -74,13 +74,13 @@ system (OS) process running a JVM which hosts Solr on the Jetty
application server. This OS process is a child of the Erlang OS process
running Riak.
-Riak Search has a `gen_server` process which monitors the JVM OS
+Riak search has a `gen_server` process which monitors the JVM OS
process. The code for this server is in `yz_solr_proc`. When the JVM
process crashes, this server crashes, causing its supervisor to restart
it.
If there is more than 1 restart in 45 seconds, the entire Riak node will
-be shut down. If Riak Search is enabled and Solr cannot function for
+be shut down. If Riak search is enabled and Solr cannot function for
some reason, the Riak node needs to go down so that the user will notice
and take corrective action.
@@ -90,7 +90,7 @@ This double monitoring along with the crash semantics means that neither
process may exist without the other. They are either both up or both
down.
-All other communication between Riak Search and Solr is performed via
+All other communication between Riak search and Solr is performed via
HTTP, including querying, indexing, and administration commands. The
ibrowse Erlang HTTP client is used to manage these communications as
both it and the Jetty container hosting Solr pool HTTP connections,
@@ -104,11 +104,11 @@ contains index entries for objects. Each such index maintains its own
set of files on disk---a critical difference from Riak KV, in which a
bucket is a purely logical entity and not physically disjoint at all. A
Solr index requires significantly less disk space than the corresponding
-legacy Riak Search index, depending on the Solr schema used.
+legacy Riak search index, depending on the Solr schema used.
Indexes may be associated with zero or more buckets. At creation time,
however, each index has no associated buckets---unlike the legacy Riak
-Search, indexes in the new Riak Search do not implicitly create bucket
+search, indexes in the new Riak search do not implicitly create bucket
associations, meaning that this must be done as a separate configuration
step.
@@ -140,7 +140,7 @@ flat collection of field-value pairs. "Flat" here means that a field's
value cannot be a nested structure of field-value pairs; the values are
treated as-is (non-composite is another way to say it).
-Because of this mismatch between KV and Solr, Riak Search must act as a
+Because of this mismatch between KV and Solr, Riak search must act as a
mediator between the two, meaning it must have a way to inspect a KV
object and create a structure which Solr can ingest for indexing. In
Solr this structure is called a **document**. This task of creating a
diff --git a/content/riak/kv/2.0.2/using/reference/secondary-indexes.md b/content/riak/kv/2.0.2/using/reference/secondary-indexes.md
index fa5adeef0e..1d5d884252 100644
--- a/content/riak/kv/2.0.2/using/reference/secondary-indexes.md
+++ b/content/riak/kv/2.0.2/using/reference/secondary-indexes.md
@@ -17,11 +17,11 @@ aliases:
[usage bucket types]: /riak/kv/2.0.2/developing/usage/bucket-types
[use ref strong consistency]: /riak/2.0.2/using/reference/strong-consistency
-> **Note: Riak Search preferred for querying**
+> **Note: Riak search preferred for querying**
>
> If you're interested in non-primary-key-based querying in Riak, i.e. if
you're looking to go beyond straightforward K/V operations, we now
-recommend [Riak Search](/riak/kv/2.0.2/developing/usage/search/) rather than secondary indexes for a variety of reasons. Riak Search has a far more capacious querying API and can be used with all of Riak's storage backends.
+recommend [Riak search](/riak/kv/2.0.2/developing/usage/search/) rather than secondary indexes for a variety of reasons. Riak search has a far more capacious querying API and can be used with all of Riak's storage backends.
This document provides implementation and other details for Riak's
[secondary indexes](/riak/kv/2.0.2/developing/usage/secondary-indexes/) \(2i) feature.
diff --git a/content/riak/kv/2.0.2/using/reference/statistics-monitoring.md b/content/riak/kv/2.0.2/using/reference/statistics-monitoring.md
index d7961648f5..66f03d7538 100644
--- a/content/riak/kv/2.0.2/using/reference/statistics-monitoring.md
+++ b/content/riak/kv/2.0.2/using/reference/statistics-monitoring.md
@@ -133,7 +133,7 @@ Metric | Also | Notes
:------|:-----|:------------------
```node_get_fsm_siblings_mean``` | ```_median```, ```_95```, ```_99```, ```_100``` | Number of siblings encountered during all GET operations by this node within the last minute. Watch for abnormally high sibling counts, especially max ones.
```node_get_fsm_objsize_mean``` | ```_median```, ```_95```, ```_99```, ```_100``` | Object size encountered by this node within the last minute. Abnormally large objects (especially paired with high sibling counts) can indicate sibling explosion.
-```riak_search_vnodeq_mean``` | ```_median```, ```_95```, ```_99```, ```_100``` | Number of unprocessed messages in the vnode message queues of the Riak Search subsystem on this node in the last minute. The queues give you an idea of how backed up Solr is getting.
+```riak_search_vnodeq_mean``` | ```_median```, ```_95```, ```_99```, ```_100``` | Number of unprocessed messages in the vnode message queues of the Riak search subsystem on this node in the last minute. The queues give you an idea of how backed up Solr is getting.
```search_index_fail_one``` | | Number of "Failed to index document" errors Search encountered for the last minute
```pbc_active``` | | Number of currently active protocol buffer connections
```pbc_connects``` | | Number of new protocol buffer connections established during the last minute
diff --git a/content/riak/kv/2.0.2/using/repair-recovery/errors.md b/content/riak/kv/2.0.2/using/repair-recovery/errors.md
index ef220ca2cf..cf7cd48f60 100644
--- a/content/riak/kv/2.0.2/using/repair-recovery/errors.md
+++ b/content/riak/kv/2.0.2/using/repair-recovery/errors.md
@@ -327,7 +327,7 @@ exit with reason bad return value: {error,eaddrinuse} in context start_error | A
exited with reason: eaddrnotavail in gen_server:init_it/6 line 320 | An error like this example can result when Riak cannot bind to the addresses specified in the configuration. In this case, you should verify HTTP and Protocol Buffers addresses in `app.config` and ensure that the ports being used are not in the privileged (1-1024) range as the `riak` user will not have access to such ports.
gen_server riak_core_capability terminated with reason: no function clause matching orddict:fetch('riak@192.168.2.2', []) line 72 | Error output like this example can indicate that a previously running Riak node with an original `-name` value in `vm.args` has been modified by simply changing the value in `vm.args` and not properly through `riak-admin cluster replace`.
** Configuration error: [FRAMEWORK-MIB]: missing context.conf file => generating a default file | This error is commonly encountered when starting Riak Enterprise without prior [SNMP](/riak/kv/2.0.2/using/reference/snmp) configuration.
-RPC to 'node@example.com' failed: {'EXIT', {badarg, [{ets,lookup, [schema_table,<<"search-example">>], []} {riak_search_config,get_schema,1, [{file,"src/riak_search_config.erl"}, {line,69}]} ...| This error can be caused when attempting to use Riak Search without first enabling it in each node's `app.config`. See the [configuration files][config reference] documentation for more information on enabling Riak Search.
+RPC to 'node@example.com' failed: {'EXIT', {badarg, [{ets,lookup, [schema_table,<<"search-example">>], []} {riak_search_config,get_schema,1, [{file,"src/riak_search_config.erl"}, {line,69}]} ...| This error can be caused when attempting to use Riak search without first enabling it in each node's `app.config`. See the [configuration files][config reference] documentation for more information on enabling Riak search.
### More
diff --git a/content/riak/kv/2.0.2/using/repair-recovery/repairs.md b/content/riak/kv/2.0.2/using/repair-recovery/repairs.md
index 09746fb8ca..c806cc9b52 100644
--- a/content/riak/kv/2.0.2/using/repair-recovery/repairs.md
+++ b/content/riak/kv/2.0.2/using/repair-recovery/repairs.md
@@ -53,7 +53,7 @@ riak-admin repair-2i kill
## Repairing Search Indexes
-Riak Search indexes currently have no form of anti-entropy (such as read-repair). Furthermore, for performance and load balancing reasons, Search reads from one random node. This means that when a replica loss has occurred, inconsistent results may be returned.
+Riak search indexes currently have no form of anti-entropy (such as read-repair). Furthermore, for performance and load balancing reasons, Search reads from one random node. This means that when a replica loss has occurred, inconsistent results may be returned.
### Running a Repair
diff --git a/content/riak/kv/2.0.2/using/repair-recovery/secondary-indexes.md b/content/riak/kv/2.0.2/using/repair-recovery/secondary-indexes.md
index 5eabd02aab..6aa5496dc1 100644
--- a/content/riak/kv/2.0.2/using/repair-recovery/secondary-indexes.md
+++ b/content/riak/kv/2.0.2/using/repair-recovery/secondary-indexes.md
@@ -51,7 +51,7 @@ riak-admin repair-2i kill
## Repairing Search Indexes
-Riak Search indexes currently have no form of anti-entropy (such as read-repair). Furthermore, for performance and load balancing reasons, Search reads from one random node. This means that when a replica loss has occurred, inconsistent results may be returned.
+Riak search indexes currently have no form of anti-entropy (such as read-repair). Furthermore, for performance and load balancing reasons, Search reads from one random node. This means that when a replica loss has occurred, inconsistent results may be returned.
### Running a Repair
diff --git a/content/riak/kv/2.0.2/using/security/basics.md b/content/riak/kv/2.0.2/using/security/basics.md
index 7b6e9b870c..6b688c77d7 100644
--- a/content/riak/kv/2.0.2/using/security/basics.md
+++ b/content/riak/kv/2.0.2/using/security/basics.md
@@ -44,7 +44,7 @@ when turning on Riak security. Missing one of these steps will almost
certainly break your application, so make sure that you have done each
of the following **before** enabling security:
-1. Make certain that the original Riak Search (version 1) and link
+1. Make certain that the original Riak search (version 1) and link
walking are not required. Enabling security will break this
functionality. If you wish to use security and Search together, you
will need to use the [new Search feature](/riak/kv/2.0.2/developing/usage/search/).
@@ -487,11 +487,11 @@ Permission | Operation
### Search Query Permission (Riak Search version 1)
Security is incompatible with the original (and now deprecated) Riak
-Search. Riak Search version 1 will stop working if security is enabled.
+search. Riak search version 1 will stop working if security is enabled.
### Search Query Permissions (Riak Search version 2, aka Yokozuna)
-If you are using the new Riak Search, i.e. the Solr-compatible search
+If you are using the new Riak search, i.e. the Solr-compatible search
capabilities included with Riak versions 2.0 and greater, the following
search-related permissions can be granted/revoked:
@@ -508,8 +508,8 @@ disabled, you will get the following error:
>
> `{error,{unknown_permission,"search.query"}}`
>
-> More information on Riak Search and how to enable it can be found in the
-[Riak Search Settings](/riak/kv/2.0.2/configuring/search/) document.
+> More information on Riak search and how to enable it can be found in the
+[Riak search settings](/riak/kv/2.0.2/configuring/search/) document.
#### Usage Examples
diff --git a/content/riak/kv/2.0.4/configuring/reference.md b/content/riak/kv/2.0.4/configuring/reference.md
index a593caea14..4614a80c77 100644
--- a/content/riak/kv/2.0.4/configuring/reference.md
+++ b/content/riak/kv/2.0.4/configuring/reference.md
@@ -273,7 +273,7 @@ stored
search.root_dir |
-The root directory for Riak Search, under which index data and
+ | The root directory for Riak search, under which index data and
configuration is stored. |
./data/yz |
@@ -1386,7 +1386,7 @@ Configurable parameters for intra-cluster, i.e. inter-node, [handoff][cluster op
handoff.max_rejects |
The maximum number of times that a secondary system within Riak,
-such as Riak Search, can block handoff
+such as Riak search, can block handoff
of primary key/value data. The approximate maximum duration that a vnode
can be blocked can be determined by multiplying this setting by
vnode_management_timer . If you want to prevent handoff from
@@ -1630,7 +1630,7 @@ if the JMX server crashes. |
> **Please Note:**
>
-> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
Riak's strong consistency feature has a variety of tunable parameters
that allow you to enable and disable strong consistency, modify the
@@ -2053,7 +2053,7 @@ only in Riak Enterprise 2.0 and later.
#### Upgrading Riak Search with `advanced.config`
-If you are upgrading to Riak 2.x and wish to upgrade to the new [Riak Search][use ref search]\(codename Yokozuna), you will need to enable
+If you are upgrading to Riak 2.x and wish to upgrade to the new [Riak search][use ref search] \(codename Yokozuna), you will need to enable
legacy Search while the upgrade is underway. You can add the following
snippet to your `advanced.config` configuration to do so:
diff --git a/content/riak/kv/2.0.4/configuring/search.md b/content/riak/kv/2.0.4/configuring/search.md
index 3e71d3e77f..ee91a08902 100644
--- a/content/riak/kv/2.0.4/configuring/search.md
+++ b/content/riak/kv/2.0.4/configuring/search.md
@@ -26,9 +26,9 @@ aliases:
> **Note on Search 2.0 vs. Legacy Search**
>
-> This document refers to the new Riak Search 2.0 with
+> This document refers to the new Riak search 2.0 with
[Solr](http://lucene.apache.org/solr/) integration (codenamed
-Yokozuna). For information about the deprecated Riak Search, visit [the old Riak Search Settings](http://docs.basho.com/riak/1.4.8/ops/advanced/configs/search/).
+Yokozuna). For information about the deprecated Riak search, visit [the old Riak search settings](http://docs.basho.com/riak/1.4.8/ops/advanced/configs/search/).
This document covers Riak's Search subsystem from an
operational perspective. If you are looking for more developer-focused
@@ -41,7 +41,7 @@ docs, we recommend the following:
## Enabling Riak Search
-Although Riak Search is integrated into Riak and requires no special
+Although Riak search is integrated into Riak and requires no special
installation, it is not enabled by default. You must enable it in every
node's [configuration files][config reference] as follows:
@@ -68,7 +68,7 @@ optional. A list of these parameters can also be found in our
Field | Default | Valid values | Description
:-----|:--------|:-------------|:-----------
`search` | `off` | `on` or `off` | Enable or disable Search
-`search.anti_entropy.data_dir` | `./data/yz_anti_entropy` | Directory | The directory in which Riak Search stores files related to [active anti-entropy][glossary aae]
+`search.anti_entropy.data_dir` | `./data/yz_anti_entropy` | Directory | The directory in which Riak search stores files related to [active anti-entropy][glossary aae]
`search.root_dir` | `./data/yz` | Directory | The root directory in which index data and configuration is stored
`search.solr.start_timeout` | `30s` | Integer with time units (e.g. 2m) | How long Riak will wait for Solr to start (attempts twice before shutdown). Values lower than 1s will be rounded up to 1s.
`search.solr.port` | `8093` | Integer | The port number to which Solr binds (note: binds on every interface)
@@ -81,7 +81,7 @@ cause Solr to require more time to start.
## Solr JVM and Ports
-Riak Search runs one Solr process per node to manage its indexing and
+Riak search runs one Solr process per node to manage its indexing and
search functionality. While the underlying project, Yokozuna, manages
index distribution, node coverage for queries, active anti-entropy
(AAE), and JVM process management, you should provide plenty of RAM and disk space for running both Riak and the JVM running Solr. We recommend a minimum of 6GB of RAM per node.
diff --git a/content/riak/kv/2.0.4/configuring/strong-consistency.md b/content/riak/kv/2.0.4/configuring/strong-consistency.md
index 9ab903a500..4325c6db95 100644
--- a/content/riak/kv/2.0.4/configuring/strong-consistency.md
+++ b/content/riak/kv/2.0.4/configuring/strong-consistency.md
@@ -39,7 +39,7 @@ toc: true
> **Please Note:**
>
-> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
This document provides information on configuring and monitoring a Riak
cluster's optional strong consistency subsystem. Documentation for
diff --git a/content/riak/kv/2.0.4/developing/api/http/delete-search-index.md b/content/riak/kv/2.0.4/developing/api/http/delete-search-index.md
index 5ed1d975b3..87c7da4778 100644
--- a/content/riak/kv/2.0.4/developing/api/http/delete-search-index.md
+++ b/content/riak/kv/2.0.4/developing/api/http/delete-search-index.md
@@ -15,7 +15,7 @@ aliases:
- /riak/kv/2.0.4/dev/references/http/delete-search-index
---
-Deletes a Riak Search index.
+Deletes a Riak search index.
## Request
diff --git a/content/riak/kv/2.0.4/developing/api/http/fetch-search-index.md b/content/riak/kv/2.0.4/developing/api/http/fetch-search-index.md
index a59e22dbcf..32dfd25489 100644
--- a/content/riak/kv/2.0.4/developing/api/http/fetch-search-index.md
+++ b/content/riak/kv/2.0.4/developing/api/http/fetch-search-index.md
@@ -15,7 +15,7 @@ aliases:
- /riak/kv/2.0.4/dev/references/http/fetch-search-index
---
-Retrieves information about a Riak Search [index](/riak/kv/2.0.4/developing/usage/search/#simple-setup).
+Retrieves information about a Riak search [index](/riak/kv/2.0.4/developing/usage/search/#simple-setup).
## Request
diff --git a/content/riak/kv/2.0.4/developing/api/http/fetch-search-schema.md b/content/riak/kv/2.0.4/developing/api/http/fetch-search-schema.md
index aa22782ff2..4f8c188889 100644
--- a/content/riak/kv/2.0.4/developing/api/http/fetch-search-schema.md
+++ b/content/riak/kv/2.0.4/developing/api/http/fetch-search-schema.md
@@ -35,4 +35,4 @@ GET /search/schema/
## Response
If the schema is found, Riak will return the contents of the schema as
-XML (all Riak Search schemas are XML).
+XML (all Riak search schemas are XML).
diff --git a/content/riak/kv/2.0.4/developing/api/http/search-index-info.md b/content/riak/kv/2.0.4/developing/api/http/search-index-info.md
index 765c454e2e..071b704e4f 100644
--- a/content/riak/kv/2.0.4/developing/api/http/search-index-info.md
+++ b/content/riak/kv/2.0.4/developing/api/http/search-index-info.md
@@ -47,6 +47,6 @@ Below is the example output if there is one Search index, called
#### Typical Error Codes
-* `404 Object Not Found` --- Typically returned if Riak Search is not
+* `404 Object Not Found` --- Typically returned if Riak search is not
currently enabled on the node
* `503 Service Unavailable` --- The request timed out internally
diff --git a/content/riak/kv/2.0.4/developing/api/http/store-search-index.md b/content/riak/kv/2.0.4/developing/api/http/store-search-index.md
index b4aff0289e..9a05211e5c 100644
--- a/content/riak/kv/2.0.4/developing/api/http/store-search-index.md
+++ b/content/riak/kv/2.0.4/developing/api/http/store-search-index.md
@@ -15,7 +15,7 @@ aliases:
- /riak/kv/2.0.4/dev/references/http/store-search-index
---
-Creates a new Riak Search [index](/riak/kv/2.0.4/developing/usage/search/#simple-setup).
+Creates a new Riak search [index](/riak/kv/2.0.4/developing/usage/search/#simple-setup).
## Request
diff --git a/content/riak/kv/2.0.4/developing/api/http/store-search-schema.md b/content/riak/kv/2.0.4/developing/api/http/store-search-schema.md
index 96a2b26326..c94578fd06 100644
--- a/content/riak/kv/2.0.4/developing/api/http/store-search-schema.md
+++ b/content/riak/kv/2.0.4/developing/api/http/store-search-schema.md
@@ -44,7 +44,7 @@ curl -XPUT http://localhost:8098/search/schema/my_custom_schema \
* `400 Bad Request` --- The schema cannot be created because there is
something wrong with the schema itself, e.g. an XML formatting error
- that makes Riak Search unable to parse the schema
+ that makes Riak search unable to parse the schema
* `409 Conflict` --- The schema cannot be created because there is
already a schema with that name
* `503 Service Unavailable` --- The request timed out internally
diff --git a/content/riak/kv/2.0.4/developing/api/protocol-buffers/yz-index-get.md b/content/riak/kv/2.0.4/developing/api/protocol-buffers/yz-index-get.md
index 8d2d65466c..0768b513e5 100644
--- a/content/riak/kv/2.0.4/developing/api/protocol-buffers/yz-index-get.md
+++ b/content/riak/kv/2.0.4/developing/api/protocol-buffers/yz-index-get.md
@@ -15,7 +15,7 @@ aliases:
- /riak/kv/2.0.4/dev/references/protocol-buffers/yz-index-get
---
-Retrieve a search index from Riak Search.
+Retrieve a search index from Riak search.
## Request
diff --git a/content/riak/kv/2.0.4/developing/api/protocol-buffers/yz-schema-get.md b/content/riak/kv/2.0.4/developing/api/protocol-buffers/yz-schema-get.md
index c4e693af30..6752ab0209 100644
--- a/content/riak/kv/2.0.4/developing/api/protocol-buffers/yz-schema-get.md
+++ b/content/riak/kv/2.0.4/developing/api/protocol-buffers/yz-schema-get.md
@@ -15,7 +15,7 @@ aliases:
- /riak/kv/2.0.4/dev/references/protocol-buffers/yz-schema-get
---
-Fetch a [search schema](/riak/kv/2.0.4/developing/usage/search-schemas) from Riak Search.
+Fetch a [search schema](/riak/kv/2.0.4/developing/usage/search-schemas) from Riak search.
## Request
diff --git a/content/riak/kv/2.0.4/developing/app-guide.md b/content/riak/kv/2.0.4/developing/app-guide.md
index 9d1842246b..4c410c8c68 100644
--- a/content/riak/kv/2.0.4/developing/app-guide.md
+++ b/content/riak/kv/2.0.4/developing/app-guide.md
@@ -147,22 +147,22 @@ well as relevant links to Basho documentation.
## Search
-Riak Search provides you with [Apache
+Riak search provides you with [Apache
Solr](http://lucene.apache.org/solr/)-powered full-text indexing and
querying on top of the scalability, fault tolerance, and operational
-simplicity of Riak. Our motto for Riak Search: **Write it like Riak.
+simplicity of Riak. Our motto for Riak search: **Write it like Riak.
Query it like Solr**. That is, you can store objects in Riak [like normal][usage create objects] and run full-text queries on those objects later on
using the Solr API.
-* [Using Search][usage search] --- Getting started with Riak Search
+* [Using Search][usage search] --- Getting started with Riak search
* [Search Details][use ref search] --- A detailed overview of the concepts and design
- consideration behind Riak Search
+ considerations behind Riak search
* [Search Schema][usage search schema] --- How to create custom schemas for extracting data
- from Riak Search
+ from Riak search
### When to Use Search
-* **When you need a rich querying API** --- Riak Search gives you access
+* **When you need a rich querying API** --- Riak search gives you access
to the entirety of [Solr](http://lucene.apache.org/solr/)'s extremely
broad API, which enables you to query on the basis of wildcards,
strings, booleans, geolocation, ranges, language-specific fulltext,
@@ -214,7 +214,7 @@ own.
> **Note**:
>
-> Riak Data Types can be used in conjunction with Riak Search,
+> Riak Data Types can be used in conjunction with Riak search,
meaning that the data stored in counters, sets, and maps can be indexed
and searched just like any other data in Riak. Documentation on Data
Types and Search is coming soon.
@@ -278,13 +278,13 @@ or you can write and run your own MapReduce jobs in
### When Not to Use MapReduce
* **When another Riak feature will do** --- Before even considering
- using MapReduce, you should thoroughly investigate [Riak Search][usage search] or [secondary indexes][usage 2i] as possible
+ using MapReduce, you should thoroughly investigate [Riak search][usage search] or [secondary indexes][usage 2i] as possible
solutions to your needs.
In general, you should not think of MapReduce as, for example, Hadoop
within Riak. While it can be useful for certain types of
non-primary-key-based queries, it is neither a "Big Data" processing
-tool nor an indexing mechanism nor a replacement for [Riak Search][usage search]. If you do need a tool like Hadoop or Apache Spark, you should
+tool nor an indexing mechanism nor a replacement for [Riak search][usage search]. If you do need a tool like Hadoop or Apache Spark, you should
consider using Riak in conjunction with a more suitable data processing
tool.
@@ -295,7 +295,7 @@ following problem: how do I know which keys I should look for? Secondary
indexes (2i) provide a solution to this problem, enabling you to tag
objects with either binary or integer metadata and then query Riak for
all of the keys that share specific tags. 2i is especially useful if
-you're storing binary data that is opaque to features like [Riak Search][usage search].
+you're storing binary data that is opaque to features like [Riak search][usage search].
* [Using Secondary Indexes][usage 2i] --- A general guide to using 2i, along
with code samples and information on 2i features like pagination,
@@ -323,7 +323,7 @@ you're storing binary data that is opaque to features like [Riak Search][usage s
One thing to always bear in mind is that Riak enables you to mix and
match a wide variety of approaches in a single cluster. You can use
basic CRUD operations for some of your data, index some of your data to
-be queried by Riak Search, use Riak Data Types for another subset, etc.
+be queried by Riak search, use Riak Data Types for another subset, etc.
You are always free to use a wide array of Riak features---or you can
use none at all and stick to key/value operations.
diff --git a/content/riak/kv/2.0.4/developing/app-guide/advanced-mapreduce.md b/content/riak/kv/2.0.4/developing/app-guide/advanced-mapreduce.md
index 066dea0fcc..beee4b1f8d 100644
--- a/content/riak/kv/2.0.4/developing/app-guide/advanced-mapreduce.md
+++ b/content/riak/kv/2.0.4/developing/app-guide/advanced-mapreduce.md
@@ -74,7 +74,7 @@ MapReduce should generally be treated as a fallback rather than a
standard part of an application. There are often ways to model data
such that dynamic queries become single key retrievals, which are
dramatically faster and more reliable in Riak, and tools such as Riak
-Search and 2i are simpler to use and may place less strain on a
+search and 2i are simpler to use and may place less strain on a
cluster.
### R=1
diff --git a/content/riak/kv/2.0.4/developing/app-guide/strong-consistency.md b/content/riak/kv/2.0.4/developing/app-guide/strong-consistency.md
index 1df0190e7f..fa763839e7 100644
--- a/content/riak/kv/2.0.4/developing/app-guide/strong-consistency.md
+++ b/content/riak/kv/2.0.4/developing/app-guide/strong-consistency.md
@@ -37,7 +37,7 @@ aliases:
> **Please Note:**
>
-> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
In versions 2.0 and later, Riak allows you to create buckets that
provide [strong consistency][use ref strong consistency] guarantees for the data stored within
diff --git a/content/riak/kv/2.0.4/developing/data-modeling.md b/content/riak/kv/2.0.4/developing/data-modeling.md
index 7c9e6aabf0..448b0a003a 100644
--- a/content/riak/kv/2.0.4/developing/data-modeling.md
+++ b/content/riak/kv/2.0.4/developing/data-modeling.md
@@ -140,7 +140,7 @@ For storing log data from different systems, you could use unique
buckets for each system (e.g. `system1_log_data`, `system2_log_data`,
etc.) and write associated logs to the corresponding buckets. To
analyze that data, you could use Riak's MapReduce system for aggregation
-tasks, such as summing the counts of records for a date or Riak Search
+tasks, such as summing the counts of records for a date or Riak search
for more robust, text-based queries.
### Log Data Complex Case
@@ -224,7 +224,7 @@ For simple retrieval of a specific account, a user ID (plus perhaps a
secondary index on a username or email) is enough. If you foresee the
need to make queries on additional user attributes (e.g. creation time,
user type, or region), plan ahead and either set up additional secondary
-indexes or consider using [Riak Search](/riak/kv/2.0.4/developing/usage/search/) to index the JSON
+indexes or consider using [Riak search](/riak/kv/2.0.4/developing/usage/search/) to index the JSON
contents of the user account.
### User Accounts Community Examples
@@ -310,7 +310,7 @@ In Riak, you can store content of any kind, from HTML files to plain
text to JSON or XML or another document type entirely. Keep in mind that
data in Riak is opaque, with the exception of [Riak Data Types](/riak/kv/2.0.4/developing/data-types),
and so Riak won't "know" about the object unless it is indexed
-[using Riak Search](/riak/kv/2.0.4/developing/usage/search/) or [using secondary indexes](/riak/kv/2.0.4/developing/usage/secondary-indexes/).
+[using Riak search](/riak/kv/2.0.4/developing/usage/search/) or [using secondary indexes](/riak/kv/2.0.4/developing/usage/secondary-indexes/).
### Articles et al Complex Case
@@ -329,7 +329,7 @@ with comments would require your application to call from the posts
and comments buckets to assemble the view.
Other possible cases may involve performing operations on content beyond
-key/value pairs. [Riak Search](/riak/kv/2.0.4/developing/usage/search/) is recommended for use cases
+key/value pairs. [Riak search](/riak/kv/2.0.4/developing/usage/search/) is recommended for use cases
involving full-text search. For lighter-weight querying,
[using secondary indexes](/riak/kv/2.0.4/developing/usage/secondary-indexes/) \(2i) enables you to add metadata to objects to
either query for exact matches or to perform range queries. 2i also
diff --git a/content/riak/kv/2.0.4/developing/data-types.md b/content/riak/kv/2.0.4/developing/data-types.md
index 2154d0e10e..9ac63eda47 100644
--- a/content/riak/kv/2.0.4/developing/data-types.md
+++ b/content/riak/kv/2.0.4/developing/data-types.md
@@ -259,7 +259,7 @@ All the examples use the bucket type names from above (`counters`, `sets`, and `
Riak data types can be searched like any other object, but with the
added benefit that your data type is indexed as a different type by Solr,
-the search platform behind Riak Search.
+the search platform behind Riak search.
In our Search documentation we offer a [full tutorial](../usage/searching-data-types) as well as a variety of [examples](../usage/search/#data-types-and-search-examples), including code
samples from each of our official client libraries.
diff --git a/content/riak/kv/2.0.4/developing/usage.md b/content/riak/kv/2.0.4/developing/usage.md
index 157c3d85d5..88427cf9ae 100644
--- a/content/riak/kv/2.0.4/developing/usage.md
+++ b/content/riak/kv/2.0.4/developing/usage.md
@@ -107,7 +107,7 @@ Tutorial on using Riak KV as a document store.
#### [Custom Extractors](./custom-extractors)
-Details on creating and registering custom extractors with Riak Search.
+Details on creating and registering custom extractors with Riak search.
[Learn More >>](./custom-extractors)
diff --git a/content/riak/kv/2.0.4/developing/usage/custom-extractors.md b/content/riak/kv/2.0.4/developing/usage/custom-extractors.md
index 35d4044a4e..01d1ff9e85 100644
--- a/content/riak/kv/2.0.4/developing/usage/custom-extractors.md
+++ b/content/riak/kv/2.0.4/developing/usage/custom-extractors.md
@@ -15,8 +15,8 @@ aliases:
- /riak/kv/2.0.4/dev/search/custom-extractors
---
-Solr, and by extension Riak Search, has default extractors for a wide
-variety of data types, including JSON, XML, and plaintext. Riak Search
+Solr, and by extension Riak search, has default extractors for a wide
+variety of data types, including JSON, XML, and plaintext. Riak search
ships with the following extractors:
Content Type | Erlang Module
@@ -30,7 +30,7 @@ No specified type | `yz_noop_extractor`
There are also built-in extractors for [Riak Data Types](/riak/kv/2.0.4/developing/usage/searching-data-types).
If you're working with a data format that does not have a default Solr
-extractor, you can create your own and register it with Riak Search.
+extractor, you can create your own and register it with Riak search.
We'll show you how to do so by way of example.
## The Extractor Interface
@@ -195,7 +195,7 @@ extractor has been successfully registered.
## Verifying Our Custom Extractor
-Now that Riak Search knows how to decode and extract HTTP header packet
+Now that Riak search knows how to decode and extract HTTP header packet
data, let's store some in Riak and then query it. We'll put the example
packet data from above in a `google_packet.bin` file. Then, we'll `PUT`
that binary to Riak's `/search/extract` endpoint:
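The `PUT` that this hunk leads into can be sketched roughly as follows. This is a hedged sketch rather than the tutorial's verbatim command: `RIAK_HOST` and the `application/httpheader` content type are assumptions carried over from the custom-extractor example, and the live `curl` call is left commented out.

```bash
# Hedged sketch of the verification step. Assumes RIAK_HOST points at a
# local node and google_packet.bin holds the example packet from above.
RIAK_HOST="http://localhost:8098"
EXTRACT_URL="$RIAK_HOST/search/extract"
echo "$EXTRACT_URL"
# Uncomment against a live node; application/httpheader is assumed to be
# the custom content type registered earlier in this tutorial:
# curl -XPUT "$EXTRACT_URL" \
#   -H 'Content-Type: application/httpheader' \
#   --data-binary @google_packet.bin
```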
diff --git a/content/riak/kv/2.0.4/developing/usage/document-store.md b/content/riak/kv/2.0.4/developing/usage/document-store.md
index af9fe5a5f4..bd751f6ad4 100644
--- a/content/riak/kv/2.0.4/developing/usage/document-store.md
+++ b/content/riak/kv/2.0.4/developing/usage/document-store.md
@@ -16,18 +16,18 @@ aliases:
---
Although Riak wasn't explicitly created as a document store, two
-features recently added to Riak---[Riak Search](/riak/kv/2.0.4/developing/usage/search/) and [Riak Data Types](/riak/kv/2.0.4/developing/data-types/)---make it possible to use Riak as a
+features recently added to Riak---[Riak search](/riak/kv/2.0.4/developing/usage/search/) and [Riak Data Types](/riak/kv/2.0.4/developing/data-types/)---make it possible to use Riak as a
highly scalable document store with rich querying capabilities. In this
tutorial, we'll build a basic implementation of a document store using
[Riak maps](/riak/kv/2.0.4/developing/data-types/maps).
## Basic Approach
-Riak Search enables you to implement a document store in Riak in a
+Riak search enables you to implement a document store in Riak in a
variety of ways. You could, for example, store and query JSON objects or
XML and then retrieve them later via Solr queries. In this tutorial,
however, we will store data in [Riak maps](/riak/kv/2.0.4/developing/data-types/maps),
-index that data using Riak Search, and then run Solr queries against
+index that data using Riak search, and then run Solr queries against
those stored objects.
You can think of these Search indexes as **collections**. Each indexed
@@ -65,7 +65,7 @@ Date posted | Register | Datetime
Whether the post is currently in draft form | Flag | Boolean
Before we start actually creating and storing blog posts, let's set up
-Riak Search with an appropriate index and schema.
+Riak search with an appropriate index and schema.
## Creating a Schema and Index
@@ -209,7 +209,7 @@ curl -XPUT $RIAK_HOST/search/index/blog_posts \
Collections are not a concept that is native to Riak but we can easily
mimic collections by thinking of a bucket type as a collection. When we
-associate a bucket type with a Riak Search index, all of the objects
+associate a bucket type with a Riak search index, all of the objects
stored in any bucket of that bucket type will be queryable on the basis
of that one index. For this tutorial, we'll create a bucket type called
`cms` and think of that as a collection. We could also restrict our
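The bucket type-index association described in this hunk can be sketched with `riak-admin` as follows; `blog_posts` is the index named earlier in the tutorial, while the bucket name `posts` and the key are hypothetical.

```bash
# Hedged sketch: treat the cms bucket type as a "collection" by
# associating it with the blog_posts index (run on a cluster node):
# riak-admin bucket-type create cms '{"props":{"search_index":"blog_posts"}}'
# riak-admin bucket-type activate cms
# Objects stored in any bucket of type cms are then indexed, e.g.
# (hypothetical bucket "posts" and key "first-post"):
POST_URL="http://localhost:8098/types/cms/buckets/posts/keys/first-post"
echo "$POST_URL"
# curl -XPUT "$POST_URL" -H 'Content-Type: application/json' -d '{"title_s":"..."}'
```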
diff --git a/content/riak/kv/2.0.4/developing/usage/search-schemas.md b/content/riak/kv/2.0.4/developing/usage/search-schemas.md
index 8bba6caf25..319223f788 100644
--- a/content/riak/kv/2.0.4/developing/usage/search-schemas.md
+++ b/content/riak/kv/2.0.4/developing/usage/search-schemas.md
@@ -19,21 +19,21 @@ aliases:
> **Note on Search 2.0 vs. Legacy Search**
>
-> This document refers to the new Riak Search 2.0 with
+> This document refers to the new Riak search 2.0 with
[Solr](http://lucene.apache.org/solr/) integration (codenamed
-Yokozuna). For information about the deprecated Riak Search, visit [the old Using Riak Search docs](http://docs.basho.com/riak/1.4.10/dev/using/search/).
+Yokozuna). For information about the deprecated Riak search, visit [the old Using Riak search docs](http://docs.basho.com/riak/1.4.10/dev/using/search/).
-Riak Search is built for ease of use, allowing you to write values into
-Riak and query for values using Solr. Riak Search does a lot of work
+Riak search is built for ease of use, allowing you to write values into
+Riak and query for values using Solr. Riak search does a lot of work
under the hood to convert your values---plain text, JSON, XML, [Riak Data Types](/riak/kv/2.0.4/developing/data-types/), and [more](/riak/kv/2.0.4/developing/usage/custom-extractors)---into something that can be indexed and searched later.
Nonetheless, you must still instruct Riak/Solr how to index a value. Are
you providing an array of strings? An integer? A date? Is your text in
-English or Russian? You can provide such instructions to Riak Search by
+English or Russian? You can provide such instructions to Riak search by
defining a Solr **schema**.
## The Default Schema
-Riak Search comes bundled with a default schema named `_yz_default`. The
+Riak search comes bundled with a default schema named `_yz_default`. The
default schema covers a wide range of possible field types. You can find
the default schema [on GitHub](https://raw.github.com/basho/yokozuna/develop/priv/default_schema.xml).
While using the default schema provides an easy path to starting
@@ -47,7 +47,7 @@ amounts of disk space, so pay special attention to those indexes.
We'll show you how you can create custom schemas by way of example.
Let's say that you have already created a schema named `cartoons` in a
file named `cartoons.xml`. This would register the custom schema in Riak
-Search:
+search:
```java
import org.apache.commons.io.FileUtils;
@@ -123,11 +123,11 @@ curl -XPUT http://localhost:8098/search/schema/cartoons \
The first step in creating a custom schema is to define exactly what
fields you must index. Part of that step is understanding how Riak
-Search extractors function.
+search extractors function.
### Extractors
-In Riak Search, extractors are modules responsible for pulling out a
+In Riak search, extractors are modules responsible for pulling out a
list of fields and values from a Riak object. How this is achieved
depends on the object's content type, but the two common cases are JSON
and XML, which operate similarly. Our examples here will use JSON.
@@ -175,21 +175,21 @@ Solr schemas can be very complex, containing many types and analyzers.
Refer to the [Solr 4.7 reference
guide](http://archive.apache.org/dist/lucene/solr/ref-guide/apache-solr-ref-guide-4.7.pdf)
for a complete list. You should be aware, however, that there are a few
-fields that are required by Riak Search in order to properly distribute
+fields that are required by Riak search in order to properly distribute
an object across a [cluster][concept clusters]. These fields are all prefixed
with `_yz`, which stands for
[Yokozuna](https://github.com/basho/yokozuna), the original code name
-for Riak Search.
+for Riak search.
Below is a bare minimum skeleton Solr Schema. It won't do much for you
-other than allow Riak Search to properly manage your stored objects.
+other than allow Riak search to properly manage your stored objects.
```xml
-
+
@@ -210,14 +210,14 @@ other than allow Riak Search to properly manage your stored objects.
```
-If you're missing any of the above fields, Riak Search will reject your
+If you're missing any of the above fields, Riak search will reject your
custom schema. The value for `<uniqueKey>` _must_ be `_yz_id`.
In the table below, you'll find a description of the various required
fields. You'll rarely need to use any fields other than `_yz_rt` (bucket
type), `_yz_rb` (bucket) and `_yz_rk` (Riak key). On occasion, `_yz_err`
can be helpful if you suspect that your extractors are failing.
-Malformed JSON or XML will cause Riak Search to index a key and set
+Malformed JSON or XML will cause Riak search to index a key and set
`_yz_err` to 1, allowing you to reindex with proper values later.
Field | Name | Description
@@ -262,7 +262,7 @@ field, you also must set `multiValued` to `true`.
-
+
diff --git a/content/riak/kv/2.0.4/developing/usage/search.md b/content/riak/kv/2.0.4/developing/usage/search.md
index 2101395347..3373328368 100644
--- a/content/riak/kv/2.0.4/developing/usage/search.md
+++ b/content/riak/kv/2.0.4/developing/usage/search.md
@@ -19,9 +19,9 @@ aliases:
## Setup
-Riak Search 2.0 is an integration of Solr (for indexing and querying)
+Riak search 2.0 is an integration of Solr (for indexing and querying)
and Riak (for storage and distribution). There are a few points of
-interest that a user of Riak Search will have to keep in mind in order
+interest that a user of Riak search will have to keep in mind in order
to properly store and later query for values.
1. **Schemas** explain to Solr how to index fields
@@ -29,7 +29,7 @@ to properly store and later query for values.
3. **Bucket-index association** signals to Riak *when* to index values
(this also includes bucket type-index association)
-Riak Search must first be configured with a Solr schema so that Solr
+Riak search must first be configured with a Solr schema so that Solr
knows how to index value fields. If you don't define one, you're
provided with a default schema named `_yz_default`, which can be found
[on
@@ -38,7 +38,7 @@ GitHub](https://raw.githubusercontent.com/basho/yokozuna/develop/priv/default_sc
The examples in this document will presume the default. You can read
more about creating custom schemas in [Search Schema][usage search schema], which you'll likely want to use in a production environment.
-Next, you must create a named Solr index through Riak Search. This index
+Next, you must create a named Solr index through Riak search. This index
represents a collection of similar data that you connect with to perform
queries. When creating an index, you can optionally provide a schema. If
you do not, the default schema will be used. Here we'll `curl` create an
@@ -246,7 +246,7 @@ More information can be found in the [Solr
documentation](http://wiki.apache.org/solr/SolrPerformanceFactors).
With a Solr schema, index, and association in place (and possibly a
-security setup as well), we're ready to start using Riak Search. First,
+security setup as well), we're ready to start using Riak search. First,
populate the `cat` bucket with values, in this case information about
four cats: Liono, Cheetara, Snarf, and Panthro.
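That write-then-query flow can be sketched as below, reusing the `animals`/`cats` bucket and the `famous` index that appear elsewhere on this page; the field names follow the default schema's suffix conventions (`_s` for string, `_i` for integer) and are assumptions here.

```bash
# Hedged sketch: store one cat, then query it back through Solr.
RIAK_HOST="http://localhost:8098"
PUT_URL="$RIAK_HOST/types/animals/buckets/cats/keys/liono"
QUERY_URL="$RIAK_HOST/search/query/famous?wt=json&q=name_s:Liono"
echo "$PUT_URL"
echo "$QUERY_URL"
# Against a live node:
# curl -XPUT "$PUT_URL" -H 'Content-Type: application/json' \
#   -d '{"name_s": "Liono", "age_i": 30}'
# curl "$QUERY_URL"
```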
@@ -495,12 +495,12 @@ curl -XPUT $RIAK_HOST/types/animals/buckets/cats/keys/panthro \
```
If you've used Riak before, you may have noticed that this is no
-different from storing values without Riak Search. That's because we
-designed Riak Search with the following design goal in mind:
+different from storing values without Riak search. That's because we
+designed Riak search with the following design goal in mind:
#### Write it like Riak, query it like Solr
-But how does Riak Search know how to index values, given that you can
+But how does Riak search know how to index values, given that you can
store opaque values in Riak? For that, we employ extractors.
## Extractors
@@ -510,7 +510,7 @@ content type and convert it into a list of fields that can be indexed by
Solr. This is done transparently and automatically as part of the
indexing process. You can even create your own [custom extractors](/riak/kv/2.0.4/developing/usage/custom-extractors).
-Our current example uses the JSON extractor, but Riak Search also
+Our current example uses the JSON extractor, but Riak search also
extracts indexable fields from the following content types:
* JSON (`application/json`)
@@ -560,7 +560,7 @@ one of the default types. A full tutorial can be found in [Custom Search Extract
### Automatic Fields
-When a Riak object is indexed, Riak Search automatically inserts a few
+When a Riak object is indexed, Riak search automatically inserts a few
extra fields as well. These are necessary for a variety of technical
reasons, and for the most part you don't need to think about them.
However, there are a few fields which you may find useful:
@@ -1248,7 +1248,7 @@ curl "$RIAK_HOST/search/query/famous?wt=json&q=*:*&start=$START&rows=$ROWS_PER_P
### Pagination Warning
-Distributed pagination in Riak Search cannot be used reliably when
+Distributed pagination in Riak search cannot be used reliably when
sorting on fields that can have different values per replica of the same
object, namely `score` and `_yz_id`. In the case of sorting by these
fields, you may receive redundant objects. In the case of `score`, the
@@ -1272,14 +1272,14 @@ fix this shortcoming in a future version of Riak.
### MapReduce
-Riak Search allows for piping search results as inputs for
+Riak search allows for piping search results as inputs for
[MapReduce](/riak/kv/2.0.4/developing/usage/mapreduce/) jobs. This is a useful cross-section for
performing post-calculations of results or aggregations of ad-hoc
-queries. The Riak Search MapReduce integration works similarly to
+queries. The Riak search MapReduce integration works similarly to
regular MapReduce, with the notable exception that your input is not a
bucket, but rather index and query arguments to the `yokozuna` module
and `mapred_search` function (an Erlang `module:function` pair that adds
-the Riak Search hook to MapReduce).
+the Riak search hook to MapReduce).
```json
{
diff --git a/content/riak/kv/2.0.4/developing/usage/secondary-indexes.md b/content/riak/kv/2.0.4/developing/usage/secondary-indexes.md
index 46175ce969..64a1f9ba12 100644
--- a/content/riak/kv/2.0.4/developing/usage/secondary-indexes.md
+++ b/content/riak/kv/2.0.4/developing/usage/secondary-indexes.md
@@ -19,12 +19,12 @@ aliases:
[plan backend memory]: /riak/kv/2.0.4/setup/planning/backend/memory
[use ref strong consistency]: /riak/2.1.3/using/reference/strong-consistency
-> **Note: Riak Search preferred for querying**
+> **Note: Riak search preferred for querying**
>
> If you're interested in non-primary-key-based querying in Riak, i.e. if
you're looking to go beyond straightforward K/V operations, we now
-recommend [Riak Search](/riak/kv/2.0.4/developing/usage/search/) rather than secondary indexes for
-a variety of reasons. Most importantly, Riak Search has a far more
+recommend [Riak search](/riak/kv/2.0.4/developing/usage/search/) rather than secondary indexes for
+a variety of reasons. Most importantly, Riak search has a far more
capacious querying API and can be used with all of Riak's storage
backends.
@@ -37,7 +37,7 @@ Secondary indexes can be either a binary or string, such as
`sensor_1_data` or `admin_user` or `click_event`, or an integer, such as
`99` or `141121`.
-[Riak Search](/riak/kv/2.0.4/developing/usage/search/) serves analogous purposes but is quite
+[Riak search](/riak/kv/2.0.4/developing/usage/search/) serves analogous purposes but is quite
different because it parses key/value data itself and builds indexes on
the basis of Solr schemas.
@@ -75,7 +75,7 @@ you to discover them later. Indexing enables you to tag those objects
and find all objects with the same tag in a specified bucket later on.
2i is thus recommended when your use case requires an easy-to-use search
-mechanism that does not require a schema (as does [Riak Search](/riak/kv/2.0.4/using/reference/search/#schemas)) and a basic query interface, i.e. an interface that
+mechanism that does not require a schema (as does [Riak search](/riak/kv/2.0.4/using/reference/search/#schemas)) and a basic query interface, i.e. an interface that
enables an application to tell Riak things like "fetch all objects
tagged with the string `Milwaukee_Bucks`" or "fetch all objects tagged
with numbers between 1500 and 1509."
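The two example queries above map onto 2i's HTTP interface roughly as follows. This is a sketch: the bucket name `players` and the index names are hypothetical, while the `_bin`/`_int` suffixes are 2i's type conventions.

```bash
# Hedged sketch of the 2i HTTP query forms for the examples above.
RIAK_HOST="http://localhost:8098"
# "fetch all objects tagged with the string Milwaukee_Bucks":
EXACT_URL="$RIAK_HOST/buckets/players/index/tag_bin/Milwaukee_Bucks"
# "fetch all objects tagged with numbers between 1500 and 1509":
RANGE_URL="$RIAK_HOST/buckets/players/index/score_int/1500/1509"
echo "$EXACT_URL"
echo "$RANGE_URL"
# curl "$EXACT_URL"
# curl "$RANGE_URL"
```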
@@ -89,7 +89,7 @@ piggybacks off of read-repair.
* If your ring size exceeds 512 partitions, 2i can cause performance
issues in large clusters.
* When you need more than the exact match and range searches that 2i
- supports. If that's the case, we recommend checking out [Riak Search](/riak/kv/2.0.4/developing/usage/search/).
+ supports. If that's the case, we recommend checking out [Riak search](/riak/kv/2.0.4/developing/usage/search/).
* When you want to use composite queries. A query like
`last_name=zezeski AND state=MD` would have to be split into two
queries and the results merged (or it would need to involve
diff --git a/content/riak/kv/2.0.4/introduction.md b/content/riak/kv/2.0.4/introduction.md
index 38ecddd184..9ee2a5fbdc 100644
--- a/content/riak/kv/2.0.4/introduction.md
+++ b/content/riak/kv/2.0.4/introduction.md
@@ -30,10 +30,10 @@ that all of the new features listed below are optional:
* **Riak Data Types** --- Riak's new CRDT-based [Data Types](/riak/kv/2.0.4/developing/data-types) can
simplify modeling data in Riak, but are only used in buckets
explicitly configured to use them.
-* **Strong Consistency, Riak Security, and the New Riak Search** ---
+* **Strong Consistency, Riak Security, and the New Riak search** ---
These are subsystems in Riak that must be explicitly turned on to
work. If not turned on, they will have no impact on performance.
- Furthermore, the older Riak Search will continue to be included with
+ Furthermore, the older Riak search will continue to be included with
Riak.
* **Security** --- [Authentication and authorization](/riak/kv/2.0.4/using/security/basics) can be enabled
or disabled at any time.
@@ -98,21 +98,21 @@ Brown](https://github.com/russelldb).
## Riak Search 2.0 (codename: Yokozuna)
-Riak Search 2.0 is a complete, top-to-bottom replacement for Riak
-Search, integrating Riak with [Apache Solr](https://lucene.apache.org/solr/)'s full-text search capabilities and supporting Solr's client query APIs.
+Riak search 2.0 is a complete, top-to-bottom replacement for Riak
+search, integrating Riak with [Apache Solr](https://lucene.apache.org/solr/)'s full-text search capabilities and supporting Solr's client query APIs.
#### Relevant Docs
* [Using Search](/riak/kv/2.0.4/developing/usage/search) provides an overview of how to use the new
- Riak Search.
+ Riak search.
* [Search Schema](/riak/kv/2.0.4/developing/usage/search-schemas) shows you how to create and manage custom search
schemas.
* [Search Details](/riak/kv/2.0.4/using/reference/search) provides an in-depth look at the design
- considerations that went into the new Riak Search.
+ considerations that went into the new Riak search.
#### Video
-[Riak Search 2.0](https://www.youtube.com/watch?v=-c1eynVLNMo) by Basho
+[Riak search 2.0](https://www.youtube.com/watch?v=-c1eynVLNMo) by Basho
engineer and documentarian [Eric Redmond](https://github.com/coderoshi).
## Strong Consistency
@@ -145,7 +145,7 @@ check out [part 2](https://www.youtube.com/watch?v=gXJxbhca5Xg).
Riak 2.0 enables you to manage:
* **Authorization** to perform specific tasks, from GETs and PUTs to
-running MapReduce jobs to administering Riak Search.
+running MapReduce jobs to administering Riak search.
* **Authentication** of Riak clients seeking access to Riak.
@@ -317,7 +317,7 @@ another. Incompatibilities are marked with a
-**†** The data indexed by Riak Search can be
+**†** The data indexed by Riak search can be
stored in a strongly consistent fashion, but indexes themselves are
eventually consistent.
**‡** If secondary indexes are attached to an
diff --git a/content/riak/kv/2.0.4/learn/concepts/strong-consistency.md b/content/riak/kv/2.0.4/learn/concepts/strong-consistency.md
index be226b519c..294f944f57 100644
--- a/content/riak/kv/2.0.4/learn/concepts/strong-consistency.md
+++ b/content/riak/kv/2.0.4/learn/concepts/strong-consistency.md
@@ -20,7 +20,7 @@ aliases:
Please Note:
-Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
Riak was originally designed as an [eventually consistent](/riak/kv/2.0.4/learn/concepts/eventual-consistency) system, fundamentally geared toward providing partition
diff --git a/content/riak/kv/2.0.4/learn/glossary.md b/content/riak/kv/2.0.4/learn/glossary.md
index 0e5291413f..32d0ff0cd3 100644
--- a/content/riak/kv/2.0.4/learn/glossary.md
+++ b/content/riak/kv/2.0.4/learn/glossary.md
@@ -275,7 +275,7 @@ best described as "UNIX pipes for Riak."
## Riak Search
-Riak Search is a distributed, scalable, failure-tolerant, realtime,
+Riak search is a distributed, scalable, failure-tolerant, realtime,
full-text search engine integrating [Apache
Solr](https://lucene.apache.org/solr/) with Riak KV.
diff --git a/content/riak/kv/2.0.4/learn/use-cases.md b/content/riak/kv/2.0.4/learn/use-cases.md
index 5b4cb5a60d..cbd97281ee 100644
--- a/content/riak/kv/2.0.4/learn/use-cases.md
+++ b/content/riak/kv/2.0.4/learn/use-cases.md
@@ -153,7 +153,7 @@ For storing log data from different systems, you could use unique
buckets for each system (e.g. `system1_log_data`, `system2_log_data`,
etc.) and write associated logs to the corresponding buckets. To
analyze that data, you could use Riak's MapReduce system for aggregation
-tasks, such as summing the counts of records for a date or Riak Search
+tasks, such as summing the counts of records for a date or Riak search
for more robust, text-based queries.
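A minimal sketch of the per-system bucket scheme just described; the system name, timestamp-style key, and host are all hypothetical.

```bash
# Hedged sketch: one bucket per system, timestamp-like keys.
SYSTEM="system1"
BUCKET="${SYSTEM}_log_data"
KEY="2015-01-01T00:00:00Z"
LOG_URL="http://localhost:8098/buckets/$BUCKET/keys/$KEY"
echo "$LOG_URL"
# curl -XPUT "$LOG_URL" -H 'Content-Type: text/plain' -d 'example log line'
```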
### Log Data Complex Case
@@ -237,7 +237,7 @@ For simple retrieval of a specific account, a user ID (plus perhaps a
secondary index on a username or email) is enough. If you foresee the
need to make queries on additional user attributes (e.g. creation time,
user type, or region), plan ahead and either set up additional secondary
-indexes or consider using [Riak Search][usage search] to index the JSON
+indexes or consider using [Riak search][usage search] to index the JSON
contents of the user account.
### User Accounts Community Examples
@@ -323,7 +323,7 @@ In Riak, you can store content of any kind, from HTML files to plain
text to JSON or XML or another document type entirely. Keep in mind that
data in Riak is opaque, with the exception of [Riak Data Types](/riak/kv/2.0.4/developing/data-types),
and so Riak won't "know" about the object unless it is indexed
-[using Riak Search][usage search] or [using secondary indexes][usage secondary-indexes].
+[using Riak search][usage search] or [using secondary indexes][usage secondary-indexes].
### Articles et al Complex Case
@@ -342,7 +342,7 @@ with comments would require your application to call from the posts
and comments buckets to assemble the view.
Other possible cases may involve performing operations on content beyond
-key/value pairs. [Riak Search][usage search] is recommended for use cases
+key/value pairs. [Riak search][usage search] is recommended for use cases
involving full-text search. For lighter-weight querying,
[using secondary indexes][usage secondary-indexes] \(2i) enables you to add metadata to objects to
either query for exact matches or to perform range queries. 2i also
diff --git a/content/riak/kv/2.0.4/setup/installing/source/jvm.md b/content/riak/kv/2.0.4/setup/installing/source/jvm.md
index a72bcd02f8..54ccf2ac3e 100644
--- a/content/riak/kv/2.0.4/setup/installing/source/jvm.md
+++ b/content/riak/kv/2.0.4/setup/installing/source/jvm.md
@@ -21,10 +21,10 @@ aliases:
[usage search]: /riak/kv/2.0.4/developing/usage/search
-If you are using [Riak Search 2.0][usage search], codename Yokozuna,
+If you are using [Riak search 2.0][usage search], codename Yokozuna,
you will need to install **Java 1.6 or later** to run [Apache
Solr](https://lucene.apache.org/solr/), the search platform that powers
-Riak Search.
+Riak search.
We recommend using Oracle's [JDK
7u25](http://www.oracle.com/technetwork/java/javase/7u25-relnotes-1955741.html).
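A quick way to confirm a suitable JVM is present before enabling search (a sketch; the version line's format varies by vendor):

```bash
# Hedged sketch: check for a JVM before enabling Riak search.
if command -v java >/dev/null 2>&1; then
  JAVA_LINE=$(java -version 2>&1 | head -n 1)
else
  JAVA_LINE="java not found; install Java 1.6 or later first"
fi
echo "$JAVA_LINE"
```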
@@ -35,7 +35,7 @@ page](http://www.oracle.com/technetwork/java/javase/documentation/index.html).
## Installing Solr on OS X
-If you're using Riak Search on Mac OS X, you may see the following
+If you're using Riak search on Mac OS X, you may see the following
error:
```java
diff --git a/content/riak/kv/2.0.4/setup/planning/backend/bitcask.md b/content/riak/kv/2.0.4/setup/planning/backend/bitcask.md
index 57a69882b1..60fc2bbd2f 100644
--- a/content/riak/kv/2.0.4/setup/planning/backend/bitcask.md
+++ b/content/riak/kv/2.0.4/setup/planning/backend/bitcask.md
@@ -751,7 +751,7 @@ bitcask.expiry.grace_time = 1h
#### Automatic expiration and Riak Search
-If you are using [Riak Search][usage search] in conjunction with
+If you are using [Riak search][usage search] in conjunction with
Bitcask, please be aware that automatic expiry does not apply to [Search Indexes](../../../../developing/usage/search). If objects are indexed using Search,
those objects can be expired by Bitcask yet still registered in Search
indexes, which means that Search queries may return keys that no longer
diff --git a/content/riak/kv/2.0.4/setup/upgrading/checklist.md b/content/riak/kv/2.0.4/setup/upgrading/checklist.md
index 53d2a3a477..f662d8d3a2 100644
--- a/content/riak/kv/2.0.4/setup/upgrading/checklist.md
+++ b/content/riak/kv/2.0.4/setup/upgrading/checklist.md
@@ -89,7 +89,7 @@ We have compiled these considerations and questions into separate categories for
place if `allow_mult` is set to `true`?
- Have you carefully weighed the [consistency trade-offs][concept eventual consistency] that must be made if `allow_mult` is set to `false`?
- Are all of your [apps replication properties][apps replication properties] configured correctly and uniformly across the cluster?
- - If you are using [Riak Search][usage search], is it enabled on all
+ - If you are using [Riak search][usage search], is it enabled on all
nodes? If you are not, has it been disabled on all nodes?
- If you are using [strong consistency][concept strong consistency] for some or all of your
data:
diff --git a/content/riak/kv/2.0.4/setup/upgrading/search.md b/content/riak/kv/2.0.4/setup/upgrading/search.md
index 38dc18c145..043ae72561 100644
--- a/content/riak/kv/2.0.4/setup/upgrading/search.md
+++ b/content/riak/kv/2.0.4/setup/upgrading/search.md
@@ -16,7 +16,7 @@ version_history:
If you're using Search in a version of Riak prior to 2.0 (1.3.0 to
1.4.x), you should follow these steps to migrate your search indexes
-from the legacy `merge_index` to the new Solr-backed ([Yokozuna](../../../using/reference/search) indexes. The legacy version of Riak Search is now deprecated
+from the legacy `merge_index` to the new Solr-backed ([Yokozuna](../../../using/reference/search)) indexes. The legacy version of Riak search is now deprecated
and does not support most new 2.0 features (i.e. no [Riak Data Types](../../../developing/data-types), [bucket types](../../../using/reference/bucket-types), [strong consistency](../../../using/reference/strong-consistency), or [security](../../../using/security/)), so we highly recommend that you migrate.
And please note that the legacy `merge_index`-based search (aka legacy
@@ -32,7 +32,7 @@ these steps at a time when your cluster is relatively light on traffic,
i.e. _not_ the week before Christmas.
The main goal of a live migration is to stand up indexes in the new Riak
-Search that parallel the existing ones in legacy. New writes add entries
+search that parallel the existing ones in legacy. New writes add entries
to both indexes while AAE adds entries in the new indexes for existing
data.
@@ -72,7 +72,7 @@ algorithm is bad at getting rid of large index files.
## Steps to Upgrading
1. First, you'll perform a normal [rolling upgrade](../cluster).
- As you upgrade, enable `yokozuna` (the new Riak Search library) on
+ As you upgrade, enable `yokozuna` (the new Riak search library) on
each node. If you're still using `app.config` it's called `yokozuna`.
If you've chosen to upgrade to the new `riak.conf` config option, it's
called `search`.
@@ -220,7 +220,7 @@ have occurred on every node.
node you know that AAE has brought all new indexes up to date.
7. Next, call the following command that will give HTTP and PB query
-control to the new Riak Search.
+control to the new Riak search.
```curl
riak-admin search switch-to-new-search
@@ -244,7 +244,7 @@ buckets. This deactivates legacy Search.
-d '{"props":{"search": false}}'
```
-9. Disable the Riak Search process on each node by setting `riak_search`
+9. Disable the Riak search process on each node by setting `riak_search`
`enabled` to `false`.
```appconfig
diff --git a/content/riak/kv/2.0.4/setup/upgrading/version.md b/content/riak/kv/2.0.4/setup/upgrading/version.md
index 63816fd74e..22c0403b33 100644
--- a/content/riak/kv/2.0.4/setup/upgrading/version.md
+++ b/content/riak/kv/2.0.4/setup/upgrading/version.md
@@ -36,7 +36,7 @@ was built with those features in mind. There are official
While we strongly recommend using the newest versions of these clients,
older versions will still work with Riak 2.0, with the drawback that
-those older clients will not be able to take advantage of [new features](/riak/kv/2.0.4/introduction) like [data types](/riak/kv/2.0.4/developing/data-types) or the new [Riak Search](/riak/kv/2.0.4/using/reference/search).
+those older clients will not be able to take advantage of [new features](/riak/kv/2.0.4/introduction) like [data types](/riak/kv/2.0.4/developing/data-types) or the new [Riak search](/riak/kv/2.0.4/using/reference/search).
## Bucket Types
@@ -140,7 +140,7 @@ If you decide to upgrade to version 2.0, you can still downgrade your
cluster to an earlier version of Riak if you wish, _unless_ you perform
one of the following actions in your cluster:
-* Index data to be used in conjunction with the new [Riak Search](/riak/kv/2.0.4/using/reference/search).
+* Index data to be used in conjunction with the new [Riak search](/riak/kv/2.0.4/using/reference/search).
* Create _and_ activate one or more [bucket types](/riak/kv/2.0.4/using/reference/bucket-types/). By extension, you will not be able to downgrade your cluster if you have used the following features, both of which rely on bucket types:
- [Strong consistency](/riak/2.0.4/using/reference/strong-consistency)
- [Riak Data Types](/riak/kv/2.0.4/developing/data-types)
@@ -208,7 +208,7 @@ default to a value of `15`, which can cause problems in some clusters.
## Upgrading Search
-Information on upgrading Riak Search to 2.0 can be found in our
+Information on upgrading Riak search to 2.0 can be found in our
[Search upgrade guide](/riak/kv/2.0.4/setup/upgrading/search).
## Migrating from Short Names
diff --git a/content/riak/kv/2.0.4/using/admin/riak-admin.md b/content/riak/kv/2.0.4/using/admin/riak-admin.md
index 4543d8f450..ff74005c5e 100644
--- a/content/riak/kv/2.0.4/using/admin/riak-admin.md
+++ b/content/riak/kv/2.0.4/using/admin/riak-admin.md
@@ -590,7 +590,7 @@ riak-admin repair-2i kill
## search
The search command provides sub-commands for various administrative
-work related to the new Riak Search.
+work related to the new Riak search.
```bash
riak-admin search
@@ -639,7 +639,7 @@ riak-admin search switch-to-new-search
```
Switch handling of the HTTP `/solr//select` resource and
-protocol buffer query messages from legacy Riak Search to new Search
+protocol buffer query messages from legacy Riak search to new Search
(Yokozuna).
## services
diff --git a/content/riak/kv/2.0.4/using/cluster-operations/active-anti-entropy.md b/content/riak/kv/2.0.4/using/cluster-operations/active-anti-entropy.md
index a96fb7ed47..2c47870754 100644
--- a/content/riak/kv/2.0.4/using/cluster-operations/active-anti-entropy.md
+++ b/content/riak/kv/2.0.4/using/cluster-operations/active-anti-entropy.md
@@ -262,9 +262,9 @@ AAE-related background tasks, analogous to [open files limit](../../performance/
## AAE and Riak Search
Riak's AAE subsystem works to repair object inconsistencies both with
-for normal key/value objects as well as data related to [Riak Search](../../../developing/usage/search). In particular, AAE acts on indexes stored in
+normal key/value objects as well as data related to [Riak search](../../../developing/usage/search). In particular, AAE acts on indexes stored in
[Solr](http://lucene.apache.org/solr/), the search platform that drives
-Riak Search. Implementation details for AAE and Search can be found in
+Riak search. Implementation details for AAE and Search can be found in
the [Search Details](../../reference/search/#active-anti-entropy-aae)
documentation.
diff --git a/content/riak/kv/2.0.4/using/cluster-operations/inspecting-node.md b/content/riak/kv/2.0.4/using/cluster-operations/inspecting-node.md
index 1844f2813a..9b6cedd8c1 100644
--- a/content/riak/kv/2.0.4/using/cluster-operations/inspecting-node.md
+++ b/content/riak/kv/2.0.4/using/cluster-operations/inspecting-node.md
@@ -297,7 +297,7 @@ Stat | Description
`erlydtl_version` | [ErlyDTL](http://github.com/erlydtl/erlydtl)
`riak_control_version` | [Riak Control](http://github.com/basho/riak_control)
`cluster_info_version` | [Cluster Information](http://github.com/basho/cluster_info)
-`riak_search_version` | [Riak Search](http://github.com/basho/riak_search)
+`riak_search_version` | [Riak search](http://github.com/basho/riak_search)
`merge_index_version` | [Merge Index](http://github.com/basho/merge_index)
`riak_kv_version` | [Riak KV](http://github.com/basho/riak_kv)
`sidejob_version` | [Sidejob](http://github.com/basho/sidejob)
@@ -326,17 +326,17 @@ Stat | Description
### Riak Search Statistics
-The following statistics related to Riak Search message queues are
+The following statistics related to Riak search message queues are
available.
Stat | Description
-----------------------------|---------------------------------------------------
-`riak_search_vnodeq_max` | Maximum number of unprocessed messages all virtual node (vnode) message queues in the Riak Search subsystem have received on this node in the last minute
-`riak_search_vnodeq_mean` | Mean number of unprocessed messages all vnode message queues in the Riak Search subsystem have received on this node in the last minute
-`riak_search_vnodeq_median` | Median number of unprocessed messages all vnode message queues in the Riak Search subsystem have received on this node in the last minute
-`riak_search_vnodeq_min` | Minimum number of unprocessed messages all vnode message queues in the Riak Search subsystem have received on this node in the last minute
-`riak_search_vnodeq_total` | Total number of unprocessed messages all vnode message queues in the Riak Search subsystem have received on this node since it was started
-`riak_search_vnodes_running` | Total number of vnodes currently running in the Riak Search subsystem
+`riak_search_vnodeq_max` | Maximum number of unprocessed messages all virtual node (vnode) message queues in the Riak search subsystem have received on this node in the last minute
+`riak_search_vnodeq_mean` | Mean number of unprocessed messages all vnode message queues in the Riak search subsystem have received on this node in the last minute
+`riak_search_vnodeq_median` | Median number of unprocessed messages all vnode message queues in the Riak search subsystem have received on this node in the last minute
+`riak_search_vnodeq_min` | Minimum number of unprocessed messages all vnode message queues in the Riak search subsystem have received on this node in the last minute
+`riak_search_vnodeq_total` | Total number of unprocessed messages all vnode message queues in the Riak search subsystem have received on this node since it was started
+`riak_search_vnodes_running` | Total number of vnodes currently running in the Riak search subsystem
Note that under ideal operation and with the exception of
`riak_search_vnodes_running` these statistics should contain low values
@@ -449,7 +449,7 @@ Check | Description
`ring_membership` | Cluster membership validity
`ring_preflists` | Check if the ring satisfies `n_val`
`ring_size` | Check if the ring size is valid
-`search` | Check whether Riak Search is enabled on all nodes
+`search` | Check whether Riak search is enabled on all nodes
The `--level` flag enables you to specify the log level and thus to
filter messages based on type. You can pass in any of the message types
diff --git a/content/riak/kv/2.0.4/using/cluster-operations/strong-consistency.md b/content/riak/kv/2.0.4/using/cluster-operations/strong-consistency.md
index b218e665e3..21df0a4de9 100644
--- a/content/riak/kv/2.0.4/using/cluster-operations/strong-consistency.md
+++ b/content/riak/kv/2.0.4/using/cluster-operations/strong-consistency.md
@@ -14,7 +14,7 @@ toc: true
Please Note:
-Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
## Monitoring Strong Consistency
diff --git a/content/riak/kv/2.0.4/using/reference/handoff.md b/content/riak/kv/2.0.4/using/reference/handoff.md
index 9c298096e4..87eee9ca5b 100644
--- a/content/riak/kv/2.0.4/using/reference/handoff.md
+++ b/content/riak/kv/2.0.4/using/reference/handoff.md
@@ -121,7 +121,7 @@ handoff.use_background_manager = on
### Maximum Rejects
-If you're using Riak features such as [Riak Search](/riak/kv/2.0.4/developing/usage/search/),
+If you're using Riak features such as [Riak search](/riak/kv/2.0.4/developing/usage/search/),
those subsystems can block handoff of primary key/value data, i.e. data
that you interact with via normal reads and writes.
diff --git a/content/riak/kv/2.0.4/using/reference/search.md b/content/riak/kv/2.0.4/using/reference/search.md
index 6daaece142..11e6056985 100644
--- a/content/riak/kv/2.0.4/using/reference/search.md
+++ b/content/riak/kv/2.0.4/using/reference/search.md
@@ -19,14 +19,14 @@ aliases:
> **Note on Search 2.0 vs. Legacy Search**
>
-> This document refers to the new Riak Search 2.0 with
+> This document refers to the new Riak search 2.0 with
[Solr](http://lucene.apache.org/solr/) integration (codenamed
-Yokozuna). For information about the deprecated Riak Search, visit [the old Using Riak Search docs](http://docs.basho.com/riak/1.4.10/dev/using/search/).
+Yokozuna). For information about the deprecated Riak search, visit [the old Using Riak search docs](http://docs.basho.com/riak/1.4.10/dev/using/search/).
-The project that implements Riak Search is codenamed Yokozuna. This is a
+The project that implements Riak search is codenamed Yokozuna. This is a
more detailed overview of the concepts and reasons behind the design of
Yokozuna, for those interested. If you're simply looking to use Riak
-Search, you should check out the [Using Search](/riak/kv/2.0.4/developing/usage/search) document.
+search, you should check out the [Using Search](/riak/kv/2.0.4/developing/usage/search) document.

@@ -35,30 +35,30 @@ Search, you should check out the [Using Search](/riak/kv/2.0.4/developing/usage/
In Erlang OTP, an "application" is a group of modules and Erlang
processes which together perform a specific task. The word application
is confusing because most people think of an application as an entire
-program such as Emacs or Photoshop. But Riak Search is just a sub-system
+program such as Emacs or Photoshop. But Riak search is just a sub-system
in Riak itself. Erlang applications are often stand-alone, but Riak
-Search is more like an appendage of Riak. It requires other subsystems
+search is more like an appendage of Riak. It requires other subsystems
like Riak Core and KV, but also extends their functionality by providing
search capabilities for KV data.
-The purpose of Riak Search is to bring more sophisticated and robust
+The purpose of Riak search is to bring more sophisticated and robust
query and search support to Riak. Many people consider Lucene and
programs built on top of it, such as Solr, as the standard for
open-source search. There are many successful applications built on
Lucene/Solr, and it sets the standard for the feature set that
developers and users expect. Meanwhile, Riak has a great story as a
-highly-available, distributed key/value store. Riak Search takes
+highly-available, distributed key/value store. Riak search takes
advantage of the fact that Riak already knows how to do the distributed
bits, combining its feature set with that of Solr, taking advantage of
the strengths of each.
-Riak Search is a mediator between Riak and Solr. There is nothing
+Riak search is a mediator between Riak and Solr. There is nothing
stopping a user from deploying these two programs separately, but this
would leave the user responsible for the glue between them. That glue
can be tricky to write. It requires dealing with monitoring, querying,
indexing, and dissemination of information.
-Unlike Solr by itself, Riak Search knows how to do all of the following:
+Unlike Solr by itself, Riak search knows how to do all of the following:
* Listen for changes in key/value (KV) data and to make the appropriate
changes to indexes that live in Solr. It also knows how to take a user
@@ -75,13 +75,13 @@ system (OS) process running a JVM which hosts Solr on the Jetty
application server. This OS process is a child of the Erlang OS process
running Riak.
-Riak Search has a `gen_server` process which monitors the JVM OS
+Riak search has a `gen_server` process which monitors the JVM OS
process. The code for this server is in `yz_solr_proc`. When the JVM
process crashes, this server crashes, causing its supervisor to restart
it.
If there is more than 1 restart in 45 seconds, the entire Riak node will
-be shut down. If Riak Search is enabled and Solr cannot function for
+be shut down. If Riak search is enabled and Solr cannot function for
some reason, the Riak node needs to go down so that the user will notice
and take corrective action.
@@ -91,7 +91,7 @@ This double monitoring along with the crash semantics means that neither
process may exist without the other. They are either both up or both
down.
-All other communication between Riak Search and Solr is performed via
+All other communication between Riak search and Solr is performed via
HTTP, including querying, indexing, and administration commands. The
ibrowse Erlang HTTP client is used to manage these communications as
both it and the Jetty container hosting Solr pool HTTP connections,
@@ -105,11 +105,11 @@ contains index entries for objects. Each such index maintains its own
set of files on disk---a critical difference from Riak KV, in which a
bucket is a purely logical entity and not physically disjoint at all. A
Solr index requires significantly less disk space than the corresponding
-legacy Riak Search index, depending on the Solr schema used.
+legacy Riak search index, depending on the Solr schema used.
Indexes may be associated with zero or more buckets. At creation time,
however, each index has no associated buckets---unlike the legacy Riak
-Search, indexes in the new Riak Search do not implicitly create bucket
+search, indexes in the new Riak search do not implicitly create bucket
associations, meaning that this must be done as a separate configuration
step.
@@ -141,7 +141,7 @@ flat collection of field-value pairs. "Flat" here means that a field's
value cannot be a nested structure of field-value pairs; the values are
treated as-is (non-composite is another way to say it).
-Because of this mismatch between KV and Solr, Riak Search must act as a
+Because of this mismatch between KV and Solr, Riak search must act as a
mediator between the two, meaning it must have a way to inspect a KV
object and create a structure which Solr can ingest for indexing. In
Solr this structure is called a **document**. This task of creating a
diff --git a/content/riak/kv/2.0.4/using/reference/secondary-indexes.md b/content/riak/kv/2.0.4/using/reference/secondary-indexes.md
index 4d21e77f6e..7c72c027c8 100644
--- a/content/riak/kv/2.0.4/using/reference/secondary-indexes.md
+++ b/content/riak/kv/2.0.4/using/reference/secondary-indexes.md
@@ -18,11 +18,11 @@ aliases:
[usage bucket types]: /riak/kv/2.0.4/developing/usage/bucket-types
[use ref strong consistency]: /riak/2.1.3/using/reference/strong-consistency
-> **Note: Riak Search preferred for querying**
+> **Note: Riak search preferred for querying**
>
> If you're interested in non-primary-key-based querying in Riak, i.e. if
you're looking to go beyond straightforward K/V operations, we now
-recommend [Riak Search](/riak/kv/2.0.4/developing/usage/search/) rather than secondary indexes for a variety of reasons. Riak Search has a far more capacious querying API and can be used with all of Riak's storage backends.
+recommend [Riak search](/riak/kv/2.0.4/developing/usage/search/) rather than secondary indexes for a variety of reasons. Riak search has a far more capacious querying API and can be used with all of Riak's storage backends.
This document provides implementation and other details for Riak's
[secondary indexes](/riak/kv/2.0.4/developing/usage/secondary-indexes/) \(2i) feature.
diff --git a/content/riak/kv/2.0.4/using/reference/statistics-monitoring.md b/content/riak/kv/2.0.4/using/reference/statistics-monitoring.md
index 27030c9a79..76b95469b7 100644
--- a/content/riak/kv/2.0.4/using/reference/statistics-monitoring.md
+++ b/content/riak/kv/2.0.4/using/reference/statistics-monitoring.md
@@ -134,7 +134,7 @@ Metric | Also | Notes
:------|:-----|:------------------
```node_get_fsm_siblings_mean``` | ```_median```, ```_95```, ```_99```, ```_100``` | Number of siblings encountered during all GET operations by this node within the last minute. Watch for abnormally high sibling counts, especially max ones.
```node_get_fsm_objsize_mean``` | ```_median```, ```_95```, ```_99```, ```_100``` | Object size encountered by this node within the last minute. Abnormally large objects (especially paired with high sibling counts) can indicate sibling explosion.
-```riak_search_vnodeq_mean``` | ```_median```, ```_95```, ```_99```, ```_100``` | Number of unprocessed messages in the vnode message queues of the Riak Search subsystem on this node in the last minute. The queues give you an idea of how backed up Solr is getting.
+```riak_search_vnodeq_mean``` | ```_median```, ```_95```, ```_99```, ```_100``` | Number of unprocessed messages in the vnode message queues of the Riak search subsystem on this node in the last minute. The queues give you an idea of how backed up Solr is getting.
```search_index_fail_one``` | | Number of "Failed to index document" errors Search encountered for the last minute
```pbc_active``` | | Number of currently active protocol buffer connections
```pbc_connects``` | | Number of new protocol buffer connections established during the last minute
diff --git a/content/riak/kv/2.0.4/using/repair-recovery/errors.md b/content/riak/kv/2.0.4/using/repair-recovery/errors.md
index 0c37e09be1..91deb849ae 100644
--- a/content/riak/kv/2.0.4/using/repair-recovery/errors.md
+++ b/content/riak/kv/2.0.4/using/repair-recovery/errors.md
@@ -328,7 +328,7 @@ exit with reason bad return value: {error,eaddrinuse} in context start_error | A
exited with reason: eaddrnotavail in gen_server:init_it/6 line 320 | An error like this example can result when Riak cannot bind to the addresses specified in the configuration. In this case, you should verify HTTP and Protocol Buffers addresses in `app.config` and ensure that the ports being used are not in the privileged (1-1024) range as the `riak` user will not have access to such ports.
gen_server riak_core_capability terminated with reason: no function clause matching orddict:fetch('riak@192.168.2.2', []) line 72 | Error output like this example can indicate that a previously running Riak node with an original `-name` value in `vm.args` has been modified by simply changing the value in `vm.args` and not properly through `riak-admin cluster replace`.
** Configuration error: [FRAMEWORK-MIB]: missing context.conf file => generating a default file | This error is commonly encountered when starting Riak Enterprise without prior [SNMP](/riak/kv/2.0.4/using/reference/snmp) configuration.
-RPC to 'node@example.com' failed: {'EXIT', {badarg, [{ets,lookup, [schema_table,<<"search-example">>], []} {riak_search_config,get_schema,1, [{file,"src/riak_search_config.erl"}, {line,69}]} ...| This error can be caused when attempting to use Riak Search without first enabling it in each node's `app.config`. See the [configuration files][config reference] documentation for more information on enabling Riak Search.
+RPC to 'node@example.com' failed: {'EXIT', {badarg, [{ets,lookup, [schema_table,<<"search-example">>], []} {riak_search_config,get_schema,1, [{file,"src/riak_search_config.erl"}, {line,69}]} ...| This error can be caused when attempting to use Riak search without first enabling it in each node's `app.config`. See the [configuration files][config reference] documentation for more information on enabling Riak search.
### More
diff --git a/content/riak/kv/2.0.4/using/repair-recovery/repairs.md b/content/riak/kv/2.0.4/using/repair-recovery/repairs.md
index 748a3ee06d..703b03ab44 100644
--- a/content/riak/kv/2.0.4/using/repair-recovery/repairs.md
+++ b/content/riak/kv/2.0.4/using/repair-recovery/repairs.md
@@ -57,7 +57,7 @@ riak-admin repair-2i kill
## Repairing Search Indexes
-Riak Search indexes currently have no form of anti-entropy (such as read-repair). Furthermore, for performance and load balancing reasons, Search reads from one random node. This means that when a replica loss has occurred, inconsistent results may be returned.
+Riak search indexes currently have no form of anti-entropy (such as read-repair). Furthermore, for performance and load balancing reasons, Search reads from one random node. This means that when a replica loss has occurred, inconsistent results may be returned.
### Running a Repair
diff --git a/content/riak/kv/2.0.4/using/repair-recovery/secondary-indexes.md b/content/riak/kv/2.0.4/using/repair-recovery/secondary-indexes.md
index 07f8d1b241..01bc01e361 100644
--- a/content/riak/kv/2.0.4/using/repair-recovery/secondary-indexes.md
+++ b/content/riak/kv/2.0.4/using/repair-recovery/secondary-indexes.md
@@ -51,7 +51,7 @@ riak-admin repair-2i kill
## Repairing Search Indexes
-Riak Search indexes currently have no form of anti-entropy (such as read-repair). Furthermore, for performance and load balancing reasons, Search reads from one random node. This means that when a replica loss has occurred, inconsistent results may be returned.
+Riak search indexes currently have no form of anti-entropy (such as read-repair). Furthermore, for performance and load balancing reasons, Search reads from one random node. This means that when a replica loss has occurred, inconsistent results may be returned.
### Running a Repair
diff --git a/content/riak/kv/2.0.4/using/security/basics.md b/content/riak/kv/2.0.4/using/security/basics.md
index 404932c630..ba5eb74921 100644
--- a/content/riak/kv/2.0.4/using/security/basics.md
+++ b/content/riak/kv/2.0.4/using/security/basics.md
@@ -44,7 +44,7 @@ when turning on Riak security. Missing one of these steps will almost
certainly break your application, so make sure that you have done each
of the following **before** enabling security:
-1. Make certain that the original Riak Search (version 1) and link
+1. Make certain that the original Riak search (version 1) and link
walking are not required. Enabling security will break this
functionality. If you wish to use security and Search together, you
will need to use the [new Search feature](/riak/kv/2.0.4/developing/usage/search/).
@@ -487,11 +487,11 @@ Permission | Operation
### Search Query Permission (Riak Search version 1)
Security is incompatible with the original (and now deprecated) Riak
-Search. Riak Search version 1 will stop working if security is enabled.
+search. Riak search version 1 will stop working if security is enabled.
### Search Query Permissions (Riak Search version 2, aka Yokozuna)
-If you are using the new Riak Search, i.e. the Solr-compatible search
+If you are using the new Riak search, i.e. the Solr-compatible search
capabilities included with Riak versions 2.0 and greater, the following
search-related permissions can be granted/revoked:
@@ -508,8 +508,8 @@ disabled, you will get the following error:
>
> `{error,{unknown_permission,"search.query"}}`
>
-> More information on Riak Search and how to enable it can be found in the
-[Riak Search Settings](/riak/kv/2.0.4/configuring/search/) document.
+> More information on Riak search and how to enable it can be found in the
+[Riak search Settings](/riak/kv/2.0.4/configuring/search/) document.
#### Usage Examples
diff --git a/content/riak/kv/2.0.5/configuring/reference.md b/content/riak/kv/2.0.5/configuring/reference.md
index a9f12ff3a9..0c7f194c10 100644
--- a/content/riak/kv/2.0.5/configuring/reference.md
+++ b/content/riak/kv/2.0.5/configuring/reference.md
@@ -273,7 +273,7 @@ stored
search.root_dir |
-The root directory for Riak Search, under which index data and
+ | The root directory for Riak search, under which index data and
configuration is stored. |
./data/yz |
@@ -1386,7 +1386,7 @@ Configurable parameters for intra-cluster, i.e. inter-node, [handoff][cluster op
handoff.max_rejects |
The maximum number of times that a secondary system within Riak,
-such as Riak Search, can block handoff
+such as Riak search, can block handoff
of primary key/value data. The approximate maximum duration that a vnode
can be blocked can be determined by multiplying this setting by
vnode_management_timer . If you want to prevent handoff from
@@ -1630,7 +1630,7 @@ if the JMX server crashes. |
> **Please Note:**
>
-> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
Riak's strong consistency feature has a variety of tunable parameters
that allow you to enable and disable strong consistency, modify the
@@ -2053,7 +2053,7 @@ only in Riak Enterprise 2.0 and later.
#### Upgrading Riak Search with `advanced.config`
-If you are upgrading to Riak 2.x and wish to upgrade to the new [Riak Search][use ref search]\(codename Yokozuna), you will need to enable
+If you are upgrading to Riak 2.x and wish to upgrade to the new [Riak search][use ref search]\(codename Yokozuna), you will need to enable
legacy Search while the upgrade is underway. You can add the following
snippet to your `advanced.config` configuration to do so:
diff --git a/content/riak/kv/2.0.5/configuring/search.md b/content/riak/kv/2.0.5/configuring/search.md
index f8e1059fd6..ab23c2ad3c 100644
--- a/content/riak/kv/2.0.5/configuring/search.md
+++ b/content/riak/kv/2.0.5/configuring/search.md
@@ -26,9 +26,9 @@ aliases:
> **Note on Search 2.0 vs. Legacy Search**
>
-> This document refers to the new Riak Search 2.0 with
+> This document refers to the new Riak search 2.0 with
[Solr](http://lucene.apache.org/solr/) integration (codenamed
-Yokozuna). For information about the deprecated Riak Search, visit [the old Riak Search Settings](http://docs.basho.com/riak/1.4.8/ops/advanced/configs/search/).
+Yokozuna). For information about the deprecated Riak search, visit [the old Riak search Settings](http://docs.basho.com/riak/1.4.8/ops/advanced/configs/search/).
This document covers Riak's Search subsystem from an
operational perspective. If you are looking for more developer-focused
@@ -41,7 +41,7 @@ docs, we recommend the following:
## Enabling Riak Search
-Although Riak Search is integrated into Riak and requires no special
+Although Riak search is integrated into Riak and requires no special
installation, it is not enabled by default. You must enable it in every
node's [configuration files][config reference] as follows:
@@ -68,7 +68,7 @@ optional. A list of these parameters can also be found in our
Field | Default | Valid values | Description
:-----|:--------|:-------------|:-----------
`search` | `off` | `on` or `off` | Enable or disable Search
-`search.anti_entropy.data_dir` | `./data/yz_anti_entropy` | Directory | The directory in which Riak Search stores files related to [active anti-entropy][glossary aae]
+`search.anti_entropy.data_dir` | `./data/yz_anti_entropy` | Directory | The directory in which Riak search stores files related to [active anti-entropy][glossary aae]
`search.root_dir` | `./data/yz` | Directory | The root directory in which index data and configuration is stored
`search.solr.start_timeout` | `30s` | Integer with time units (eg. 2m) | How long Riak will wait for Solr to start (attempts twice before shutdown). Values lower than 1s will be rounded up to 1s.
`search.solr.port` | `8093` | Integer | The port number to which Solr binds (note: binds on every interface)
@@ -81,7 +81,7 @@ cause Solr to require more time to start.
## Solr JVM and Ports
-Riak Search runs one Solr process per node to manage its indexing and
+Riak search runs one Solr process per node to manage its indexing and
search functionality. While the underlying project, Yokozuna, manages
index distribution, node coverage for queries, active anti-entropy
(AAE), and JVM process management, you should provide plenty of RAM and diskspace for running both Riak and the JVM running Solr. We recommend a minimum of 6GB of RAM per node.
diff --git a/content/riak/kv/2.0.5/configuring/strong-consistency.md b/content/riak/kv/2.0.5/configuring/strong-consistency.md
index 31ebc6717b..9744df043d 100644
--- a/content/riak/kv/2.0.5/configuring/strong-consistency.md
+++ b/content/riak/kv/2.0.5/configuring/strong-consistency.md
@@ -38,7 +38,7 @@ toc: true
> **Please Note:**
>
-> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
This document provides information on configuring and monitoring a Riak
cluster's optional strong consistency subsystem. Documentation for
diff --git a/content/riak/kv/2.0.5/developing/api/http/delete-search-index.md b/content/riak/kv/2.0.5/developing/api/http/delete-search-index.md
index 1ab0877ea5..d5fb973480 100644
--- a/content/riak/kv/2.0.5/developing/api/http/delete-search-index.md
+++ b/content/riak/kv/2.0.5/developing/api/http/delete-search-index.md
@@ -15,7 +15,7 @@ aliases:
- /riak/kv/2.0.5/dev/references/http/delete-search-index
---
-Deletes a Riak Search index.
+Deletes a Riak search index.
## Request
diff --git a/content/riak/kv/2.0.5/developing/api/http/fetch-search-index.md b/content/riak/kv/2.0.5/developing/api/http/fetch-search-index.md
index acc5d9d092..78eb33be79 100644
--- a/content/riak/kv/2.0.5/developing/api/http/fetch-search-index.md
+++ b/content/riak/kv/2.0.5/developing/api/http/fetch-search-index.md
@@ -15,7 +15,7 @@ aliases:
- /riak/kv/2.0.5/dev/references/http/fetch-search-index
---
-Retrieves information about a Riak Search [index](/riak/kv/2.0.5/developing/usage/search/#simple-setup).
+Retrieves information about a Riak search [index](/riak/kv/2.0.5/developing/usage/search/#simple-setup).
## Request
diff --git a/content/riak/kv/2.0.5/developing/api/http/fetch-search-schema.md b/content/riak/kv/2.0.5/developing/api/http/fetch-search-schema.md
index 8c7bc7e4a8..bb15cfc722 100644
--- a/content/riak/kv/2.0.5/developing/api/http/fetch-search-schema.md
+++ b/content/riak/kv/2.0.5/developing/api/http/fetch-search-schema.md
@@ -35,4 +35,4 @@ GET /search/schema/
## Response
If the schema is found, Riak will return the contents of the schema as
-XML (all Riak Search schemas are XML).
+XML (all Riak search schemas are XML).
diff --git a/content/riak/kv/2.0.5/developing/api/http/search-index-info.md b/content/riak/kv/2.0.5/developing/api/http/search-index-info.md
index a4b0ed9f18..1515af208b 100644
--- a/content/riak/kv/2.0.5/developing/api/http/search-index-info.md
+++ b/content/riak/kv/2.0.5/developing/api/http/search-index-info.md
@@ -47,6 +47,6 @@ Below is the example output if there is one Search index, called
#### Typical Error Codes
-* `404 Object Not Found` --- Typically returned if Riak Search is not
+* `404 Object Not Found` --- Typically returned if Riak search is not
currently enabled on the node
* `503 Service Unavailable` --- The request timed out internally
diff --git a/content/riak/kv/2.0.5/developing/api/http/store-search-index.md b/content/riak/kv/2.0.5/developing/api/http/store-search-index.md
index c25a1b0f06..6f3f1f6cc8 100644
--- a/content/riak/kv/2.0.5/developing/api/http/store-search-index.md
+++ b/content/riak/kv/2.0.5/developing/api/http/store-search-index.md
@@ -15,7 +15,7 @@ aliases:
- /riak/kv/2.0.5/dev/references/http/store-search-index
---
-Creates a new Riak Search [index](/riak/kv/2.0.5/developing/usage/search/#simple-setup).
+Creates a new Riak search [index](/riak/kv/2.0.5/developing/usage/search/#simple-setup).
## Request
diff --git a/content/riak/kv/2.0.5/developing/api/http/store-search-schema.md b/content/riak/kv/2.0.5/developing/api/http/store-search-schema.md
index 317fe26f03..8b16d194b8 100644
--- a/content/riak/kv/2.0.5/developing/api/http/store-search-schema.md
+++ b/content/riak/kv/2.0.5/developing/api/http/store-search-schema.md
@@ -44,7 +44,7 @@ curl -XPUT http://localhost:8098/search/schema/my_custom_schema \
* `400 Bad Request` --- The schema cannot be created because there is
something wrong with the schema itself, e.g. an XML formatting error
- that makes Riak Search unable to parse the schema
+ that makes Riak search unable to parse the schema
* `409 Conflict` --- The schema cannot be created because there is
already a schema with that name
* `503 Service Unavailable` --- The request timed out internally
diff --git a/content/riak/kv/2.0.5/developing/api/protocol-buffers/yz-index-get.md b/content/riak/kv/2.0.5/developing/api/protocol-buffers/yz-index-get.md
index e01974edaf..227b1950ba 100644
--- a/content/riak/kv/2.0.5/developing/api/protocol-buffers/yz-index-get.md
+++ b/content/riak/kv/2.0.5/developing/api/protocol-buffers/yz-index-get.md
@@ -15,7 +15,7 @@ aliases:
- /riak/kv/2.0.5/dev/references/protocol-buffers/yz-index-get
---
-Retrieve a search index from Riak Search.
+Retrieve a search index from Riak search.
## Request
diff --git a/content/riak/kv/2.0.5/developing/api/protocol-buffers/yz-schema-get.md b/content/riak/kv/2.0.5/developing/api/protocol-buffers/yz-schema-get.md
index 8b3ac85378..73f5130692 100644
--- a/content/riak/kv/2.0.5/developing/api/protocol-buffers/yz-schema-get.md
+++ b/content/riak/kv/2.0.5/developing/api/protocol-buffers/yz-schema-get.md
@@ -15,7 +15,7 @@ aliases:
- /riak/kv/2.0.5/dev/references/protocol-buffers/yz-schema-get
---
-Fetch a [search schema](/riak/kv/2.0.5/developing/usage/search-schemas) from Riak Search.
+Fetch a [search schema](/riak/kv/2.0.5/developing/usage/search-schemas) from Riak search.
## Request
diff --git a/content/riak/kv/2.0.5/developing/app-guide.md b/content/riak/kv/2.0.5/developing/app-guide.md
index e8bf7773a3..1d97fdfd37 100644
--- a/content/riak/kv/2.0.5/developing/app-guide.md
+++ b/content/riak/kv/2.0.5/developing/app-guide.md
@@ -147,22 +147,22 @@ well as relevant links to Basho documentation.
## Search
-Riak Search provides you with [Apache
+Riak search provides you with [Apache
Solr](http://lucene.apache.org/solr/)-powered full-text indexing and
querying on top of the scalability, fault tolerance, and operational
-simplicity of Riak. Our motto for Riak Search: **Write it like Riak.
+simplicity of Riak. Our motto for Riak search: **Write it like Riak.
Query it like Solr**. That is, you can store objects in Riak [like normal][usage create objects] and run full-text queries on those objects later on
using the Solr API.
-* [Using Search][usage search] --- Getting started with Riak Search
+* [Using Search][usage search] --- Getting started with Riak search
* [Search Details][use ref search] --- A detailed overview of the concepts and design
- consideration behind Riak Search
+ considerations behind Riak search
* [Search Schema][usage search schema] --- How to create custom schemas for extracting data
- from Riak Search
+ from Riak search
### When to Use Search
-* **When you need a rich querying API** --- Riak Search gives you access
+* **When you need a rich querying API** --- Riak search gives you access
to the entirety of [Solr](http://lucene.apache.org/solr/)'s extremely
broad API, which enables you to query on the basis of wildcards,
strings, booleans, geolocation, ranges, language-specific fulltext,
@@ -214,7 +214,7 @@ own.
> **Note**:
>
-> Riak Data Types can be used in conjunction with Riak Search,
+> Riak Data Types can be used in conjunction with Riak search,
meaning that the data stored in counters, sets, and maps can be indexed
and searched just like any other data in Riak. Documentation on Data
Types and Search is coming soon.
@@ -278,13 +278,13 @@ or you can write and run your own MapReduce jobs in
### When Not to Use MapReduce
* **When another Riak feature will do** --- Before even considering
- using MapReduce, you should thoroughly investigate [Riak Search][usage search] or [secondary indexes][usage 2i] as possible
+ using MapReduce, you should thoroughly investigate [Riak search][usage search] or [secondary indexes][usage 2i] as possible
solutions to your needs.
In general, you should not think of MapReduce as, for example, Hadoop
within Riak. While it can be useful for certain types of
non-primary-key-based queries, it is neither a "Big Data" processing
-tool nor an indexing mechanism nor a replacement for [Riak Search][usage search]. If you do need a tool like Hadoop or Apache Spark, you should
+tool nor an indexing mechanism nor a replacement for [Riak search][usage search]. If you do need a tool like Hadoop or Apache Spark, you should
consider using Riak in conjunction with a more suitable data processing
tool.
@@ -295,7 +295,7 @@ following problem: how do I know which keys I should look for? Secondary
indexes (2i) provide a solution to this problem, enabling you to tag
objects with either binary or integer metadata and then query Riak for
all of the keys that share specific tags. 2i is especially useful if
-you're storing binary data that is opaque to features like [Riak Search][usage search].
+you're storing binary data that is opaque to features like [Riak search][usage search].
* [Using Secondary Indexes][usage 2i] --- A general guide to using 2i, along
with code samples and information on 2i features like pagination,
@@ -323,7 +323,7 @@ you're storing binary data that is opaque to features like [Riak Search][usage s
One thing to always bear in mind is that Riak enables you to mix and
match a wide variety of approaches in a single cluster. You can use
basic CRUD operations for some of your data, index some of your data to
-be queried by Riak Search, use Riak Data Types for another subset, etc.
+be queried by Riak search, use Riak Data Types for another subset, etc.
You are always free to use a wide array of Riak features---or you can
use none at all and stick to key/value operations.
diff --git a/content/riak/kv/2.0.5/developing/app-guide/advanced-mapreduce.md b/content/riak/kv/2.0.5/developing/app-guide/advanced-mapreduce.md
index ad4a74a325..7ea2161507 100644
--- a/content/riak/kv/2.0.5/developing/app-guide/advanced-mapreduce.md
+++ b/content/riak/kv/2.0.5/developing/app-guide/advanced-mapreduce.md
@@ -74,7 +74,7 @@ MapReduce should generally be treated as a fallback rather than a
standard part of an application. There are often ways to model data
such that dynamic queries become single key retrievals, which are
dramatically faster and more reliable in Riak, and tools such as Riak
-Search and 2i are simpler to use and may place less strain on a
+search and 2i are simpler to use and may place less strain on a
cluster.
### R=1
diff --git a/content/riak/kv/2.0.5/developing/app-guide/strong-consistency.md b/content/riak/kv/2.0.5/developing/app-guide/strong-consistency.md
index 4891784013..99fbf6fb6e 100644
--- a/content/riak/kv/2.0.5/developing/app-guide/strong-consistency.md
+++ b/content/riak/kv/2.0.5/developing/app-guide/strong-consistency.md
@@ -37,7 +37,7 @@ aliases:
> **Please Note:**
>
-> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
In versions 2.0 and later, Riak allows you to create buckets that
provide [strong consistency][use ref strong consistency] guarantees for the data stored within
diff --git a/content/riak/kv/2.0.5/developing/data-modeling.md b/content/riak/kv/2.0.5/developing/data-modeling.md
index e8df0566bf..a4d633c76e 100644
--- a/content/riak/kv/2.0.5/developing/data-modeling.md
+++ b/content/riak/kv/2.0.5/developing/data-modeling.md
@@ -140,7 +140,7 @@ For storing log data from different systems, you could use unique
buckets for each system (e.g. `system1_log_data`, `system2_log_data`,
etc.) and write associated logs to the corresponding buckets. To
analyze that data, you could use Riak's MapReduce system for aggregation
-tasks, such as summing the counts of records for a date or Riak Search
+tasks, such as summing the counts of records for a date or Riak search
for more robust, text-based queries.
### Log Data Complex Case
@@ -224,7 +224,7 @@ For simple retrieval of a specific account, a user ID (plus perhaps a
secondary index on a username or email) is enough. If you foresee the
need to make queries on additional user attributes (e.g. creation time,
user type, or region), plan ahead and either set up additional secondary
-indexes or consider using [Riak Search](/riak/kv/2.0.5/developing/usage/search/) to index the JSON
+indexes or consider using [Riak search](/riak/kv/2.0.5/developing/usage/search/) to index the JSON
contents of the user account.
### User Accounts Community Examples
@@ -310,7 +310,7 @@ In Riak, you can store content of any kind, from HTML files to plain
text to JSON or XML or another document type entirely. Keep in mind that
data in Riak is opaque, with the exception of [Riak Data Types](/riak/kv/2.0.5/developing/data-types),
and so Riak won't "know" about the object unless it is indexed
-[using Riak Search](/riak/kv/2.0.5/developing/usage/search/) or [using secondary indexes](/riak/kv/2.0.5/developing/usage/secondary-indexes/).
+[using Riak search](/riak/kv/2.0.5/developing/usage/search/) or [using secondary indexes](/riak/kv/2.0.5/developing/usage/secondary-indexes/).
### Articles et al Complex Case
@@ -329,7 +329,7 @@ with comments would require your application to call from the posts
and comments buckets to assemble the view.
Other possible cases may involve performing operations on content beyond
-key/value pairs. [Riak Search](/riak/kv/2.0.5/developing/usage/search/) is recommended for use cases
+key/value pairs. [Riak search](/riak/kv/2.0.5/developing/usage/search/) is recommended for use cases
involving full-text search. For lighter-weight querying,
[using secondary indexes](/riak/kv/2.0.5/developing/usage/secondary-indexes/) \(2i) enables you to add metadata to objects to
either query for exact matches or to perform range queries. 2i also
diff --git a/content/riak/kv/2.0.5/developing/data-types.md b/content/riak/kv/2.0.5/developing/data-types.md
index 0c1c9cbf7f..27bdd0cae7 100644
--- a/content/riak/kv/2.0.5/developing/data-types.md
+++ b/content/riak/kv/2.0.5/developing/data-types.md
@@ -259,7 +259,7 @@ All the examples use the bucket type names from above (`counters`, `sets`, and `
Riak data types can be searched like any other object, but with the
added benefit that your data type is indexed as a different type by Solr,
-the search platform behind Riak Search.
+the search platform behind Riak search.
In our Search documentation we offer a [full tutorial](../usage/searching-data-types) as well as a variety of [examples](../usage/search/#data-types-and-search-examples), including code
samples from each of our official client libraries.
diff --git a/content/riak/kv/2.0.5/developing/usage.md b/content/riak/kv/2.0.5/developing/usage.md
index a391d17983..0dc9a744d8 100644
--- a/content/riak/kv/2.0.5/developing/usage.md
+++ b/content/riak/kv/2.0.5/developing/usage.md
@@ -107,7 +107,7 @@ Tutorial on using Riak KV as a document store.
#### [Custom Extractors](./custom-extractors)
-Details on creating and registering custom extractors with Riak Search.
+Details on creating and registering custom extractors with Riak search.
[Learn More >>](./custom-extractors)
diff --git a/content/riak/kv/2.0.5/developing/usage/custom-extractors.md b/content/riak/kv/2.0.5/developing/usage/custom-extractors.md
index 34e389a181..6d47961dff 100644
--- a/content/riak/kv/2.0.5/developing/usage/custom-extractors.md
+++ b/content/riak/kv/2.0.5/developing/usage/custom-extractors.md
@@ -15,8 +15,8 @@ aliases:
- /riak/kv/2.0.5/dev/search/custom-extractors
---
-Solr, and by extension Riak Search, has default extractors for a wide
-variety of data types, including JSON, XML, and plaintext. Riak Search
+Solr, and by extension Riak search, has default extractors for a wide
+variety of data types, including JSON, XML, and plaintext. Riak search
ships with the following extractors:
Content Type | Erlang Module
@@ -30,7 +30,7 @@ No specified type | `yz_noop_extractor`
There are also built-in extractors for [Riak Data Types](/riak/kv/2.0.5/developing/usage/searching-data-types).
If you're working with a data format that does not have a default Solr
-extractor, you can create your own and register it with Riak Search.
+extractor, you can create your own and register it with Riak search.
We'll show you how to do so by way of example.
## The Extractor Interface
@@ -195,7 +195,7 @@ extractor has been successfully registered.
## Verifying Our Custom Extractor
-Now that Riak Search knows how to decode and extract HTTP header packet
+Now that Riak search knows how to decode and extract HTTP header packet
data, let's store some in Riak and then query it. We'll put the example
packet data from above in a `google_packet.bin` file. Then, we'll `PUT`
that binary to Riak's `/search/extract` endpoint:
diff --git a/content/riak/kv/2.0.5/developing/usage/document-store.md b/content/riak/kv/2.0.5/developing/usage/document-store.md
index 365d0bc8ab..e9bd75aeb7 100644
--- a/content/riak/kv/2.0.5/developing/usage/document-store.md
+++ b/content/riak/kv/2.0.5/developing/usage/document-store.md
@@ -16,18 +16,18 @@ aliases:
---
Although Riak wasn't explicitly created as a document store, two
-features recently added to Riak---[Riak Search](/riak/kv/2.0.5/developing/usage/search/) and [Riak Data Types](/riak/kv/2.0.5/developing/data-types/)---make it possible to use Riak as a
+features recently added to Riak---[Riak search](/riak/kv/2.0.5/developing/usage/search/) and [Riak Data Types](/riak/kv/2.0.5/developing/data-types/)---make it possible to use Riak as a
highly scalable document store with rich querying capabilities. In this
tutorial, we'll build a basic implementation of a document store using
[Riak maps](/riak/kv/2.0.5/developing/data-types/maps).
## Basic Approach
-Riak Search enables you to implement a document store in Riak in a
+Riak search enables you to implement a document store in Riak in a
variety of ways. You could, for example, store and query JSON objects or
XML and then retrieve them later via Solr queries. In this tutorial,
however, we will store data in [Riak maps](/riak/kv/2.0.5/developing/data-types/maps),
-index that data using Riak Search, and then run Solr queries against
+index that data using Riak search, and then run Solr queries against
those stored objects.
You can think of these Search indexes as **collections**. Each indexed
@@ -65,7 +65,7 @@ Date posted | Register | Datetime
Whether the post is currently in draft form | Flag | Boolean
Before we start actually creating and storing blog posts, let's set up
-Riak Search with an appropriate index and schema.
+Riak search with an appropriate index and schema.
## Creating a Schema and Index
@@ -209,7 +209,7 @@ curl -XPUT $RIAK_HOST/search/index/blog_posts \
Collections are not a concept that is native to Riak but we can easily
mimic collections by thinking of a bucket type as a collection. When we
-associate a bucket type with a Riak Search index, all of the objects
+associate a bucket type with a Riak search index, all of the objects
stored in any bucket of that bucket type will be queryable on the basis
of that one index. For this tutorial, we'll create a bucket type called
`cms` and think of that as a collection. We could also restrict our
diff --git a/content/riak/kv/2.0.5/developing/usage/search-schemas.md b/content/riak/kv/2.0.5/developing/usage/search-schemas.md
index 947dc82c7c..f6ba8bf457 100644
--- a/content/riak/kv/2.0.5/developing/usage/search-schemas.md
+++ b/content/riak/kv/2.0.5/developing/usage/search-schemas.md
@@ -19,21 +19,21 @@ aliases:
> **Note on Search 2.0 vs. Legacy Search**
>
-> This document refers to the new Riak Search 2.0 with
+> This document refers to the new Riak search 2.0 with
[Solr](http://lucene.apache.org/solr/) integration (codenamed
-Yokozuna). For information about the deprecated Riak Search, visit [the old Using Riak Search docs](http://docs.basho.com/riak/1.4.10/dev/using/search/).
+Yokozuna). For information about the deprecated Riak search, visit [the old Using Riak search docs](http://docs.basho.com/riak/1.4.10/dev/using/search/).
-Riak Search is built for ease of use, allowing you to write values into
-Riak and query for values using Solr. Riak Search does a lot of work
+Riak search is built for ease of use, allowing you to write values into
+Riak and query for values using Solr. Riak search does a lot of work
under the hood to convert your values---plain text, JSON, XML, [Riak Data Types](/riak/kv/2.0.5/developing/data-types/), and [more](/riak/kv/2.0.5/developing/usage/custom-extractors)---into something that can be indexed and searched later.
Nonetheless, you must still instruct Riak/Solr how to index a value. Are
you providing an array of strings? An integer? A date? Is your text in
-English or Russian? You can provide such instructions to Riak Search by
+English or Russian? You can provide such instructions to Riak search by
defining a Solr **schema**.
## The Default Schema
-Riak Search comes bundled with a default schema named `_yz_default`. The
+Riak search comes bundled with a default schema named `_yz_default`. The
default schema covers a wide range of possible field types. You can find
the default schema [on GitHub](https://raw.github.com/basho/yokozuna/develop/priv/default_schema.xml).
While using the default schema provides an easy path to starting
@@ -47,7 +47,7 @@ amounts of disk space, so pay special attention to those indexes.
We'll show you how you can create custom schemas by way of example.
Let's say that you have already created a schema named `cartoons` in a
file named `cartoons.xml`. This would register the custom schema in Riak
-Search:
+search:
```java
import org.apache.commons.io.FileUtils;
@@ -123,11 +123,11 @@ curl -XPUT http://localhost:8098/search/schema/cartoons \
The first step in creating a custom schema is to define exactly what
fields you must index. Part of that step is understanding how Riak
-Search extractors function.
+search extractors function.
### Extractors
-In Riak Search, extractors are modules responsible for pulling out a
+In Riak search, extractors are modules responsible for pulling out a
list of fields and values from a Riak object. How this is achieved
depends on the object's content type, but the two common cases are JSON
and XML, which operate similarly. Our examples here will use JSON.
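The extractor idea described here — pulling a flat list of fields and values out of a nested JSON object — can be pictured with a small sketch. This is plain illustrative Python, not the actual `yz_json_extractor` module; the dot-joined field naming is an assumption for the example:

```python
def extract_json_fields(obj, prefix=""):
    """Flatten a nested JSON-like dict into (field, value) pairs,
    joining nested keys with dots -- a toy stand-in for how an
    extractor turns an opaque object into indexable Solr fields."""
    fields = []
    for key, value in obj.items():
        path = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            fields.extend(extract_json_fields(value, path))
        else:
            fields.append((path, value))
    return fields

print(extract_json_fields({"name": {"first": "Ryan"}, "age": 30}))
# → [('name.first', 'Ryan'), ('age', 30)]
```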
@@ -175,21 +175,21 @@ Solr schemas can be very complex, containing many types and analyzers.
Refer to the [Solr 4.7 reference
guide](http://archive.apache.org/dist/lucene/solr/ref-guide/apache-solr-ref-guide-4.7.pdf)
for a complete list. You should be aware, however, that there are a few
-fields that are required by Riak Search in order to properly distribute
+fields that are required by Riak search in order to properly distribute
an object across a [cluster][concept clusters]. These fields are all prefixed
with `_yz`, which stands for
[Yokozuna](https://github.com/basho/yokozuna), the original code name
-for Riak Search.
+for Riak search.
Below is a bare minimum skeleton Solr Schema. It won't do much for you
-other than allow Riak Search to properly manage your stored objects.
+other than allow Riak search to properly manage your stored objects.
```xml
-
+
@@ -210,14 +210,14 @@ other than allow Riak Search to properly manage your stored objects.
```
-If you're missing any of the above fields, Riak Search will reject your
+If you're missing any of the above fields, Riak search will reject your
custom schema. The value for `<uniqueKey>` _must_ be `_yz_id`.
In the table below, you'll find a description of the various required
fields. You'll rarely need to use any fields other than `_yz_rt` (bucket
type), `_yz_rb` (bucket) and `_yz_rk` (Riak key). On occasion, `_yz_err`
can be helpful if you suspect that your extractors are failing.
-Malformed JSON or XML will cause Riak Search to index a key and set
+Malformed JSON or XML will cause Riak search to index a key and set
`_yz_err` to 1, allowing you to reindex with proper values later.
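The failure path just described can be sketched in a few lines. This is an illustrative Python model of the behavior, not Riak's actual Erlang extractor code: a malformed document is not dropped, it is indexed with `_yz_err` set to 1 so it can be found and reindexed later.

```python
import json

def index_fields(raw_value):
    """Toy model of extractor failure handling: parse the value if
    possible; otherwise flag the object with _yz_err = 1 so it can
    be located and reindexed with proper values later."""
    try:
        doc = json.loads(raw_value)
    except ValueError:  # json.JSONDecodeError subclasses ValueError
        return [("_yz_err", 1)]
    return list(doc.items())

print(index_fields('{"name": "Liono"}'))  # [('name', 'Liono')]
print(index_fields('{"name": '))          # [('_yz_err', 1)]
```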
Field | Name | Description
@@ -262,7 +262,7 @@ field, you also must set `multiValued` to `true`.
-
+
diff --git a/content/riak/kv/2.0.5/developing/usage/search.md b/content/riak/kv/2.0.5/developing/usage/search.md
index 0687e2420b..8e23a23152 100644
--- a/content/riak/kv/2.0.5/developing/usage/search.md
+++ b/content/riak/kv/2.0.5/developing/usage/search.md
@@ -19,9 +19,9 @@ aliases:
## Setup
-Riak Search 2.0 is an integration of Solr (for indexing and querying)
+Riak search 2.0 is an integration of Solr (for indexing and querying)
and Riak (for storage and distribution). There are a few points of
-interest that a user of Riak Search will have to keep in mind in order
+interest that a user of Riak search will have to keep in mind in order
to properly store and later query for values.
1. **Schemas** explain to Solr how to index fields
@@ -29,7 +29,7 @@ to properly store and later query for values.
3. **Bucket-index association** signals to Riak *when* to index values
(this also includes bucket type-index association)
-Riak Search must first be configured with a Solr schema so that Solr
+Riak search must first be configured with a Solr schema so that Solr
knows how to index value fields. If you don't define one, you're
provided with a default schema named `_yz_default`, which can be found
[on
@@ -38,7 +38,7 @@ GitHub](https://raw.githubusercontent.com/basho/yokozuna/develop/priv/default_sc
The examples in this document will presume the default. You can read
more about creating custom schemas in [Search Schema][usage search schema], which you'll likely want to use in a production environment.
-Next, you must create a named Solr index through Riak Search. This index
+Next, you must create a named Solr index through Riak search. This index
represents a collection of similar data that you connect with to perform
queries. When creating an index, you can optionally provide a schema. If
you do not, the default schema will be used. Here we'll use `curl` to create an
@@ -246,7 +246,7 @@ More information can be found in the [Solr
documentation](http://wiki.apache.org/solr/SolrPerformanceFactors).
With a Solr schema, index, and association in place (and possibly a
-security setup as well), we're ready to start using Riak Search. First,
+security setup as well), we're ready to start using Riak search. First,
populate the `cat` bucket with values, in this case information about
four cats: Liono, Cheetara, Snarf, and Panthro.
@@ -495,12 +495,12 @@ curl -XPUT $RIAK_HOST/types/animals/buckets/cats/keys/panthro \
```
If you've used Riak before, you may have noticed that this is no
-different from storing values without Riak Search. That's because we
-designed Riak Search with the following design goal in mind:
+different from storing values without Riak search. That's because we
+designed Riak search with the following design goal in mind:
#### Write it like Riak, query it like Solr
-But how does Riak Search know how to index values, given that you can
+But how does Riak search know how to index values, given that you can
store opaque values in Riak? For that, we employ extractors.
## Extractors
@@ -510,7 +510,7 @@ content type and convert it into a list of fields that can be indexed by
Solr. This is done transparently and automatically as part of the
indexing process. You can even create your own [custom extractors](/riak/kv/2.0.5/developing/usage/custom-extractors).
-Our current example uses the JSON extractor, but Riak Search also
+Our current example uses the JSON extractor, but Riak search also
extracts indexable fields from the following content types:
* JSON (`application/json`)
@@ -560,7 +560,7 @@ one of the default types. A full tutorial can be found in [Custom Search Extract
### Automatic Fields
-When a Riak object is indexed, Riak Search automatically inserts a few
+When a Riak object is indexed, Riak search automatically inserts a few
extra fields as well. These are necessary for a variety of technical
reasons, and for the most part you don't need to think about them.
However, there are a few fields which you may find useful:
@@ -1248,7 +1248,7 @@ curl "$RIAK_HOST/search/query/famous?wt=json&q=*:*&start=$START&rows=$ROWS_PER_P
### Pagination Warning
-Distributed pagination in Riak Search cannot be used reliably when
+Distributed pagination in Riak search cannot be used reliably when
sorting on fields that can have different values per replica of the same
object, namely `score` and `_yz_id`. In the case of sorting by these
fields, you may receive redundant objects. In the case of `score`, the
@@ -1272,14 +1272,14 @@ fix this shortcoming in a future version of Riak.
### MapReduce
-Riak Search allows for piping search results as inputs for
+Riak search allows for piping search results as inputs for
[MapReduce](/riak/kv/2.0.5/developing/usage/mapreduce/) jobs. This is a useful cross-section for
performing post-calculations of results or aggregations of ad-hoc
-queries. The Riak Search MapReduce integration works similarly to
+queries. The Riak search MapReduce integration works similarly to
regular MapReduce, with the notable exception that your input is not a
bucket, but rather index and query arguments to the `yokozuna` module
and `mapred_search` function (an Erlang `module:function` pair that adds
-the Riak Search hook to MapReduce).
+the Riak search hook to MapReduce).
```json
{
diff --git a/content/riak/kv/2.0.5/developing/usage/secondary-indexes.md b/content/riak/kv/2.0.5/developing/usage/secondary-indexes.md
index c7e47bd2c7..e42f4e7022 100644
--- a/content/riak/kv/2.0.5/developing/usage/secondary-indexes.md
+++ b/content/riak/kv/2.0.5/developing/usage/secondary-indexes.md
@@ -19,12 +19,12 @@ aliases:
[plan backend memory]: /riak/kv/2.0.5/setup/planning/backend/memory
[use ref strong consistency]: /riak/2.1.3/using/reference/strong-consistency
-> **Note: Riak Search preferred for querying**
+> **Note: Riak search preferred for querying**
>
> If you're interested in non-primary-key-based querying in Riak, i.e. if
you're looking to go beyond straightforward K/V operations, we now
-recommend [Riak Search](/riak/kv/2.0.5/developing/usage/search/) rather than secondary indexes for
-a variety of reasons. Most importantly, Riak Search has a far more
+recommend [Riak search](/riak/kv/2.0.5/developing/usage/search/) rather than secondary indexes for
+a variety of reasons. Most importantly, Riak search has a far more
capacious querying API and can be used with all of Riak's storage
backends.
@@ -37,7 +37,7 @@ Secondary indexes can be either a binary or string, such as
`sensor_1_data` or `admin_user` or `click_event`, or an integer, such as
`99` or `141121`.
-[Riak Search](/riak/kv/2.0.5/developing/usage/search/) serves analogous purposes but is quite
+[Riak search](/riak/kv/2.0.5/developing/usage/search/) serves analogous purposes but is quite
different because it parses key/value data itself and builds indexes on
the basis of Solr schemas.
@@ -75,7 +75,7 @@ you to discover them later. Indexing enables you to tag those objects
and find all objects with the same tag in a specified bucket later on.
2i is thus recommended when your use case requires an easy-to-use search
-mechanism that does not require a schema (as does [Riak Search](/riak/kv/2.0.5/using/reference/search/#schemas)) and a basic query interface, i.e. an interface that
+mechanism that does not require a schema (as does [Riak search](/riak/kv/2.0.5/using/reference/search/#schemas)) and a basic query interface, i.e. an interface that
enables an application to tell Riak things like "fetch all objects
tagged with the string `Milwaukee_Bucks`" or "fetch all objects tagged
with numbers between 1500 and 1509."
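The two query shapes described — an exact match on a binary tag and an integer range — can be mimicked with an in-memory stand-in. This is plain Python, not the Riak client API; the object keys and values are invented for illustration (the `_bin`/`_int` field suffixes follow 2i naming conventions):

```python
# Hypothetical objects, each carrying 2i-style metadata.
objects = {
    "game1": {"team_bin": "Milwaukee_Bucks", "score_int": 1502},
    "game2": {"team_bin": "Chicago_Bulls",   "score_int": 1490},
    "game3": {"team_bin": "Milwaukee_Bucks", "score_int": 1510},
}

def exact_match(index, value):
    """All keys whose binary index equals the given tag."""
    return sorted(k for k, o in objects.items() if o.get(index) == value)

def range_query(index, lo, hi):
    """All keys whose integer index falls within [lo, hi]."""
    return sorted(k for k, o in objects.items()
                  if index in o and lo <= o[index] <= hi)

print(exact_match("team_bin", "Milwaukee_Bucks"))  # ['game1', 'game3']
print(range_query("score_int", 1500, 1509))        # ['game1']
```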
@@ -89,7 +89,7 @@ piggybacks off of read-repair.
* If your ring size exceeds 512 partitions, 2i can cause performance
issues in large clusters.
* When you need more than the exact match and range searches that 2i
- supports. If that's the case, we recommend checking out [Riak Search](/riak/kv/2.0.5/developing/usage/search/).
+ supports. If that's the case, we recommend checking out [Riak search](/riak/kv/2.0.5/developing/usage/search/).
* When you want to use composite queries. A query like
`last_name=zezeski AND state=MD` would have to be split into two
queries and the results merged (or it would need to involve
diff --git a/content/riak/kv/2.0.5/introduction.md b/content/riak/kv/2.0.5/introduction.md
index 14c6331fcc..0664d2da91 100644
--- a/content/riak/kv/2.0.5/introduction.md
+++ b/content/riak/kv/2.0.5/introduction.md
@@ -30,10 +30,10 @@ that all of the new features listed below are optional:
* **Riak Data Types** --- Riak's new CRDT-based [Data Types](/riak/kv/2.0.5/developing/data-types) can
simplify modeling data in Riak, but are only used in buckets
explicitly configured to use them.
-* **Strong Consistency, Riak Security, and the New Riak Search** ---
+* **Strong Consistency, Riak Security, and the New Riak search** ---
These are subsystems in Riak that must be explicitly turned on to
work. If not turned on, they will have no impact on performance.
- Furthermore, the older Riak Search will continue to be included with
+ Furthermore, the older Riak search will continue to be included with
Riak.
* **Security** --- [Authentication and authorization](/riak/kv/2.0.5/using/security/basics) can be enabled
or disabled at any time.
@@ -98,21 +98,21 @@ Brown](https://github.com/russelldb).
## Riak Search 2.0 (codename: Yokozuna)
-Riak Search 2.0 is a complete, top-to-bottom replacement for Riak
-Search, integrating Riak with [Apache Solr](https://lucene.apache.org/solr/)'s full-text search capabilities and supporting Solr's client query APIs.
+Riak search 2.0 is a complete, top-to-bottom replacement for Riak
+search, integrating Riak with [Apache Solr](https://lucene.apache.org/solr/)'s full-text search capabilities and supporting Solr's client query APIs.
#### Relevant Docs
* [Using Search](/riak/kv/2.0.5/developing/usage/search) provides an overview of how to use the new
- Riak Search.
+ Riak search.
* [Search Schema](/riak/kv/2.0.5/developing/usage/search-schemas) shows you how to create and manage custom search
schemas.
* [Search Details](/riak/kv/2.0.5/using/reference/search) provides an in-depth look at the design
- considerations that went into the new Riak Search.
+ considerations that went into the new Riak search.
#### Video
-[Riak Search 2.0](https://www.youtube.com/watch?v=-c1eynVLNMo) by Basho
+[Riak search 2.0](https://www.youtube.com/watch?v=-c1eynVLNMo) by Basho
engineer and documentarian [Eric Redmond](https://github.com/coderoshi).
## Strong Consistency
@@ -145,7 +145,7 @@ check out [part 2](https://www.youtube.com/watch?v=gXJxbhca5Xg).
Riak 2.0 enables you to manage:
* **Authorization** to perform specific tasks, from GETs and PUTs to
-running MapReduce jobs to administering Riak Search.
+running MapReduce jobs to administering Riak search.
* **Authentication** of Riak clients seeking access to Riak.
@@ -317,7 +317,7 @@ another. Incompatibilities are marked with a
-**†** The data indexed by Riak Search can be
+**†** The data indexed by Riak search can be
stored in a strongly consistent fashion, but indexes themselves are
eventually consistent
**‡** If secondary indexes are attached to an
diff --git a/content/riak/kv/2.0.5/learn/concepts/strong-consistency.md b/content/riak/kv/2.0.5/learn/concepts/strong-consistency.md
index dc6fd4876c..54495bdd93 100644
--- a/content/riak/kv/2.0.5/learn/concepts/strong-consistency.md
+++ b/content/riak/kv/2.0.5/learn/concepts/strong-consistency.md
@@ -20,7 +20,7 @@ aliases:
Please Note:
-Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
Riak was originally designed as an [eventually consistent](/riak/kv/2.0.5/learn/concepts/eventual-consistency) system, fundamentally geared toward providing partition
diff --git a/content/riak/kv/2.0.5/learn/glossary.md b/content/riak/kv/2.0.5/learn/glossary.md
index 0aa070d3aa..6a7807d7b6 100644
--- a/content/riak/kv/2.0.5/learn/glossary.md
+++ b/content/riak/kv/2.0.5/learn/glossary.md
@@ -275,7 +275,7 @@ best described as "UNIX pipes for Riak."
## Riak Search
-Riak Search is a distributed, scalable, failure-tolerant, realtime,
+Riak search is a distributed, scalable, failure-tolerant, realtime,
full-text search engine integrating [Apache
Solr](https://lucene.apache.org/solr/) with Riak KV.
diff --git a/content/riak/kv/2.0.5/learn/use-cases.md b/content/riak/kv/2.0.5/learn/use-cases.md
index 489c3fc4d5..ab63ab5ce9 100644
--- a/content/riak/kv/2.0.5/learn/use-cases.md
+++ b/content/riak/kv/2.0.5/learn/use-cases.md
@@ -153,7 +153,7 @@ For storing log data from different systems, you could use unique
buckets for each system (e.g. `system1_log_data`, `system2_log_data`,
etc.) and write associated logs to the corresponding buckets. To
analyze that data, you could use Riak's MapReduce system for aggregation
-tasks, such as summing the counts of records for a date or Riak Search
+tasks, such as summing the counts of records for a date or Riak search
for more robust, text-based queries.
### Log Data Complex Case
@@ -237,7 +237,7 @@ For simple retrieval of a specific account, a user ID (plus perhaps a
secondary index on a username or email) is enough. If you foresee the
need to make queries on additional user attributes (e.g. creation time,
user type, or region), plan ahead and either set up additional secondary
-indexes or consider using [Riak Search][usage search] to index the JSON
+indexes or consider using [Riak search][usage search] to index the JSON
contents of the user account.
### User Accounts Community Examples
@@ -323,7 +323,7 @@ In Riak, you can store content of any kind, from HTML files to plain
text to JSON or XML or another document type entirely. Keep in mind that
data in Riak is opaque, with the exception of [Riak Data Types](/riak/kv/2.0.5/developing/data-types),
and so Riak won't "know" about the object unless it is indexed
-[using Riak Search][usage search] or [using secondary indexes][usage secondary-indexes].
+[using Riak search][usage search] or [using secondary indexes][usage secondary-indexes].
### Articles et al Complex Case
@@ -342,7 +342,7 @@ with comments would require your application to call from the posts
and comments buckets to assemble the view.
Other possible cases may involve performing operations on content beyond
-key/value pairs. [Riak Search][usage search] is recommended for use cases
+key/value pairs. [Riak search][usage search] is recommended for use cases
involving full-text search. For lighter-weight querying,
[using secondary indexes][usage secondary-indexes] \(2i) enables you to add metadata to objects to
either query for exact matches or to perform range queries. 2i also
diff --git a/content/riak/kv/2.0.5/setup/installing/source/jvm.md b/content/riak/kv/2.0.5/setup/installing/source/jvm.md
index 4fdf313710..acac3c4668 100644
--- a/content/riak/kv/2.0.5/setup/installing/source/jvm.md
+++ b/content/riak/kv/2.0.5/setup/installing/source/jvm.md
@@ -21,10 +21,10 @@ aliases:
[usage search]: /riak/kv/2.0.5/developing/usage/search
-If you are using [Riak Search 2.0][usage search], codename Yokozuna,
+If you are using [Riak search 2.0][usage search], codename Yokozuna,
you will need to install **Java 1.6 or later** to run [Apache
Solr](https://lucene.apache.org/solr/), the search platform that powers
-Riak Search.
+Riak search.
We recommend using Oracle's [JDK
7u25](http://www.oracle.com/technetwork/java/javase/7u25-relnotes-1955741.html).
@@ -35,7 +35,7 @@ page](http://www.oracle.com/technetwork/java/javase/documentation/index.html).
## Installing Solr on OS X
-If you're using Riak Search on Mac OS X, you may see the following
+If you're using Riak search on Mac OS X, you may see the following
error:
```java
diff --git a/content/riak/kv/2.0.5/setup/planning/backend/bitcask.md b/content/riak/kv/2.0.5/setup/planning/backend/bitcask.md
index 2d25bdb42b..5f069c496c 100644
--- a/content/riak/kv/2.0.5/setup/planning/backend/bitcask.md
+++ b/content/riak/kv/2.0.5/setup/planning/backend/bitcask.md
@@ -751,7 +751,7 @@ bitcask.expiry.grace_time = 1h
#### Automatic expiration and Riak Search
-If you are using [Riak Search][usage search] in conjunction with
+If you are using [Riak search][usage search] in conjunction with
Bitcask, please be aware that automatic expiry does not apply to [Search Indexes](../../../../developing/usage/search). If objects are indexed using Search,
those objects can be expired by Bitcask yet still registered in Search
indexes, which means that Search queries may return keys that no longer
diff --git a/content/riak/kv/2.0.5/setup/upgrading/checklist.md b/content/riak/kv/2.0.5/setup/upgrading/checklist.md
index 6eca99d89c..fd5e7478c0 100644
--- a/content/riak/kv/2.0.5/setup/upgrading/checklist.md
+++ b/content/riak/kv/2.0.5/setup/upgrading/checklist.md
@@ -89,7 +89,7 @@ We have compiled these considerations and questions into separate categories for
place if `allow_mult` is set to `true`?
- Have you carefully weighed the [consistency trade-offs][concept eventual consistency] that must be made if `allow_mult` is set to `false`?
- Are all of your [apps replication properties][apps replication properties] configured correctly and uniformly across the cluster?
- - If you are using [Riak Search][usage search], is it enabled on all
+ - If you are using [Riak search][usage search], is it enabled on all
nodes? If you are not, has it been disabled on all nodes?
- If you are using [strong consistency][concept strong consistency] for some or all of your
data:
diff --git a/content/riak/kv/2.0.5/setup/upgrading/search.md b/content/riak/kv/2.0.5/setup/upgrading/search.md
index 110aae3fc2..726bcaf64f 100644
--- a/content/riak/kv/2.0.5/setup/upgrading/search.md
+++ b/content/riak/kv/2.0.5/setup/upgrading/search.md
@@ -16,7 +16,7 @@ version_history:
If you're using Search in a version of Riak prior to 2.0 (1.3.0 to
1.4.x), you should follow these steps to migrate your search indexes
-from the legacy `merge_index` to the new Solr-backed ([Yokozuna](../../../using/reference/search) indexes. The legacy version of Riak Search is now deprecated
+from the legacy `merge_index` to the new Solr-backed ([Yokozuna](../../../using/reference/search)) indexes. The legacy version of Riak search is now deprecated
and does not support most new 2.0 features (i.e. no [Riak Data Types](../../../developing/data-types), [bucket types](../../../using/reference/bucket-types), [strong consistency](../../../using/reference/strong-consistency), or [security](../../../using/security/)), so we highly recommend that you migrate.
And please note that the legacy `merge_index`-based search (aka legacy
@@ -32,7 +32,7 @@ these steps at a time when your cluster is relatively light on traffic,
i.e. _not_ the week before Christmas.
The main goal of a live migration is to stand up indexes in the new Riak
-Search that parallel the existing ones in legacy. New writes add entries
+search that parallel the existing ones in legacy. New writes add entries
to both indexes while AAE adds entries in the new indexes for existing
data.
@@ -72,7 +72,7 @@ algorithm is bad at getting rid of large index files.
## Steps to Upgrading
1. First, you'll perform a normal [rolling upgrade](../cluster).
- As you upgrade, enable `yokozuna` (the new Riak Search library) on
+ As you upgrade, enable `yokozuna` (the new Riak search library) on
  each node. If you're still using `app.config`, it's called `yokozuna`.
If you've chosen to upgrade to the new `riak.conf` config option, it's
called `search`.
@@ -220,7 +220,7 @@ have occurred on every node.
node you know that AAE has brought all new indexes up to date.
7. Next, call the following command that will give HTTP and PB query
-control to the new Riak Search.
+control to the new Riak search.
```curl
riak-admin search switch-to-new-search
@@ -244,7 +244,7 @@ buckets. This deactivates legacy Search.
-d '{"props":{"search": false}}'
```
-9. Disable the Riak Search process on each node by setting `riak_search`
+9. Disable the Riak search process on each node by setting `riak_search`
`enabled` to `false`.
```appconfig
diff --git a/content/riak/kv/2.0.5/setup/upgrading/version.md b/content/riak/kv/2.0.5/setup/upgrading/version.md
index 20c081801a..5c61eb33cb 100644
--- a/content/riak/kv/2.0.5/setup/upgrading/version.md
+++ b/content/riak/kv/2.0.5/setup/upgrading/version.md
@@ -36,7 +36,7 @@ was built with those features in mind. There are official
While we strongly recommend using the newest versions of these clients,
older versions will still work with Riak 2.0, with the drawback that
-those older clients will not able to take advantage of [new features](/riak/kv/2.0.5/introduction) like [data types](/riak/kv/2.0.5/developing/data-types) or the new [Riak Search](/riak/kv/2.0.5/using/reference/search).
+those older clients will not be able to take advantage of [new features](/riak/kv/2.0.5/introduction) like [data types](/riak/kv/2.0.5/developing/data-types) or the new [Riak search](/riak/kv/2.0.5/using/reference/search).
## Bucket Types
@@ -140,7 +140,7 @@ If you decide to upgrade to version 2.0, you can still downgrade your
cluster to an earlier version of Riak if you wish, _unless_ you perform
one of the following actions in your cluster:
-* Index data to be used in conjunction with the new [Riak Search](/riak/kv/2.0.5/using/reference/search).
+* Index data to be used in conjunction with the new [Riak search](/riak/kv/2.0.5/using/reference/search).
* Create _and_ activate one or more [bucket types](/riak/kv/2.0.5/using/reference/bucket-types/). By extension, you will not be able to downgrade your cluster if you have used the following features, both of which rely on bucket types:
- [Strong consistency](/riak/2.0.5/using/reference/strong-consistency)
- [Riak Data Types](/riak/kv/2.0.5/developing/data-types)
@@ -208,7 +208,7 @@ default to a value of `15`, which can cause problems in some clusters.
## Upgrading Search
-Information on upgrading Riak Search to 2.0 can be found in our
+Information on upgrading Riak search to 2.0 can be found in our
[Search upgrade guide](/riak/kv/2.0.5/setup/upgrading/search).
## Migrating from Short Names
diff --git a/content/riak/kv/2.0.5/using/admin/riak-admin.md b/content/riak/kv/2.0.5/using/admin/riak-admin.md
index 6d105e3ee9..78088db26c 100644
--- a/content/riak/kv/2.0.5/using/admin/riak-admin.md
+++ b/content/riak/kv/2.0.5/using/admin/riak-admin.md
@@ -590,7 +590,7 @@ riak-admin repair-2i kill
## search
The search command provides sub-commands for various administrative
-work related to the new Riak Search.
+work related to the new Riak search.
```bash
riak-admin search
@@ -639,7 +639,7 @@ riak-admin search switch-to-new-search
```
Switch handling of the HTTP `/solr//select` resource and
-protocol buffer query messages from legacy Riak Search to new Search
+protocol buffer query messages from legacy Riak search to new Search
(Yokozuna).
## services
diff --git a/content/riak/kv/2.0.5/using/cluster-operations/active-anti-entropy.md b/content/riak/kv/2.0.5/using/cluster-operations/active-anti-entropy.md
index 04433fd439..2e713ce9b3 100644
--- a/content/riak/kv/2.0.5/using/cluster-operations/active-anti-entropy.md
+++ b/content/riak/kv/2.0.5/using/cluster-operations/active-anti-entropy.md
@@ -262,9 +262,9 @@ AAE-related background tasks, analogous to [open files limit](../../performance/
## AAE and Riak Search
Riak's AAE subsystem works to repair object inconsistencies both
-for normal key/value objects as well as data related to [Riak Search](../../../developing/usage/search). In particular, AAE acts on indexes stored in
+for normal key/value objects and for data related to [Riak search](../../../developing/usage/search). In particular, AAE acts on indexes stored in
[Solr](http://lucene.apache.org/solr/), the search platform that drives
-Riak Search. Implementation details for AAE and Search can be found in
+Riak search. Implementation details for AAE and Search can be found in
the [Search Details](../../reference/search/#active-anti-entropy-aae)
documentation.
diff --git a/content/riak/kv/2.0.5/using/cluster-operations/inspecting-node.md b/content/riak/kv/2.0.5/using/cluster-operations/inspecting-node.md
index 4625ed197e..a4f0176888 100644
--- a/content/riak/kv/2.0.5/using/cluster-operations/inspecting-node.md
+++ b/content/riak/kv/2.0.5/using/cluster-operations/inspecting-node.md
@@ -297,7 +297,7 @@ Stat | Description
`erlydtl_version` | [ErlyDTL](http://github.com/erlydtl/erlydtl)
`riak_control_version` | [Riak Control](http://github.com/basho/riak_control)
`cluster_info_version` | [Cluster Information](http://github.com/basho/cluster_info)
-`riak_search_version` | [Riak Search](http://github.com/basho/riak_search)
+`riak_search_version` | [Riak search](http://github.com/basho/riak_search)
`merge_index_version` | [Merge Index](http://github.com/basho/merge_index)
`riak_kv_version` | [Riak KV](http://github.com/basho/riak_kv)
`sidejob_version` | [Sidejob](http://github.com/basho/sidejob)
@@ -326,17 +326,17 @@ Stat | Description
### Riak Search Statistics
-The following statistics related to Riak Search message queues are
+The following statistics related to Riak search message queues are
available.
Stat | Description
-----------------------------|---------------------------------------------------
-`riak_search_vnodeq_max` | Maximum number of unprocessed messages all virtual node (vnode) message queues in the Riak Search subsystem have received on this node in the last minute
-`riak_search_vnodeq_mean` | Mean number of unprocessed messages all vnode message queues in the Riak Search subsystem have received on this node in the last minute
-`riak_search_vnodeq_median` | Median number of unprocessed messages all vnode message queues in the Riak Search subsystem have received on this node in the last minute
-`riak_search_vnodeq_min` | Minimum number of unprocessed messages all vnode message queues in the Riak Search subsystem have received on this node in the last minute
-`riak_search_vnodeq_total` | Total number of unprocessed messages all vnode message queues in the Riak Search subsystem have received on this node since it was started
-`riak_search_vnodes_running` | Total number of vnodes currently running in the Riak Search subsystem
+`riak_search_vnodeq_max` | Maximum number of unprocessed messages all virtual node (vnode) message queues in the Riak search subsystem have received on this node in the last minute
+`riak_search_vnodeq_mean` | Mean number of unprocessed messages all vnode message queues in the Riak search subsystem have received on this node in the last minute
+`riak_search_vnodeq_median` | Median number of unprocessed messages all vnode message queues in the Riak search subsystem have received on this node in the last minute
+`riak_search_vnodeq_min` | Minimum number of unprocessed messages all vnode message queues in the Riak search subsystem have received on this node in the last minute
+`riak_search_vnodeq_total` | Total number of unprocessed messages all vnode message queues in the Riak search subsystem have received on this node since it was started
+`riak_search_vnodes_running` | Total number of vnodes currently running in the Riak search subsystem
Note that under ideal operation and with the exception of
`riak_search_vnodes_running` these statistics should contain low values
@@ -449,7 +449,7 @@ Check | Description
`ring_membership` | Cluster membership validity
`ring_preflists` | Check if the ring satisfies `n_val`
`ring_size` | Check if the ring size is valid
-`search` | Check whether Riak Search is enabled on all nodes
+`search` | Check whether Riak search is enabled on all nodes
The `--level` flag enables you to specify the log level and thus to
filter messages based on type. You can pass in any of the message types
diff --git a/content/riak/kv/2.0.5/using/cluster-operations/strong-consistency.md b/content/riak/kv/2.0.5/using/cluster-operations/strong-consistency.md
index 33e7bfbe94..5e87e94667 100644
--- a/content/riak/kv/2.0.5/using/cluster-operations/strong-consistency.md
+++ b/content/riak/kv/2.0.5/using/cluster-operations/strong-consistency.md
@@ -14,7 +14,7 @@ toc: true
Please Note:
-Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
## Monitoring Strong Consistency
diff --git a/content/riak/kv/2.0.5/using/reference/handoff.md b/content/riak/kv/2.0.5/using/reference/handoff.md
index 1aa50ca197..4b68c025f5 100644
--- a/content/riak/kv/2.0.5/using/reference/handoff.md
+++ b/content/riak/kv/2.0.5/using/reference/handoff.md
@@ -121,7 +121,7 @@ handoff.use_background_manager = on
### Maximum Rejects
-If you're using Riak features such as [Riak Search](/riak/kv/2.0.5/developing/usage/search/),
+If you're using Riak features such as [Riak search](/riak/kv/2.0.5/developing/usage/search/),
those subsystems can block handoff of primary key/value data, i.e. data
that you interact with via normal reads and writes.
diff --git a/content/riak/kv/2.0.5/using/reference/search.md b/content/riak/kv/2.0.5/using/reference/search.md
index 11280311d0..af63bd29b6 100644
--- a/content/riak/kv/2.0.5/using/reference/search.md
+++ b/content/riak/kv/2.0.5/using/reference/search.md
@@ -19,14 +19,14 @@ aliases:
> **Note on Search 2.0 vs. Legacy Search**
>
-> This document refers to the new Riak Search 2.0 with
+> This document refers to the new Riak search 2.0 with
[Solr](http://lucene.apache.org/solr/) integration (codenamed
-Yokozuna). For information about the deprecated Riak Search, visit [the old Using Riak Search docs](http://docs.basho.com/riak/1.4.10/dev/using/search/).
+Yokozuna). For information about the deprecated Riak search, visit [the old Using Riak search docs](http://docs.basho.com/riak/1.4.10/dev/using/search/).
-The project that implements Riak Search is codenamed Yokozuna. This is a
+The project that implements Riak search is codenamed Yokozuna. This is a
more detailed overview of the concepts and reasons behind the design of
Yokozuna, for those interested. If you're simply looking to use Riak
-Search, you should check out the [Using Search](/riak/kv/2.0.5/developing/usage/search) document.
+search, you should check out the [Using Search](/riak/kv/2.0.5/developing/usage/search) document.

@@ -35,30 +35,30 @@ Search, you should check out the [Using Search](/riak/kv/2.0.5/developing/usage/
In Erlang OTP, an "application" is a group of modules and Erlang
processes which together perform a specific task. The word application
is confusing because most people think of an application as an entire
-program such as Emacs or Photoshop. But Riak Search is just a sub-system
+program such as Emacs or Photoshop. But Riak search is just a sub-system
in Riak itself. Erlang applications are often stand-alone, but Riak
-Search is more like an appendage of Riak. It requires other subsystems
+search is more like an appendage of Riak. It requires other subsystems
like Riak Core and KV, but also extends their functionality by providing
search capabilities for KV data.
-The purpose of Riak Search is to bring more sophisticated and robust
+The purpose of Riak search is to bring more sophisticated and robust
query and search support to Riak. Many people consider Lucene and
programs built on top of it, such as Solr, as the standard for
open-source search. There are many successful applications built on
Lucene/Solr, and it sets the standard for the feature set that
developers and users expect. Meanwhile, Riak has a great story as a
-highly-available, distributed key/value store. Riak Search takes
+highly-available, distributed key/value store. Riak search takes
advantage of the fact that Riak already knows how to do the distributed
bits, combining its feature set with that of Solr, taking advantage of
the strengths of each.
-Riak Search is a mediator between Riak and Solr. There is nothing
+Riak search is a mediator between Riak and Solr. There is nothing
stopping a user from deploying these two programs separately, but this
would leave the user responsible for the glue between them. That glue
can be tricky to write. It requires dealing with monitoring, querying,
indexing, and dissemination of information.
-Unlike Solr by itself, Riak Search knows how to do all of the following:
+Unlike Solr by itself, Riak search knows how to do all of the following:
* Listen for changes in key/value (KV) data and to make the appropriate
changes to indexes that live in Solr. It also knows how to take a user
@@ -75,13 +75,13 @@ system (OS) process running a JVM which hosts Solr on the Jetty
application server. This OS process is a child of the Erlang OS process
running Riak.
-Riak Search has a `gen_server` process which monitors the JVM OS
+Riak search has a `gen_server` process which monitors the JVM OS
process. The code for this server is in `yz_solr_proc`. When the JVM
process crashes, this server crashes, causing its supervisor to restart
it.
If there is more than 1 restart in 45 seconds, the entire Riak node will
-be shut down. If Riak Search is enabled and Solr cannot function for
+be shut down. If Riak search is enabled and Solr cannot function for
some reason, the Riak node needs to go down so that the user will notice
and take corrective action.
@@ -91,7 +91,7 @@ This double monitoring along with the crash semantics means that neither
process may exist without the other. They are either both up or both
down.
-All other communication between Riak Search and Solr is performed via
+All other communication between Riak search and Solr is performed via
HTTP, including querying, indexing, and administration commands. The
ibrowse Erlang HTTP client is used to manage these communications as
both it and the Jetty container hosting Solr pool HTTP connections,
@@ -105,11 +105,11 @@ contains index entries for objects. Each such index maintains its own
set of files on disk---a critical difference from Riak KV, in which a
bucket is a purely logical entity and not physically disjoint at all. A
Solr index requires significantly less disk space than the corresponding
-legacy Riak Search index, depending on the Solr schema used.
+legacy Riak search index, depending on the Solr schema used.
Indexes may be associated with zero or more buckets. At creation time,
however, each index has no associated buckets---unlike the legacy Riak
-Search, indexes in the new Riak Search do not implicitly create bucket
+search, indexes in the new Riak search do not implicitly create bucket
associations, meaning that this must be done as a separate configuration
step.
@@ -141,7 +141,7 @@ flat collection of field-value pairs. "Flat" here means that a field's
value cannot be a nested structure of field-value pairs; the values are
treated as-is (non-composite is another way to say it).
-Because of this mismatch between KV and Solr, Riak Search must act as a
+Because of this mismatch between KV and Solr, Riak search must act as a
mediator between the two, meaning it must have a way to inspect a KV
object and create a structure which Solr can ingest for indexing. In
Solr this structure is called a **document**. This task of creating a
diff --git a/content/riak/kv/2.0.5/using/reference/secondary-indexes.md b/content/riak/kv/2.0.5/using/reference/secondary-indexes.md
index 4826a58778..fa1edda02d 100644
--- a/content/riak/kv/2.0.5/using/reference/secondary-indexes.md
+++ b/content/riak/kv/2.0.5/using/reference/secondary-indexes.md
@@ -18,11 +18,11 @@ aliases:
[usage bucket types]: /riak/kv/2.0.5/developing/usage/bucket-types
[use ref strong consistency]: /riak/2.1.3/using/reference/strong-consistency
-> **Note: Riak Search preferred for querying**
+> **Note: Riak search preferred for querying**
>
> If you're interested in non-primary-key-based querying in Riak, i.e. if
you're looking to go beyond straightforward K/V operations, we now
-recommend [Riak Search](/riak/kv/2.0.5/developing/usage/search/) rather than secondary indexes for a variety of reasons. Riak Search has a far more capacious querying API and can be used with all of Riak's storage backends.
+recommend [Riak search](/riak/kv/2.0.5/developing/usage/search/) rather than secondary indexes for a variety of reasons. Riak search has a far more capacious querying API and can be used with all of Riak's storage backends.
This document provides implementation and other details for Riak's
[secondary indexes](/riak/kv/2.0.5/developing/usage/secondary-indexes/) \(2i) feature.
diff --git a/content/riak/kv/2.0.5/using/reference/statistics-monitoring.md b/content/riak/kv/2.0.5/using/reference/statistics-monitoring.md
index 24ce35d9dd..6da1a6c3f7 100644
--- a/content/riak/kv/2.0.5/using/reference/statistics-monitoring.md
+++ b/content/riak/kv/2.0.5/using/reference/statistics-monitoring.md
@@ -134,7 +134,7 @@ Metric | Also | Notes
:------|:-----|:------------------
```node_get_fsm_siblings_mean``` | ```_median```, ```_95```, ```_99```, ```_100``` | Number of siblings encountered during all GET operations by this node within the last minute. Watch for abnormally high sibling counts, especially max ones.
```node_get_fsm_objsize_mean``` | ```_median```, ```_95```, ```_99```, ```_100``` | Object size encountered by this node within the last minute. Abnormally large objects (especially paired with high sibling counts) can indicate sibling explosion.
-```riak_search_vnodeq_mean``` | ```_median```, ```_95```, ```_99```, ```_100``` | Number of unprocessed messages in the vnode message queues of the Riak Search subsystem on this node in the last minute. The queues give you an idea of how backed up Solr is getting.
+```riak_search_vnodeq_mean``` | ```_median```, ```_95```, ```_99```, ```_100``` | Number of unprocessed messages in the vnode message queues of the Riak search subsystem on this node in the last minute. The queues give you an idea of how backed up Solr is getting.
```search_index_fail_one``` | | Number of "Failed to index document" errors Search encountered for the last minute
```pbc_active``` | | Number of currently active protocol buffer connections
```pbc_connects``` | | Number of new protocol buffer connections established during the last minute
diff --git a/content/riak/kv/2.0.5/using/repair-recovery/errors.md b/content/riak/kv/2.0.5/using/repair-recovery/errors.md
index 9a9f311d5c..079897c036 100644
--- a/content/riak/kv/2.0.5/using/repair-recovery/errors.md
+++ b/content/riak/kv/2.0.5/using/repair-recovery/errors.md
@@ -328,7 +328,7 @@ exit with reason bad return value: {error,eaddrinuse} in context start_error | A
exited with reason: eaddrnotavail in gen_server:init_it/6 line 320 | An error like this example can result when Riak cannot bind to the addresses specified in the configuration. In this case, you should verify HTTP and Protocol Buffers addresses in `app.config` and ensure that the ports being used are not in the privileged (1-1024) range as the `riak` user will not have access to such ports.
gen_server riak_core_capability terminated with reason: no function clause matching orddict:fetch('riak@192.168.2.2', []) line 72 | Error output like this example can indicate that a previously running Riak node with an original `-name` value in `vm.args` has been modified by simply changing the value in `vm.args` and not properly through `riak-admin cluster replace`.
** Configuration error: [FRAMEWORK-MIB]: missing context.conf file => generating a default file | This error is commonly encountered when starting Riak Enterprise without prior [SNMP](/riak/kv/2.0.5/using/reference/snmp) configuration.
-RPC to 'node@example.com' failed: {'EXIT', {badarg, [{ets,lookup, [schema_table,<<"search-example">>], []} {riak_search_config,get_schema,1, [{file,"src/riak_search_config.erl"}, {line,69}]} ...| This error can be caused when attempting to use Riak Search without first enabling it in each node's `app.config`. See the [configuration files][config reference] documentation for more information on enabling Riak Search.
+RPC to 'node@example.com' failed: {'EXIT', {badarg, [{ets,lookup, [schema_table,<<"search-example">>], []} {riak_search_config,get_schema,1, [{file,"src/riak_search_config.erl"}, {line,69}]} ...| This error can be caused when attempting to use Riak search without first enabling it in each node's `app.config`. See the [configuration files][config reference] documentation for more information on enabling Riak search.
### More
diff --git a/content/riak/kv/2.0.5/using/repair-recovery/repairs.md b/content/riak/kv/2.0.5/using/repair-recovery/repairs.md
index 821d6d6c8f..195f794224 100644
--- a/content/riak/kv/2.0.5/using/repair-recovery/repairs.md
+++ b/content/riak/kv/2.0.5/using/repair-recovery/repairs.md
@@ -57,7 +57,7 @@ riak-admin repair-2i kill
## Repairing Search Indexes
-Riak Search indexes currently have no form of anti-entropy (such as read-repair). Furthermore, for performance and load balancing reasons, Search reads from one random node. This means that when a replica loss has occurred, inconsistent results may be returned.
+Riak search indexes currently have no form of anti-entropy (such as read-repair). Furthermore, for performance and load balancing reasons, Search reads from one random node. This means that when a replica loss has occurred, inconsistent results may be returned.
### Running a Repair
diff --git a/content/riak/kv/2.0.5/using/repair-recovery/secondary-indexes.md b/content/riak/kv/2.0.5/using/repair-recovery/secondary-indexes.md
index 593ab7c11a..d6d76d7ad2 100644
--- a/content/riak/kv/2.0.5/using/repair-recovery/secondary-indexes.md
+++ b/content/riak/kv/2.0.5/using/repair-recovery/secondary-indexes.md
@@ -51,7 +51,7 @@ riak-admin repair-2i kill
## Repairing Search Indexes
-Riak Search indexes currently have no form of anti-entropy (such as read-repair). Furthermore, for performance and load balancing reasons, Search reads from one random node. This means that when a replica loss has occurred, inconsistent results may be returned.
+Riak search indexes currently have no form of anti-entropy (such as read-repair). Furthermore, for performance and load balancing reasons, Search reads from one random node. This means that when a replica loss has occurred, inconsistent results may be returned.
### Running a Repair
diff --git a/content/riak/kv/2.0.5/using/security/basics.md b/content/riak/kv/2.0.5/using/security/basics.md
index d05615febb..701b6835c0 100644
--- a/content/riak/kv/2.0.5/using/security/basics.md
+++ b/content/riak/kv/2.0.5/using/security/basics.md
@@ -44,7 +44,7 @@ when turning on Riak security. Missing one of these steps will almost
certainly break your application, so make sure that you have done each
of the following **before** enabling security:
-1. Make certain that the original Riak Search (version 1) and link
+1. Make certain that the original Riak search (version 1) and link
walking are not required. Enabling security will break this
functionality. If you wish to use security and Search together, you
will need to use the [new Search feature](/riak/kv/2.0.5/developing/usage/search/).
@@ -487,11 +487,11 @@ Permission | Operation
### Search Query Permission (Riak Search version 1)
Security is incompatible with the original (and now deprecated) Riak
-Search. Riak Search version 1 will stop working if security is enabled.
+search. Riak search version 1 will stop working if security is enabled.
### Search Query Permissions (Riak Search version 2, aka Yokozuna)
-If you are using the new Riak Search, i.e. the Solr-compatible search
+If you are using the new Riak search, i.e. the Solr-compatible search
capabilities included with Riak versions 2.0 and greater, the following
search-related permissions can be granted/revoked:
@@ -508,8 +508,8 @@ disabled, you will get the following error:
>
> `{error,{unknown_permission,"search.query"}}`
>
-> More information on Riak Search and how to enable it can be found in the
-[Riak Search Settings](/riak/kv/2.0.5/configuring/search/) document.
+> More information on Riak search and how to enable it can be found in the
+[Riak search settings](/riak/kv/2.0.5/configuring/search/) document.
#### Usage Examples
diff --git a/content/riak/kv/2.0.6/configuring/reference.md b/content/riak/kv/2.0.6/configuring/reference.md
index 88dcab0f6d..bd61696453 100644
--- a/content/riak/kv/2.0.6/configuring/reference.md
+++ b/content/riak/kv/2.0.6/configuring/reference.md
@@ -273,7 +273,7 @@ stored
search.root_dir |
-The root directory for Riak Search, under which index data and
+ | The root directory for Riak search, under which index data and
configuration is stored. |
./data/yz |
@@ -1386,7 +1386,7 @@ Configurable parameters for intra-cluster, i.e. inter-node, [handoff][cluster op
handoff.max_rejects |
The maximum number of times that a secondary system within Riak,
-such as Riak Search, can block handoff
+such as Riak search, can block handoff
of primary key/value data. The approximate maximum duration that a vnode
can be blocked can be determined by multiplying this setting by
vnode_management_timer . If you want to prevent handoff from
@@ -1630,7 +1630,7 @@ if the JMX server crashes. |
> **Please Note:**
>
-> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
Riak's strong consistency feature has a variety of tunable parameters
that allow you to enable and disable strong consistency, modify the
@@ -2053,7 +2053,7 @@ only in Riak Enterprise 2.0 and later.
#### Upgrading Riak Search with `advanced.config`
-If you are upgrading to Riak 2.x and wish to upgrade to the new [Riak Search][use ref search]\(codename Yokozuna), you will need to enable
+If you are upgrading to Riak 2.x and wish to upgrade to the new [Riak search][use ref search] \(codename Yokozuna), you will need to enable
legacy Search while the upgrade is underway. You can add the following
snippet to your `advanced.config` configuration to do so:
diff --git a/content/riak/kv/2.0.6/configuring/search.md b/content/riak/kv/2.0.6/configuring/search.md
index 74538ab18d..81a27e8421 100644
--- a/content/riak/kv/2.0.6/configuring/search.md
+++ b/content/riak/kv/2.0.6/configuring/search.md
@@ -26,9 +26,9 @@ aliases:
> **Note on Search 2.0 vs. Legacy Search**
>
-> This document refers to the new Riak Search 2.0 with
+> This document refers to the new Riak search 2.0 with
[Solr](http://lucene.apache.org/solr/) integration (codenamed
-Yokozuna). For information about the deprecated Riak Search, visit [the old Riak Search Settings](http://docs.basho.com/riak/1.4.8/ops/advanced/configs/search/).
+Yokozuna). For information about the deprecated Riak search, visit [the old Riak search settings](http://docs.basho.com/riak/1.4.8/ops/advanced/configs/search/).
This document covers Riak's Search subsystem from an
operational perspective. If you are looking for more developer-focused
@@ -41,7 +41,7 @@ docs, we recommend the following:
## Enabling Riak Search
-Although Riak Search is integrated into Riak and requires no special
+Although Riak search is integrated into Riak and requires no special
installation, it is not enabled by default. You must enable it in every
node's [configuration files][config reference] as follows:
@@ -68,7 +68,7 @@ optional. A list of these parameters can also be found in our
Field | Default | Valid values | Description
:-----|:--------|:-------------|:-----------
`search` | `off` | `on` or `off` | Enable or disable Search
-`search.anti_entropy.data_dir` | `./data/yz_anti_entropy` | Directory | The directory in which Riak Search stores files related to [active anti-entropy][glossary aae]
+`search.anti_entropy.data_dir` | `./data/yz_anti_entropy` | Directory | The directory in which Riak search stores files related to [active anti-entropy][glossary aae]
`search.root_dir` | `./data/yz` | Directory | The root directory in which index data and configuration is stored
`search.solr.start_timeout` | `30s` | Integer with time units (eg. 2m) | How long Riak will wait for Solr to start (attempts twice before shutdown). Values lower than 1s will be rounded up to 1s.
`search.solr.port` | `8093` | Integer | The port number to which Solr binds (note: binds on every interface)
@@ -81,7 +81,7 @@ cause Solr to require more time to start.
## Solr JVM and Ports
-Riak Search runs one Solr process per node to manage its indexing and
+Riak search runs one Solr process per node to manage its indexing and
search functionality. While the underlying project, Yokozuna, manages
index distribution, node coverage for queries, active anti-entropy
(AAE), and JVM process management, you should provide plenty of RAM and disk space for running both Riak and the JVM running Solr. We recommend a minimum of 6GB of RAM per node.
diff --git a/content/riak/kv/2.0.6/configuring/strong-consistency.md b/content/riak/kv/2.0.6/configuring/strong-consistency.md
index 8d363cb785..6c269592d0 100644
--- a/content/riak/kv/2.0.6/configuring/strong-consistency.md
+++ b/content/riak/kv/2.0.6/configuring/strong-consistency.md
@@ -38,7 +38,7 @@ toc: true
> **Please Note:**
>
-> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
This document provides information on configuring and monitoring a Riak
cluster's optional strong consistency subsystem. Documentation for
diff --git a/content/riak/kv/2.0.6/developing/api/http/delete-search-index.md b/content/riak/kv/2.0.6/developing/api/http/delete-search-index.md
index 3c0040b591..66334fe62f 100644
--- a/content/riak/kv/2.0.6/developing/api/http/delete-search-index.md
+++ b/content/riak/kv/2.0.6/developing/api/http/delete-search-index.md
@@ -15,7 +15,7 @@ aliases:
- /riak/kv/2.0.6/dev/references/http/delete-search-index
---
-Deletes a Riak Search index.
+Deletes a Riak search index.
## Request
diff --git a/content/riak/kv/2.0.6/developing/api/http/fetch-search-index.md b/content/riak/kv/2.0.6/developing/api/http/fetch-search-index.md
index c5474f7a03..21535d999f 100644
--- a/content/riak/kv/2.0.6/developing/api/http/fetch-search-index.md
+++ b/content/riak/kv/2.0.6/developing/api/http/fetch-search-index.md
@@ -15,7 +15,7 @@ aliases:
- /riak/kv/2.0.6/dev/references/http/fetch-search-index
---
-Retrieves information about a Riak Search [index](/riak/kv/2.0.6/developing/usage/search/#simple-setup).
+Retrieves information about a Riak search [index](/riak/kv/2.0.6/developing/usage/search/#simple-setup).
## Request
diff --git a/content/riak/kv/2.0.6/developing/api/http/fetch-search-schema.md b/content/riak/kv/2.0.6/developing/api/http/fetch-search-schema.md
index 91fc83fb22..388efe4ae9 100644
--- a/content/riak/kv/2.0.6/developing/api/http/fetch-search-schema.md
+++ b/content/riak/kv/2.0.6/developing/api/http/fetch-search-schema.md
@@ -35,4 +35,4 @@ GET /search/schema/
## Response
If the schema is found, Riak will return the contents of the schema as
-XML (all Riak Search schemas are XML).
+XML (all Riak search schemas are XML).
diff --git a/content/riak/kv/2.0.6/developing/api/http/search-index-info.md b/content/riak/kv/2.0.6/developing/api/http/search-index-info.md
index c153e99aec..7985155ef4 100644
--- a/content/riak/kv/2.0.6/developing/api/http/search-index-info.md
+++ b/content/riak/kv/2.0.6/developing/api/http/search-index-info.md
@@ -47,6 +47,6 @@ Below is the example output if there is one Search index, called
#### Typical Error Codes
-* `404 Object Not Found` --- Typically returned if Riak Search is not
+* `404 Object Not Found` --- Typically returned if Riak search is not
currently enabled on the node
* `503 Service Unavailable` --- The request timed out internally
diff --git a/content/riak/kv/2.0.6/developing/api/http/store-search-index.md b/content/riak/kv/2.0.6/developing/api/http/store-search-index.md
index 2bffc7368d..39c66de78d 100644
--- a/content/riak/kv/2.0.6/developing/api/http/store-search-index.md
+++ b/content/riak/kv/2.0.6/developing/api/http/store-search-index.md
@@ -15,7 +15,7 @@ aliases:
- /riak/kv/2.0.6/dev/references/http/store-search-index
---
-Creates a new Riak Search [index](/riak/kv/2.0.6/developing/usage/search/#simple-setup).
+Creates a new Riak search [index](/riak/kv/2.0.6/developing/usage/search/#simple-setup).
## Request
diff --git a/content/riak/kv/2.0.6/developing/api/http/store-search-schema.md b/content/riak/kv/2.0.6/developing/api/http/store-search-schema.md
index 66d0b76015..8fc2314b80 100644
--- a/content/riak/kv/2.0.6/developing/api/http/store-search-schema.md
+++ b/content/riak/kv/2.0.6/developing/api/http/store-search-schema.md
@@ -44,7 +44,7 @@ curl -XPUT http://localhost:8098/search/schema/my_custom_schema \
* `400 Bad Request` --- The schema cannot be created because there is
something wrong with the schema itself, e.g. an XML formatting error
- that makes Riak Search unable to parse the schema
+ that makes Riak search unable to parse the schema
* `409 Conflict` --- The schema cannot be created because there is
already a schema with that name
* `503 Service Unavailable` --- The request timed out internally
diff --git a/content/riak/kv/2.0.6/developing/api/protocol-buffers/yz-index-get.md b/content/riak/kv/2.0.6/developing/api/protocol-buffers/yz-index-get.md
index 7018e35207..944a859318 100644
--- a/content/riak/kv/2.0.6/developing/api/protocol-buffers/yz-index-get.md
+++ b/content/riak/kv/2.0.6/developing/api/protocol-buffers/yz-index-get.md
@@ -15,7 +15,7 @@ aliases:
- /riak/kv/2.0.6/dev/references/protocol-buffers/yz-index-get
---
-Retrieve a search index from Riak Search.
+Retrieve a search index from Riak search.
## Request
diff --git a/content/riak/kv/2.0.6/developing/api/protocol-buffers/yz-schema-get.md b/content/riak/kv/2.0.6/developing/api/protocol-buffers/yz-schema-get.md
index d28f487af5..adeb36d5e6 100644
--- a/content/riak/kv/2.0.6/developing/api/protocol-buffers/yz-schema-get.md
+++ b/content/riak/kv/2.0.6/developing/api/protocol-buffers/yz-schema-get.md
@@ -15,7 +15,7 @@ aliases:
- /riak/kv/2.0.6/dev/references/protocol-buffers/yz-schema-get
---
-Fetch a [search schema](/riak/kv/2.0.6/developing/usage/search-schemas) from Riak Search.
+Fetch a [search schema](/riak/kv/2.0.6/developing/usage/search-schemas) from Riak search.
## Request
diff --git a/content/riak/kv/2.0.6/developing/app-guide.md b/content/riak/kv/2.0.6/developing/app-guide.md
index d832add64f..5bb5234af5 100644
--- a/content/riak/kv/2.0.6/developing/app-guide.md
+++ b/content/riak/kv/2.0.6/developing/app-guide.md
@@ -147,22 +147,22 @@ well as relevant links to Basho documentation.
## Search
-Riak Search provides you with [Apache
+Riak search provides you with [Apache
Solr](http://lucene.apache.org/solr/)-powered full-text indexing and
querying on top of the scalability, fault tolerance, and operational
-simplicity of Riak. Our motto for Riak Search: **Write it like Riak.
+simplicity of Riak. Our motto for Riak search: **Write it like Riak.
Query it like Solr**. That is, you can store objects in Riak [like normal][usage create objects] and run full-text queries on those objects later on
using the Solr API.
-* [Using Search][usage search] --- Getting started with Riak Search
+* [Using Search][usage search] --- Getting started with Riak search
* [Search Details][use ref search] --- A detailed overview of the concepts and design
- consideration behind Riak Search
+ considerations behind Riak search
* [Search Schema][usage search schema] --- How to create custom schemas for extracting data
- from Riak Search
+ from Riak search
### When to Use Search
-* **When you need a rich querying API** --- Riak Search gives you access
+* **When you need a rich querying API** --- Riak search gives you access
to the entirety of [Solr](http://lucene.apache.org/solr/)'s extremely
broad API, which enables you to query on the basis of wildcards,
strings, booleans, geolocation, ranges, language-specific fulltext,
@@ -214,7 +214,7 @@ own.
> **Note**:
>
-> Riak Data Types can be used in conjunction with Riak Search,
+> Riak Data Types can be used in conjunction with Riak search,
meaning that the data stored in counters, sets, and maps can be indexed
and searched just like any other data in Riak. Documentation on Data
Types and Search is coming soon.
@@ -278,13 +278,13 @@ or you can write and run your own MapReduce jobs in
### When Not to Use MapReduce
* **When another Riak feature will do** --- Before even considering
- using MapReduce, you should thoroughly investigate [Riak Search][usage search] or [secondary indexes][usage 2i] as possible
+ using MapReduce, you should thoroughly investigate [Riak search][usage search] or [secondary indexes][usage 2i] as possible
solutions to your needs.
In general, you should not think of MapReduce as, for example, Hadoop
within Riak. While it can be useful for certain types of
non-primary-key-based queries, it is neither a "Big Data" processing
-tool nor an indexing mechanism nor a replacement for [Riak Search][usage search]. If you do need a tool like Hadoop or Apache Spark, you should
+tool nor an indexing mechanism nor a replacement for [Riak search][usage search]. If you do need a tool like Hadoop or Apache Spark, you should
consider using Riak in conjunction with a more suitable data processing
tool.
@@ -295,7 +295,7 @@ following problem: how do I know which keys I should look for? Secondary
indexes (2i) provide a solution to this problem, enabling you to tag
objects with either binary or integer metadata and then query Riak for
all of the keys that share specific tags. 2i is especially useful if
-you're storing binary data that is opaque to features like [Riak Search][usage search].
+you're storing binary data that is opaque to features like [Riak search][usage search].
* [Using Secondary Indexes][usage 2i] --- A general guide to using 2i, along
with code samples and information on 2i features like pagination,
@@ -323,7 +323,7 @@ you're storing binary data that is opaque to features like [Riak Search][usage s
One thing to always bear in mind is that Riak enables you to mix and
match a wide variety of approaches in a single cluster. You can use
basic CRUD operations for some of your data, index some of your data to
-be queried by Riak Search, use Riak Data Types for another subset, etc.
+be queried by Riak search, use Riak Data Types for another subset, etc.
You are always free to use a wide array of Riak features---or you can
use none at all and stick to key/value operations.
diff --git a/content/riak/kv/2.0.6/developing/app-guide/advanced-mapreduce.md b/content/riak/kv/2.0.6/developing/app-guide/advanced-mapreduce.md
index 1d65b63c6e..9f0beea067 100644
--- a/content/riak/kv/2.0.6/developing/app-guide/advanced-mapreduce.md
+++ b/content/riak/kv/2.0.6/developing/app-guide/advanced-mapreduce.md
@@ -74,7 +74,7 @@ MapReduce should generally be treated as a fallback rather than a
standard part of an application. There are often ways to model data
such that dynamic queries become single key retrievals, which are
dramatically faster and more reliable in Riak, and tools such as Riak
-Search and 2i are simpler to use and may place less strain on a
+search and 2i are simpler to use and may place less strain on a
cluster.
### R=1
diff --git a/content/riak/kv/2.0.6/developing/app-guide/strong-consistency.md b/content/riak/kv/2.0.6/developing/app-guide/strong-consistency.md
index 4669740b5f..657d123bc3 100644
--- a/content/riak/kv/2.0.6/developing/app-guide/strong-consistency.md
+++ b/content/riak/kv/2.0.6/developing/app-guide/strong-consistency.md
@@ -37,7 +37,7 @@ aliases:
> **Please Note:**
>
-> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
In versions 2.0 and later, Riak allows you to create buckets that
provide [strong consistency][use ref strong consistency] guarantees for the data stored within
diff --git a/content/riak/kv/2.0.6/developing/data-modeling.md b/content/riak/kv/2.0.6/developing/data-modeling.md
index 9d56a651dd..458da1d02d 100644
--- a/content/riak/kv/2.0.6/developing/data-modeling.md
+++ b/content/riak/kv/2.0.6/developing/data-modeling.md
@@ -140,7 +140,7 @@ For storing log data from different systems, you could use unique
buckets for each system (e.g. `system1_log_data`, `system2_log_data`,
etc.) and write associated logs to the corresponding buckets. To
analyze that data, you could use Riak's MapReduce system for aggregation
-tasks, such as summing the counts of records for a date or Riak Search
+tasks, such as summing the counts of records for a date or Riak search
for more robust, text-based queries.
### Log Data Complex Case
@@ -224,7 +224,7 @@ For simple retrieval of a specific account, a user ID (plus perhaps a
secondary index on a username or email) is enough. If you foresee the
need to make queries on additional user attributes (e.g. creation time,
user type, or region), plan ahead and either set up additional secondary
-indexes or consider using [Riak Search](/riak/kv/2.0.6/developing/usage/search/) to index the JSON
+indexes or consider using [Riak search](/riak/kv/2.0.6/developing/usage/search/) to index the JSON
contents of the user account.
### User Accounts Community Examples
@@ -310,7 +310,7 @@ In Riak, you can store content of any kind, from HTML files to plain
text to JSON or XML or another document type entirely. Keep in mind that
data in Riak is opaque, with the exception of [Riak Data Types](/riak/kv/2.0.6/developing/data-types),
and so Riak won't "know" about the object unless it is indexed
-[using Riak Search](/riak/kv/2.0.6/developing/usage/search/) or [using secondary indexes](/riak/kv/2.0.6/developing/usage/secondary-indexes/).
+[using Riak search](/riak/kv/2.0.6/developing/usage/search/) or [using secondary indexes](/riak/kv/2.0.6/developing/usage/secondary-indexes/).
### Articles et al Complex Case
@@ -329,7 +329,7 @@ with comments would require your application to call from the posts
and comments buckets to assemble the view.
Other possible cases may involve performing operations on content beyond
-key/value pairs. [Riak Search](/riak/kv/2.0.6/developing/usage/search/) is recommended for use cases
+key/value pairs. [Riak search](/riak/kv/2.0.6/developing/usage/search/) is recommended for use cases
involving full-text search. For lighter-weight querying,
[using secondary indexes](/riak/kv/2.0.6/developing/usage/secondary-indexes/) \(2i) enables you to add metadata to objects to
either query for exact matches or to perform range queries. 2i also
diff --git a/content/riak/kv/2.0.6/developing/data-types.md b/content/riak/kv/2.0.6/developing/data-types.md
index 143ae032ee..4aec7e51b6 100644
--- a/content/riak/kv/2.0.6/developing/data-types.md
+++ b/content/riak/kv/2.0.6/developing/data-types.md
@@ -259,7 +259,7 @@ All the examples use the bucket type names from above (`counters`, `sets`, and `
Riak data types can be searched like any other object, but with the
added benefit that your data type is indexed as a different type by Solr,
-the search platform behind Riak Search.
+the search platform behind Riak search.
In our Search documentation we offer a [full tutorial](../usage/searching-data-types) as well as a variety of [examples](../usage/search/#data-types-and-search-examples), including code
samples from each of our official client libraries.
diff --git a/content/riak/kv/2.0.6/developing/usage.md b/content/riak/kv/2.0.6/developing/usage.md
index a8ad333dbd..772eba5745 100644
--- a/content/riak/kv/2.0.6/developing/usage.md
+++ b/content/riak/kv/2.0.6/developing/usage.md
@@ -107,7 +107,7 @@ Tutorial on using Riak KV as a document store.
#### [Custom Extractors](./custom-extractors)
-Details on creating and registering custom extractors with Riak Search.
+Details on creating and registering custom extractors with Riak search.
[Learn More >>](./custom-extractors)
diff --git a/content/riak/kv/2.0.6/developing/usage/custom-extractors.md b/content/riak/kv/2.0.6/developing/usage/custom-extractors.md
index e61e531e19..96c678bd3e 100644
--- a/content/riak/kv/2.0.6/developing/usage/custom-extractors.md
+++ b/content/riak/kv/2.0.6/developing/usage/custom-extractors.md
@@ -15,8 +15,8 @@ aliases:
- /riak/kv/2.0.6/dev/search/custom-extractors
---
-Solr, and by extension Riak Search, has default extractors for a wide
-variety of data types, including JSON, XML, and plaintext. Riak Search
+Solr, and by extension Riak search, has default extractors for a wide
+variety of data types, including JSON, XML, and plaintext. Riak search
ships with the following extractors:
Content Type | Erlang Module
@@ -30,7 +30,7 @@ No specified type | `yz_noop_extractor`
There are also built-in extractors for [Riak Data Types](/riak/kv/2.0.6/developing/usage/searching-data-types).
If you're working with a data format that does not have a default Solr
-extractor, you can create your own and register it with Riak Search.
+extractor, you can create your own and register it with Riak search.
We'll show you how to do so by way of example.
## The Extractor Interface
@@ -195,7 +195,7 @@ extractor has been successfully registered.
## Verifying Our Custom Extractor
-Now that Riak Search knows how to decode and extract HTTP header packet
+Now that Riak search knows how to decode and extract HTTP header packet
data, let's store some in Riak and then query it. We'll put the example
packet data from above in a `google_packet.bin` file. Then, we'll `PUT`
that binary to Riak's `/search/extract` endpoint:
diff --git a/content/riak/kv/2.0.6/developing/usage/document-store.md b/content/riak/kv/2.0.6/developing/usage/document-store.md
index 55ac1a3943..688bed877d 100644
--- a/content/riak/kv/2.0.6/developing/usage/document-store.md
+++ b/content/riak/kv/2.0.6/developing/usage/document-store.md
@@ -16,18 +16,18 @@ aliases:
---
Although Riak wasn't explicitly created as a document store, two
-features recently added to Riak---[Riak Search](/riak/kv/2.0.6/developing/usage/search/) and [Riak Data Types](/riak/kv/2.0.6/developing/data-types/)---make it possible to use Riak as a
+features recently added to Riak---[Riak search](/riak/kv/2.0.6/developing/usage/search/) and [Riak Data Types](/riak/kv/2.0.6/developing/data-types/)---make it possible to use Riak as a
highly scalable document store with rich querying capabilities. In this
tutorial, we'll build a basic implementation of a document store using
[Riak maps](/riak/kv/2.0.6/developing/data-types/maps).
## Basic Approach
-Riak Search enables you to implement a document store in Riak in a
+Riak search enables you to implement a document store in Riak in a
variety of ways. You could, for example, store and query JSON objects or
XML and then retrieve them later via Solr queries. In this tutorial,
however, we will store data in [Riak maps](/riak/kv/2.0.6/developing/data-types/maps),
-index that data using Riak Search, and then run Solr queries against
+index that data using Riak search, and then run Solr queries against
those stored objects.
You can think of these Search indexes as **collections**. Each indexed
@@ -65,7 +65,7 @@ Date posted | Register | Datetime
Whether the post is currently in draft form | Flag | Boolean
Before we start actually creating and storing blog posts, let's set up
-Riak Search with an appropriate index and schema.
+Riak search with an appropriate index and schema.
## Creating a Schema and Index
@@ -209,7 +209,7 @@ curl -XPUT $RIAK_HOST/search/index/blog_posts \
Collections are not a concept that is native to Riak but we can easily
mimic collections by thinking of a bucket type as a collection. When we
-associate a bucket type with a Riak Search index, all of the objects
+associate a bucket type with a Riak search index, all of the objects
stored in any bucket of that bucket type will be queryable on the basis
of that one index. For this tutorial, we'll create a bucket type called
`cms` and think of that as a collection. We could also restrict our
diff --git a/content/riak/kv/2.0.6/developing/usage/search-schemas.md b/content/riak/kv/2.0.6/developing/usage/search-schemas.md
index 002b88b7d2..4f5c6431d0 100644
--- a/content/riak/kv/2.0.6/developing/usage/search-schemas.md
+++ b/content/riak/kv/2.0.6/developing/usage/search-schemas.md
@@ -19,21 +19,21 @@ aliases:
> **Note on Search 2.0 vs. Legacy Search**
>
-> This document refers to the new Riak Search 2.0 with
+> This document refers to the new Riak search 2.0 with
[Solr](http://lucene.apache.org/solr/) integration (codenamed
-Yokozuna). For information about the deprecated Riak Search, visit [the old Using Riak Search docs](http://docs.basho.com/riak/1.4.10/dev/using/search/).
+Yokozuna). For information about the deprecated Riak search, visit [the old Using Riak search docs](http://docs.basho.com/riak/1.4.10/dev/using/search/).
-Riak Search is built for ease of use, allowing you to write values into
-Riak and query for values using Solr. Riak Search does a lot of work
+Riak search is built for ease of use, allowing you to write values into
+Riak and query for values using Solr. Riak search does a lot of work
under the hood to convert your values---plain text, JSON, XML, [Riak Data Types](/riak/kv/2.0.6/developing/data-types/), and [more](/riak/kv/2.0.6/developing/usage/custom-extractors)---into something that can be indexed and searched later.
Nonetheless, you must still instruct Riak/Solr how to index a value. Are
you providing an array of strings? An integer? A date? Is your text in
-English or Russian? You can provide such instructions to Riak Search by
+English or Russian? You can provide such instructions to Riak search by
defining a Solr **schema**.
## The Default Schema
-Riak Search comes bundled with a default schema named `_yz_default`. The
+Riak search comes bundled with a default schema named `_yz_default`. The
default schema covers a wide range of possible field types. You can find
the default schema [on GitHub](https://raw.github.com/basho/yokozuna/develop/priv/default_schema.xml).
While using the default schema provides an easy path to starting
@@ -47,7 +47,7 @@ amounts of disk space, so pay special attention to those indexes.
We'll show you how you can create custom schemas by way of example.
Let's say that you have already created a schema named `cartoons` in a
file named `cartoons.xml`. This would register the custom schema in Riak
-Search:
+search:
```java
import org.apache.commons.io.FileUtils;
@@ -123,11 +123,11 @@ curl -XPUT http://localhost:8098/search/schema/cartoons \
The first step in creating a custom schema is to define exactly what
fields you must index. Part of that step is understanding how Riak
-Search extractors function.
+search extractors function.
### Extractors
-In Riak Search, extractors are modules responsible for pulling out a
+In Riak search, extractors are modules responsible for pulling out a
list of fields and values from a Riak object. How this is achieved
depends on the object's content type, but the two common cases are JSON
and XML, which operate similarly. Our examples here will use JSON.
@@ -175,21 +175,21 @@ Solr schemas can be very complex, containing many types and analyzers.
Refer to the [Solr 4.7 reference
guide](http://archive.apache.org/dist/lucene/solr/ref-guide/apache-solr-ref-guide-4.7.pdf)
for a complete list. You should be aware, however, that there are a few
-fields that are required by Riak Search in order to properly distribute
+fields that are required by Riak search in order to properly distribute
an object across a [cluster][concept clusters]. These fields are all prefixed
with `_yz`, which stands for
[Yokozuna](https://github.com/basho/yokozuna), the original code name
-for Riak Search.
+for Riak search.
Below is a bare minimum skeleton Solr Schema. It won't do much for you
-other than allow Riak Search to properly manage your stored objects.
+other than allow Riak search to properly manage your stored objects.
```xml
-
+
@@ -210,14 +210,14 @@ other than allow Riak Search to properly manage your stored objects.
```
-If you're missing any of the above fields, Riak Search will reject your
+If you're missing any of the above fields, Riak search will reject your
custom schema. The value for `<uniqueKey>` _must_ be `_yz_id`.
In the table below, you'll find a description of the various required
fields. You'll rarely need to use any fields other than `_yz_rt` (bucket
type), `_yz_rb` (bucket) and `_yz_rk` (Riak key). On occasion, `_yz_err`
can be helpful if you suspect that your extractors are failing.
-Malformed JSON or XML will cause Riak Search to index a key and set
+Malformed JSON or XML will cause Riak search to index a key and set
`_yz_err` to 1, allowing you to reindex with proper values later.
Field | Name | Description
@@ -262,7 +262,7 @@ field, you also must set `multiValued` to `true`.
-
+
diff --git a/content/riak/kv/2.0.6/developing/usage/search.md b/content/riak/kv/2.0.6/developing/usage/search.md
index 693e7acad3..b000d2b48a 100644
--- a/content/riak/kv/2.0.6/developing/usage/search.md
+++ b/content/riak/kv/2.0.6/developing/usage/search.md
@@ -19,9 +19,9 @@ aliases:
## Setup
-Riak Search 2.0 is an integration of Solr (for indexing and querying)
+Riak search 2.0 is an integration of Solr (for indexing and querying)
and Riak (for storage and distribution). There are a few points of
-interest that a user of Riak Search will have to keep in mind in order
+interest that a user of Riak search will have to keep in mind in order
to properly store and later query for values.
1. **Schemas** explain to Solr how to index fields
@@ -29,7 +29,7 @@ to properly store and later query for values.
3. **Bucket-index association** signals to Riak *when* to index values
(this also includes bucket type-index association)
-Riak Search must first be configured with a Solr schema so that Solr
+Riak search must first be configured with a Solr schema so that Solr
knows how to index value fields. If you don't define one, you're
provided with a default schema named `_yz_default`, which can be found
[on
@@ -38,7 +38,7 @@ GitHub](https://raw.githubusercontent.com/basho/yokozuna/develop/priv/default_sc
The examples in this document will presume the default. You can read
more about creating custom schemas in [Search Schema][usage search schema], which you'll likely want to use in a production environment.
-Next, you must create a named Solr index through Riak Search. This index
+Next, you must create a named Solr index through Riak search. This index
represents a collection of similar data that you connect with to perform
queries. When creating an index, you can optionally provide a schema. If
you do not, the default schema will be used. Here we'll `curl` create an
@@ -246,7 +246,7 @@ More information can be found in the [Solr
documentation](http://wiki.apache.org/solr/SolrPerformanceFactors).
With a Solr schema, index, and association in place (and possibly a
-security setup as well), we're ready to start using Riak Search. First,
+security setup as well), we're ready to start using Riak search. First,
populate the `cat` bucket with values, in this case information about
four cats: Liono, Cheetara, Snarf, and Panthro.
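The writes themselves are plain key/value PUTs. As a sketch, the request URL and JSON body might be assembled like this (host, bucket names, and the `_s`/`_i`/`_b` field suffixes here are assumptions based on the default schema's dynamic-field conventions):

```python
# Sketch of the key/value writes described above. The URL shape
# mirrors the curl examples ($RIAK_HOST/types/animals/buckets/cats/
# keys/<key>); field names and values are illustrative.
import json

RIAK_HOST = "http://localhost:8098"

def put_url(key):
    """Build the HTTP PUT path for a key in the cats bucket."""
    return f"{RIAK_HOST}/types/animals/buckets/cats/keys/{key}"

liono = {"name_s": "Liono", "age_i": 30, "leader_b": True}
body = json.dumps(liono)  # sent with Content-Type: application/json

print(put_url("liono"))
```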
@@ -495,12 +495,12 @@ curl -XPUT $RIAK_HOST/types/animals/buckets/cats/keys/panthro \
```
If you've used Riak before, you may have noticed that this is no
-different from storing values without Riak Search. That's because we
-designed Riak Search with the following design goal in mind:
+different from storing values without Riak search. That's because we
+designed Riak search with the following design goal in mind:
#### Write it like Riak, query it like Solr
-But how does Riak Search know how to index values, given that you can
+But how does Riak search know how to index values, given that you can
store opaque values in Riak? For that, we employ extractors.
## Extractors
@@ -510,7 +510,7 @@ content type and convert it into a list of fields that can be indexed by
Solr. This is done transparently and automatically as part of the
indexing process. You can even create your own [custom extractors](/riak/kv/2.0.6/developing/usage/custom-extractors).
-Our current example uses the JSON extractor, but Riak Search also
+Our current example uses the JSON extractor, but Riak search also
extracts indexable fields from the following content types:
* JSON (`application/json`)
@@ -560,7 +560,7 @@ one of the default types. A full tutorial can be found in [Custom Search Extract
### Automatic Fields
-When a Riak object is indexed, Riak Search automatically inserts a few
+When a Riak object is indexed, Riak search automatically inserts a few
extra fields as well. These are necessary for a variety of technical
reasons, and for the most part you don't need to think about them.
However, there are a few fields which you may find useful:
@@ -1248,7 +1248,7 @@ curl "$RIAK_HOST/search/query/famous?wt=json&q=*:*&start=$START&rows=$ROWS_PER_P
### Pagination Warning
-Distributed pagination in Riak Search cannot be used reliably when
+Distributed pagination in Riak search cannot be used reliably when
sorting on fields that can have different values per replica of the same
object, namely `score` and `_yz_id`. In the case of sorting by these
fields, you may receive redundant objects. In the case of `score`, the
@@ -1272,14 +1272,14 @@ fix this shortcoming in a future version of Riak.
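The `$START`/`$ROWS_PER_PAGE` arithmetic behind the paginated query above reduces to a single calculation. `page_params` here is a hypothetical helper, not part of any Riak client library:

```python
# Sketch of Solr-style pagination parameters (start/rows).
def page_params(page, rows_per_page):
    """Return (start, rows) query parameters for a 1-indexed page."""
    if page < 1:
        raise ValueError("pages are 1-indexed")
    return ((page - 1) * rows_per_page, rows_per_page)

# Page 3 at 10 rows per page skips the first 20 results.
print(page_params(3, 10))
```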
### MapReduce
-Riak Search allows for piping search results as inputs for
+Riak search allows for piping search results as inputs for
[MapReduce](/riak/kv/2.0.6/developing/usage/mapreduce/) jobs. This is a useful cross-section for
performing post-calculations of results or aggregations of ad-hoc
-queries. The Riak Search MapReduce integration works similarly to
+queries. The Riak search MapReduce integration works similarly to
regular MapReduce, with the notable exception that your input is not a
bucket, but rather index and query arguments to the `yokozuna` module
and `mapred_search` function (an Erlang `module:function` pair that adds
-the Riak Search hook to MapReduce).
+the Riak search hook to MapReduce).
```json
{
diff --git a/content/riak/kv/2.0.6/developing/usage/secondary-indexes.md b/content/riak/kv/2.0.6/developing/usage/secondary-indexes.md
index 983c1e0091..1e9a9a6a89 100644
--- a/content/riak/kv/2.0.6/developing/usage/secondary-indexes.md
+++ b/content/riak/kv/2.0.6/developing/usage/secondary-indexes.md
@@ -19,12 +19,12 @@ aliases:
[plan backend memory]: /riak/kv/2.0.6/setup/planning/backend/memory
[use ref strong consistency]: /riak/2.1.3/using/reference/strong-consistency
-> **Note: Riak Search preferred for querying**
+> **Note: Riak search preferred for querying**
>
> If you're interested in non-primary-key-based querying in Riak, i.e. if
you're looking to go beyond straightforward K/V operations, we now
-recommend [Riak Search](/riak/kv/2.0.6/developing/usage/search/) rather than secondary indexes for
-a variety of reasons. Most importantly, Riak Search has a far more
+recommend [Riak search](/riak/kv/2.0.6/developing/usage/search/) rather than secondary indexes for
+a variety of reasons. Most importantly, Riak search has a far more
capacious querying API and can be used with all of Riak's storage
backends.
@@ -37,7 +37,7 @@ Secondary indexes can be either a binary or string, such as
`sensor_1_data` or `admin_user` or `click_event`, or an integer, such as
`99` or `141121`.
-[Riak Search](/riak/kv/2.0.6/developing/usage/search/) serves analogous purposes but is quite
+[Riak search](/riak/kv/2.0.6/developing/usage/search/) serves analogous purposes but is quite
different because it parses key/value data itself and builds indexes on
the basis of Solr schemas.
@@ -75,7 +75,7 @@ you to discover them later. Indexing enables you to tag those objects
and find all objects with the same tag in a specified bucket later on.
2i is thus recommended when your use case requires an easy-to-use search
-mechanism that does not require a schema (as does [Riak Search](/riak/kv/2.0.6/using/reference/search/#schemas)) and a basic query interface, i.e. an interface that
+mechanism that does not require a schema (as does [Riak search](/riak/kv/2.0.6/using/reference/search/#schemas)) and a basic query interface, i.e. an interface that
enables an application to tell Riak things like "fetch all objects
tagged with the string `Milwaukee_Bucks`" or "fetch all objects tagged
with numbers between 1500 and 1509."
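Those two query shapes map onto Riak's HTTP 2i endpoints roughly as follows. This is a sketch; the host, bucket, and index names are placeholders:

```python
# Sketch of the HTTP paths behind the 2i queries described above:
# an exact-match lookup and an integer range query.
RIAK_HOST = "http://localhost:8098"

def exact_match(bucket, index, value):
    """Exact-match 2i query path."""
    return f"{RIAK_HOST}/buckets/{bucket}/index/{index}/{value}"

def int_range(bucket, index, lo, hi):
    """Integer range 2i query path."""
    return f"{RIAK_HOST}/buckets/{bucket}/index/{index}/{lo}/{hi}"

print(exact_match("players", "team_bin", "Milwaukee_Bucks"))
print(int_range("scores", "points_int", 1500, 1509))
```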
@@ -89,7 +89,7 @@ piggybacks off of read-repair.
* If your ring size exceeds 512 partitions, 2i can cause performance
issues in large clusters.
* When you need more than the exact match and range searches that 2i
- supports. If that's the case, we recommend checking out [Riak Search](/riak/kv/2.0.6/developing/usage/search/).
+ supports. If that's the case, we recommend checking out [Riak search](/riak/kv/2.0.6/developing/usage/search/).
* When you want to use composite queries. A query like
`last_name=zezeski AND state=MD` would have to be split into two
queries and the results merged (or it would need to involve
diff --git a/content/riak/kv/2.0.6/introduction.md b/content/riak/kv/2.0.6/introduction.md
index b3d9795c25..05288b54da 100644
--- a/content/riak/kv/2.0.6/introduction.md
+++ b/content/riak/kv/2.0.6/introduction.md
@@ -30,10 +30,10 @@ that all of the new features listed below are optional:
* **Riak Data Types** --- Riak's new CRDT-based [Data Types](/riak/kv/2.0.6/developing/data-types) can
simplify modeling data in Riak, but are only used in buckets
explicitly configured to use them.
-* **Strong Consistency, Riak Security, and the New Riak Search** ---
+* **Strong Consistency, Riak Security, and the New Riak search** ---
These are subsystems in Riak that must be explicitly turned on to
work. If not turned on, they will have no impact on performance.
- Furthermore, the older Riak Search will continue to be included with
+ Furthermore, the older Riak search will continue to be included with
Riak.
* **Security** --- [Authentication and authorization](/riak/kv/2.0.6/using/security/basics) can be enabled
or disabled at any time.
@@ -98,21 +98,21 @@ Brown](https://github.com/russelldb).
## Riak Search 2.0 (codename: Yokozuna)
-Riak Search 2.0 is a complete, top-to-bottom replacement for Riak
-Search, integrating Riak with [Apache Solr](https://lucene.apache.org/solr/)'s full-text search capabilities and supporting Solr's client query APIs.
+Riak search 2.0 is a complete, top-to-bottom replacement for Riak
+search, integrating Riak with [Apache Solr](https://lucene.apache.org/solr/)'s full-text search capabilities and supporting Solr's client query APIs.
#### Relevant Docs
* [Using Search](/riak/kv/2.0.6/developing/usage/search) provides an overview of how to use the new
- Riak Search.
+ Riak search.
* [Search Schema](/riak/kv/2.0.6/developing/usage/search-schemas) shows you how to create and manage custom search
schemas.
* [Search Details](/riak/kv/2.0.6/using/reference/search) provides an in-depth look at the design
- considerations that went into the new Riak Search.
+ considerations that went into the new Riak search.
#### Video
-[Riak Search 2.0](https://www.youtube.com/watch?v=-c1eynVLNMo) by Basho
+[Riak search 2.0](https://www.youtube.com/watch?v=-c1eynVLNMo) by Basho
engineer and documentarian [Eric Redmond](https://github.com/coderoshi).
## Strong Consistency
@@ -145,7 +145,7 @@ check out [part 2](https://www.youtube.com/watch?v=gXJxbhca5Xg).
Riak 2.0 enables you to manage:
* **Authorization** to perform specific tasks, from GETs and PUTs to
-running MapReduce jobs to administering Riak Search.
+running MapReduce jobs to administering Riak search.
* **Authentication** of Riak clients seeking access to Riak.
@@ -317,7 +317,7 @@ another. Incompatibilities are marked with a
-**†** The data indexed by Riak Search can be
+**†** The data indexed by Riak search can be
stored in a strongly consistent fashion, but indexes themselves are
eventually consistent
**‡** If secondary indexes are attached to an
diff --git a/content/riak/kv/2.0.6/learn/concepts/strong-consistency.md b/content/riak/kv/2.0.6/learn/concepts/strong-consistency.md
index 249c8ace22..fcff8bb512 100644
--- a/content/riak/kv/2.0.6/learn/concepts/strong-consistency.md
+++ b/content/riak/kv/2.0.6/learn/concepts/strong-consistency.md
@@ -20,7 +20,7 @@ aliases:
Please Note:
-Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
Riak was originally designed as an [eventually consistent](/riak/kv/2.0.6/learn/concepts/eventual-consistency) system, fundamentally geared toward providing partition
diff --git a/content/riak/kv/2.0.6/learn/glossary.md b/content/riak/kv/2.0.6/learn/glossary.md
index 127b521db2..f8441cd7e7 100644
--- a/content/riak/kv/2.0.6/learn/glossary.md
+++ b/content/riak/kv/2.0.6/learn/glossary.md
@@ -275,7 +275,7 @@ best described as "UNIX pipes for Riak."
## Riak Search
-Riak Search is a distributed, scalable, failure-tolerant, realtime,
+Riak search is a distributed, scalable, failure-tolerant, realtime,
full-text search engine integrating [Apache
Solr](https://lucene.apache.org/solr/) with Riak KV.
diff --git a/content/riak/kv/2.0.6/learn/use-cases.md b/content/riak/kv/2.0.6/learn/use-cases.md
index 76877cf6d1..18d128acf9 100644
--- a/content/riak/kv/2.0.6/learn/use-cases.md
+++ b/content/riak/kv/2.0.6/learn/use-cases.md
@@ -153,7 +153,7 @@ For storing log data from different systems, you could use unique
buckets for each system (e.g. `system1_log_data`, `system2_log_data`,
etc.) and write associated logs to the corresponding buckets. To
analyze that data, you could use Riak's MapReduce system for aggregation
-tasks, such as summing the counts of records for a date or Riak Search
+tasks, such as summing the counts of records for a date or Riak search
for more robust, text-based queries.
### Log Data Complex Case
@@ -237,7 +237,7 @@ For simple retrieval of a specific account, a user ID (plus perhaps a
secondary index on a username or email) is enough. If you foresee the
need to make queries on additional user attributes (e.g. creation time,
user type, or region), plan ahead and either set up additional secondary
-indexes or consider using [Riak Search][usage search] to index the JSON
+indexes or consider using [Riak search][usage search] to index the JSON
contents of the user account.
### User Accounts Community Examples
@@ -323,7 +323,7 @@ In Riak, you can store content of any kind, from HTML files to plain
text to JSON or XML or another document type entirely. Keep in mind that
data in Riak is opaque, with the exception of [Riak Data Types](/riak/kv/2.0.6/developing/data-types),
and so Riak won't "know" about the object unless it is indexed
-[using Riak Search][usage search] or [using secondary indexes][usage secondary-indexes].
+[using Riak search][usage search] or [using secondary indexes][usage secondary-indexes].
### Articles et al Complex Case
@@ -342,7 +342,7 @@ with comments would require your application to call from the posts
and comments buckets to assemble the view.
Other possible cases may involve performing operations on content beyond
-key/value pairs. [Riak Search][usage search] is recommended for use cases
+key/value pairs. [Riak search][usage search] is recommended for use cases
involving full-text search. For lighter-weight querying,
[using secondary indexes][usage secondary-indexes] \(2i) enables you to add metadata to objects to
either query for exact matches or to perform range queries. 2i also
diff --git a/content/riak/kv/2.0.6/release-notes.md b/content/riak/kv/2.0.6/release-notes.md
index c168872e8e..fe11f7b949 100644
--- a/content/riak/kv/2.0.6/release-notes.md
+++ b/content/riak/kv/2.0.6/release-notes.md
@@ -20,10 +20,10 @@ This is a bugfix release addressing minor issues and making some improvements fo
## Bugs Fixed
-* [[Issue #481](https://github.com/basho/yokozuna/issues/481)/[PR #486](https://github.com/basho/yokozuna/pull/486)] - Riak Search was losing entries when YZ AAE trees expired. To address this, we fixed how we dealt with `default` bucket types when building yokozuna hashtrees.
+* [[Issue #481](https://github.com/basho/yokozuna/issues/481)/[PR #486](https://github.com/basho/yokozuna/pull/486)] - Riak search was losing entries when YZ AAE trees expired. To address this, we fixed how we dealt with `default` bucket types when building yokozuna hashtrees.
* [[Issue #723](https://github.com/basho/riak/issues/723)/[PR #482](https://github.com/basho/yokozuna/pull/482) & [PR #773](https://github.com/basho/riak_test/pull/773)] - Search did not return consistent results when indexing a `bucket-type` with `sets` in a `map`. Now, a check for `map` embedded fields and counts is run, and the `default_schema` has been updated to return sets in query responses by storing them.
* [[Issue #70](https://github.com/basho/riak_ensemble/issues/70)/[PR #75](https://github.com/basho/riak_ensemble/pull/75)] - Some clusters were unable to start ensembles due to a block on ensemble peers within the leveldb synctree. Now leveldb synctree lock behavior is limited to the local node.
-* [[Issue #450](https://github.com/basho/yokozuna/issues/450)/[PR #459](https://github.com/basho/yokozuna/pull/459)] - Riak Search AAE threw errors when keys, buckets, or bucket types contained spaces.
+* [[Issue #450](https://github.com/basho/yokozuna/issues/450)/[PR #459](https://github.com/basho/yokozuna/pull/459)] - Riak search AAE threw errors when keys, buckets, or bucket types contained spaces.
* [[Issue #469](https://github.com/basho/yokozuna/pull/469)/[PR #470](https://github.com/basho/yokozuna/pull/470)] - Fix YZ stats name typo from 'throughtput' to 'throughput'.
* [[Issue #437](https://github.com/basho/yokozuna/issues/437)/[PR #458](https://github.com/basho/yokozuna/pull/458)] - `yz_events:handle_info` called with bad arguments.
* [[Issue #402](https://github.com/basho/yokozuna/pull/402)/[PR #463](https://github.com/basho/yokozuna/pull/463) & [PR #476](https://github.com/basho/yokozuna/pull/476) & [PR #515](https://github.com/basho/yokozuna/pull/515) & [PR #509](https://github.com/basho/yokozuna/pull/509)] - When creating a new search index via HTTP, HTTP responded before the index was available. Now you can change timeout via `index_put_timeout_ms` in the yokozuna section of advanced config.
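If the default proves too short for your environment, that timeout can be raised in the yokozuna section of advanced config along these lines (a hypothetical fragment; the `60000` ms value is only an example, not a recommendation):

```appconfig
%% Raise the search index PUT timeout (value in milliseconds).
{yokozuna, [
    {index_put_timeout_ms, 60000}
]}
```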
diff --git a/content/riak/kv/2.0.6/setup/installing/source/jvm.md b/content/riak/kv/2.0.6/setup/installing/source/jvm.md
index 6115026ef9..5d1be1a974 100644
--- a/content/riak/kv/2.0.6/setup/installing/source/jvm.md
+++ b/content/riak/kv/2.0.6/setup/installing/source/jvm.md
@@ -21,10 +21,10 @@ aliases:
[usage search]: /riak/kv/2.0.6/developing/usage/search
-If you are using [Riak Search 2.0][usage search], codename Yokozuna,
+If you are using [Riak search 2.0][usage search], codename Yokozuna,
you will need to install **Java 1.6 or later** to run [Apache
Solr](https://lucene.apache.org/solr/), the search platform that powers
-Riak Search.
+Riak search.
We recommend using Oracle's [JDK
7u25](http://www.oracle.com/technetwork/java/javase/7u25-relnotes-1955741.html).
@@ -35,7 +35,7 @@ page](http://www.oracle.com/technetwork/java/javase/documentation/index.html).
## Installing Solr on OS X
-If you're using Riak Search on Mac OS X, you may see the following
+If you're using Riak search on Mac OS X, you may see the following
error:
```java
diff --git a/content/riak/kv/2.0.6/setup/planning/backend/bitcask.md b/content/riak/kv/2.0.6/setup/planning/backend/bitcask.md
index 48c79b4377..5aeb7d0259 100644
--- a/content/riak/kv/2.0.6/setup/planning/backend/bitcask.md
+++ b/content/riak/kv/2.0.6/setup/planning/backend/bitcask.md
@@ -751,7 +751,7 @@ bitcask.expiry.grace_time = 1h
#### Automatic expiration and Riak Search
-If you are using [Riak Search][usage search] in conjunction with
+If you are using [Riak search][usage search] in conjunction with
Bitcask, please be aware that automatic expiry does not apply to [Search Indexes](../../../../developing/usage/search). If objects are indexed using Search,
those objects can be expired by Bitcask yet still registered in Search
indexes, which means that Search queries may return keys that no longer
diff --git a/content/riak/kv/2.0.6/setup/upgrading/checklist.md b/content/riak/kv/2.0.6/setup/upgrading/checklist.md
index 23456f5314..403dc822de 100644
--- a/content/riak/kv/2.0.6/setup/upgrading/checklist.md
+++ b/content/riak/kv/2.0.6/setup/upgrading/checklist.md
@@ -89,7 +89,7 @@ We have compiled these considerations and questions into separate categories for
place if `allow_mult` is set to `true`?
- Have you carefully weighed the [consistency trade-offs][concept eventual consistency] that must be made if `allow_mult` is set to `false`?
- Are all of your [apps replication properties][apps replication properties] configured correctly and uniformly across the cluster?
- - If you are using [Riak Search][usage search], is it enabled on all
+ - If you are using [Riak search][usage search], is it enabled on all
nodes? If you are not, has it been disabled on all nodes?
- If you are using [strong consistency][concept strong consistency] for some or all of your
data:
diff --git a/content/riak/kv/2.0.6/setup/upgrading/search.md b/content/riak/kv/2.0.6/setup/upgrading/search.md
index 72195de2e4..42c650325a 100644
--- a/content/riak/kv/2.0.6/setup/upgrading/search.md
+++ b/content/riak/kv/2.0.6/setup/upgrading/search.md
@@ -16,7 +16,7 @@ version_history:
If you're using Search in a version of Riak prior to 2.0 (1.3.0 to
1.4.x), you should follow these steps to migrate your search indexes
-from the legacy `merge_index` to the new Solr-backed ([Yokozuna](../../../using/reference/search)) indexes. The legacy version of Riak Search is now deprecated
+from the legacy `merge_index` to the new Solr-backed ([Yokozuna](../../../using/reference/search)) indexes. The legacy version of Riak search is now deprecated
and does not support most new 2.0 features, i.e. no [Riak Data Types](../../../developing/data-types), [bucket types](../../../using/reference/bucket-types), [strong consistency](../../../using/reference/strong-consistency), or [security](../../../using/security/), so we highly recommend that you migrate.
And please note that the legacy `merge_index`-based search (aka legacy
@@ -32,7 +32,7 @@ these steps at a time when your cluster is relatively light on traffic,
i.e. _not_ the week before Christmas.
The main goal of a live migration is to stand up indexes in the new Riak
-Search that parallel the existing ones in legacy. New writes add entries
+search that parallel the existing ones in legacy. New writes add entries
to both indexes while AAE adds entries in the new indexes for existing
data.
@@ -72,7 +72,7 @@ algorithm is bad at getting rid of large index files.
## Steps to Upgrading
1. First, you'll perform a normal [rolling upgrade](../cluster).
- As you upgrade, enable `yokozuna` (the new Riak Search library) on
+ As you upgrade, enable `yokozuna` (the new Riak search library) on
each node. If you're still using `app.config` it's called `yokozuna`.
If you've chosen to upgrade to the new `riak.conf` config option, it's
called `search`.
@@ -220,7 +220,7 @@ have occurred on every node.
node you know that AAE has brought all new indexes up to date.
7. Next, call the following command that will give HTTP and PB query
-control to the new Riak Search.
+control to the new Riak search.
```curl
riak-admin search switch-to-new-search
@@ -244,7 +244,7 @@ buckets. This deactivates legacy Search.
-d '{"props":{"search": false}}'
```
-9. Disable the Riak Search process on each node by setting `riak_search`
+9. Disable the Riak search process on each node by setting `riak_search`
`enabled` to `false`.
```appconfig
diff --git a/content/riak/kv/2.0.6/setup/upgrading/version.md b/content/riak/kv/2.0.6/setup/upgrading/version.md
index 2ebc90f10e..6551157f11 100644
--- a/content/riak/kv/2.0.6/setup/upgrading/version.md
+++ b/content/riak/kv/2.0.6/setup/upgrading/version.md
@@ -36,7 +36,7 @@ was built with those features in mind. There are official
While we strongly recommend using the newest versions of these clients,
older versions will still work with Riak 2.0, with the drawback that
-those older clients will not be able to take advantage of [new features](/riak/kv/2.0.6/introduction) like [data types](/riak/kv/2.0.6/developing/data-types) or the new [Riak Search](/riak/kv/2.0.6/using/reference/search).
+those older clients will not be able to take advantage of [new features](/riak/kv/2.0.6/introduction) like [data types](/riak/kv/2.0.6/developing/data-types) or the new [Riak search](/riak/kv/2.0.6/using/reference/search).
## Bucket Types
@@ -140,7 +140,7 @@ If you decide to upgrade to version 2.0, you can still downgrade your
cluster to an earlier version of Riak if you wish, _unless_ you perform
one of the following actions in your cluster:
-* Index data to be used in conjunction with the new [Riak Search](/riak/kv/2.0.6/using/reference/search).
+* Index data to be used in conjunction with the new [Riak search](/riak/kv/2.0.6/using/reference/search).
* Create _and_ activate one or more [bucket types](/riak/kv/2.0.6/using/reference/bucket-types/). By extension, you will not be able to downgrade your cluster if you have used the following features, both of which rely on bucket types:
- [Strong consistency](/riak/2.0.6/using/reference/strong-consistency)
- [Riak Data Types](/riak/kv/2.0.6/developing/data-types)
@@ -208,7 +208,7 @@ default to a value of `15`, which can cause problems in some clusters.
## Upgrading Search
-Information on upgrading Riak Search to 2.0 can be found in our
+Information on upgrading Riak search to 2.0 can be found in our
[Search upgrade guide](/riak/kv/2.0.6/setup/upgrading/search).
## Migrating from Short Names
diff --git a/content/riak/kv/2.0.6/using/admin/riak-admin.md b/content/riak/kv/2.0.6/using/admin/riak-admin.md
index 1a560a40c4..e93adb278d 100644
--- a/content/riak/kv/2.0.6/using/admin/riak-admin.md
+++ b/content/riak/kv/2.0.6/using/admin/riak-admin.md
@@ -590,7 +590,7 @@ riak-admin repair-2i kill
## search
The search command provides sub-commands for various administrative
-work related to the new Riak Search.
+work related to the new Riak search.
```bash
riak-admin search
@@ -639,7 +639,7 @@ riak-admin search switch-to-new-search
```
Switch handling of the HTTP `/solr/<index>/select` resource and
-protocol buffer query messages from legacy Riak Search to new Search
+protocol buffer query messages from legacy Riak search to new Search
(Yokozuna).
## services
diff --git a/content/riak/kv/2.0.6/using/cluster-operations/active-anti-entropy.md b/content/riak/kv/2.0.6/using/cluster-operations/active-anti-entropy.md
index b7306354bd..c0ac1a66a3 100644
--- a/content/riak/kv/2.0.6/using/cluster-operations/active-anti-entropy.md
+++ b/content/riak/kv/2.0.6/using/cluster-operations/active-anti-entropy.md
@@ -262,9 +262,9 @@ AAE-related background tasks, analogous to [open files limit](../../performance/
## AAE and Riak Search
Riak's AAE subsystem works to repair object inconsistencies both in
-normal key/value objects and in data related to [Riak Search](../../../developing/usage/search). In particular, AAE acts on indexes stored in
+normal key/value objects and in data related to [Riak search](../../../developing/usage/search). In particular, AAE acts on indexes stored in
[Solr](http://lucene.apache.org/solr/), the search platform that drives
-Riak Search. Implementation details for AAE and Search can be found in
+Riak search. Implementation details for AAE and Search can be found in
the [Search Details](../../reference/search/#active-anti-entropy-aae)
documentation.
diff --git a/content/riak/kv/2.0.6/using/cluster-operations/inspecting-node.md b/content/riak/kv/2.0.6/using/cluster-operations/inspecting-node.md
index 6c8266d9e5..7218eda2b9 100644
--- a/content/riak/kv/2.0.6/using/cluster-operations/inspecting-node.md
+++ b/content/riak/kv/2.0.6/using/cluster-operations/inspecting-node.md
@@ -297,7 +297,7 @@ Stat | Description
`erlydtl_version` | [ErlyDTL](http://github.com/erlydtl/erlydtl)
`riak_control_version` | [Riak Control](http://github.com/basho/riak_control)
`cluster_info_version` | [Cluster Information](http://github.com/basho/cluster_info)
-`riak_search_version` | [Riak Search](http://github.com/basho/riak_search)
+`riak_search_version` | [Riak search](http://github.com/basho/riak_search)
`merge_index_version` | [Merge Index](http://github.com/basho/merge_index)
`riak_kv_version` | [Riak KV](http://github.com/basho/riak_kv)
`sidejob_version` | [Sidejob](http://github.com/basho/sidejob)
@@ -326,17 +326,17 @@ Stat | Description
### Riak Search Statistics
-The following statistics related to Riak Search message queues are
+The following statistics related to Riak search message queues are
available.
Stat | Description
-----------------------------|---------------------------------------------------
-`riak_search_vnodeq_max` | Maximum number of unprocessed messages all virtual node (vnode) message queues in the Riak Search subsystem have received on this node in the last minute
-`riak_search_vnodeq_mean` | Mean number of unprocessed messages all vnode message queues in the Riak Search subsystem have received on this node in the last minute
-`riak_search_vnodeq_median` | Median number of unprocessed messages all vnode message queues in the Riak Search subsystem have received on this node in the last minute
-`riak_search_vnodeq_min` | Minimum number of unprocessed messages all vnode message queues in the Riak Search subsystem have received on this node in the last minute
-`riak_search_vnodeq_total` | Total number of unprocessed messages all vnode message queues in the Riak Search subsystem have received on this node since it was started
-`riak_search_vnodes_running` | Total number of vnodes currently running in the Riak Search subsystem
+`riak_search_vnodeq_max` | Maximum number of unprocessed messages all virtual node (vnode) message queues in the Riak search subsystem have received on this node in the last minute
+`riak_search_vnodeq_mean` | Mean number of unprocessed messages all vnode message queues in the Riak search subsystem have received on this node in the last minute
+`riak_search_vnodeq_median` | Median number of unprocessed messages all vnode message queues in the Riak search subsystem have received on this node in the last minute
+`riak_search_vnodeq_min` | Minimum number of unprocessed messages all vnode message queues in the Riak search subsystem have received on this node in the last minute
+`riak_search_vnodeq_total` | Total number of unprocessed messages all vnode message queues in the Riak search subsystem have received on this node since it was started
+`riak_search_vnodes_running` | Total number of vnodes currently running in the Riak search subsystem
Note that under ideal operation and with the exception of
`riak_search_vnodes_running` these statistics should contain low values
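The queue statistics above can be sanity-checked programmatically from a stats snapshot (e.g. one fetched from Riak's `/stats` HTTP endpoint). Below is a minimal, illustrative Python sketch; the stat names come from the table above, but the sample values and the threshold are invented for demonstration.

```python
# Illustrative sketch: flag a backed-up Riak search vnode queue from a
# stats snapshot. Stat names are from the table above; the sample
# values below are invented for demonstration purposes.
stats = {
    "riak_search_vnodeq_max": 120,
    "riak_search_vnodeq_mean": 35,
    "riak_search_vnodeq_median": 20,
    "riak_search_vnodeq_min": 0,
    "riak_search_vnodeq_total": 48210,
    "riak_search_vnodes_running": 16,
}

def search_queue_backed_up(stats, mean_threshold=10):
    """Under ideal operation the queue stats stay low; a sustained
    high mean suggests Solr is falling behind on indexing."""
    return stats["riak_search_vnodeq_mean"] > mean_threshold

print(search_queue_backed_up(stats))  # True for this sample
```

The threshold here is arbitrary; what counts as "backed up" depends on your write volume and Solr capacity.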
@@ -449,7 +449,7 @@ Check | Description
`ring_membership` | Cluster membership validity
`ring_preflists` | Check if the ring satisfies `n_val`
`ring_size` | Check if the ring size is valid
-`search` | Check whether Riak Search is enabled on all nodes
+`search` | Check whether Riak search is enabled on all nodes
The `--level` flag enables you to specify the log level and thus to
filter messages based on type. You can pass in any of the message types
diff --git a/content/riak/kv/2.0.6/using/cluster-operations/strong-consistency.md b/content/riak/kv/2.0.6/using/cluster-operations/strong-consistency.md
index cf57ed6465..0aea7e7123 100644
--- a/content/riak/kv/2.0.6/using/cluster-operations/strong-consistency.md
+++ b/content/riak/kv/2.0.6/using/cluster-operations/strong-consistency.md
@@ -14,7 +14,7 @@ toc: true
Please Note:
-Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
## Monitoring Strong Consistency
diff --git a/content/riak/kv/2.0.6/using/reference/handoff.md b/content/riak/kv/2.0.6/using/reference/handoff.md
index be240bd527..e12e5bb185 100644
--- a/content/riak/kv/2.0.6/using/reference/handoff.md
+++ b/content/riak/kv/2.0.6/using/reference/handoff.md
@@ -121,7 +121,7 @@ handoff.use_background_manager = on
### Maximum Rejects
-If you're using Riak features such as [Riak Search](/riak/kv/2.0.6/developing/usage/search/),
+If you're using Riak features such as [Riak search](/riak/kv/2.0.6/developing/usage/search/),
those subsystems can block handoff of primary key/value data, i.e. data
that you interact with via normal reads and writes.
diff --git a/content/riak/kv/2.0.6/using/reference/search.md b/content/riak/kv/2.0.6/using/reference/search.md
index 3fe649c4a7..6bbb88f1aa 100644
--- a/content/riak/kv/2.0.6/using/reference/search.md
+++ b/content/riak/kv/2.0.6/using/reference/search.md
@@ -19,14 +19,14 @@ aliases:
> **Note on Search 2.0 vs. Legacy Search**
>
-> This document refers to the new Riak Search 2.0 with
+> This document refers to the new Riak search 2.0 with
[Solr](http://lucene.apache.org/solr/) integration (codenamed
-Yokozuna). For information about the deprecated Riak Search, visit [the old Using Riak Search docs](http://docs.basho.com/riak/1.4.10/dev/using/search/).
+Yokozuna). For information about the deprecated Riak search, visit [the old Using Riak search docs](http://docs.basho.com/riak/1.4.10/dev/using/search/).
-The project that implements Riak Search is codenamed Yokozuna. This is a
+The project that implements Riak search is codenamed Yokozuna. This is a
more detailed overview of the concepts and reasons behind the design of
Yokozuna, for those interested. If you're simply looking to use Riak
-Search, you should check out the [Using Search](/riak/kv/2.0.6/developing/usage/search) document.
+search, you should check out the [Using Search](/riak/kv/2.0.6/developing/usage/search) document.

@@ -35,30 +35,30 @@ Search, you should check out the [Using Search](/riak/kv/2.0.6/developing/usage/
In Erlang OTP, an "application" is a group of modules and Erlang
processes which together perform a specific task. The word application
is confusing because most people think of an application as an entire
-program such as Emacs or Photoshop. But Riak Search is just a sub-system
+program such as Emacs or Photoshop. But Riak search is just a sub-system
in Riak itself. Erlang applications are often stand-alone, but Riak
-Search is more like an appendage of Riak. It requires other subsystems
+search is more like an appendage of Riak. It requires other subsystems
like Riak Core and KV, but also extends their functionality by providing
search capabilities for KV data.
-The purpose of Riak Search is to bring more sophisticated and robust
+The purpose of Riak search is to bring more sophisticated and robust
query and search support to Riak. Many people consider Lucene and
programs built on top of it, such as Solr, as the standard for
open-source search. There are many successful applications built on
Lucene/Solr, and it sets the standard for the feature set that
developers and users expect. Meanwhile, Riak has a great story as a
-highly-available, distributed key/value store. Riak Search takes
+highly-available, distributed key/value store. Riak search takes
advantage of the fact that Riak already knows how to do the distributed
bits, combining its feature set with that of Solr, taking advantage of
the strengths of each.
-Riak Search is a mediator between Riak and Solr. There is nothing
+Riak search is a mediator between Riak and Solr. There is nothing
stopping a user from deploying these two programs separately, but this
would leave the user responsible for the glue between them. That glue
can be tricky to write. It requires dealing with monitoring, querying,
indexing, and dissemination of information.
-Unlike Solr by itself, Riak Search knows how to do all of the following:
+Unlike Solr by itself, Riak search knows how to do all of the following:
* Listen for changes in key/value (KV) data and to make the appropriate
changes to indexes that live in Solr. It also knows how to take a user
@@ -75,13 +75,13 @@ system (OS) process running a JVM which hosts Solr on the Jetty
application server. This OS process is a child of the Erlang OS process
running Riak.
-Riak Search has a `gen_server` process which monitors the JVM OS
+Riak search has a `gen_server` process which monitors the JVM OS
process. The code for this server is in `yz_solr_proc`. When the JVM
process crashes, this server crashes, causing its supervisor to restart
it.
If there is more than 1 restart in 45 seconds, the entire Riak node will
-be shut down. If Riak Search is enabled and Solr cannot function for
+be shut down. If Riak search is enabled and Solr cannot function for
some reason, the Riak node needs to go down so that the user will notice
and take corrective action.
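The restart policy just described — a second Solr restart within a 45-second window brings the whole node down — follows the shape of an Erlang supervisor restart-intensity rule. The following is a simplified Python model of that rule for illustration only, not Riak's actual supervisor code.

```python
# Illustrative model of the restart-intensity rule described above:
# more than one restart within a 45-second window is fatal to the node.
# This is a simplified sketch, not Riak's actual supervisor logic.
WINDOW_SECONDS = 45
MAX_RESTARTS = 1  # a second restart inside the window shuts the node down

def should_shut_down(restart_times, now):
    """restart_times: timestamps (in seconds) of past Solr restarts."""
    recent = [t for t in restart_times if now - t <= WINDOW_SECONDS]
    return len(recent) > MAX_RESTARTS

print(should_shut_down([100, 130], now=140))  # two restarts within 45s
```

In real Erlang/OTP terms this corresponds to a supervisor's maximum restart intensity and period settings.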
@@ -91,7 +91,7 @@ This double monitoring along with the crash semantics means that neither
process may exist without the other. They are either both up or both
down.
-All other communication between Riak Search and Solr is performed via
+All other communication between Riak search and Solr is performed via
HTTP, including querying, indexing, and administration commands. The
ibrowse Erlang HTTP client is used to manage these communications as
both it and the Jetty container hosting Solr pool HTTP connections,
@@ -105,11 +105,11 @@ contains index entries for objects. Each such index maintains its own
set of files on disk---a critical difference from Riak KV, in which a
bucket is a purely logical entity and not physically disjoint at all. A
Solr index requires significantly less disk space than the corresponding
-legacy Riak Search index, depending on the Solr schema used.
+legacy Riak search index, depending on the Solr schema used.
Indexes may be associated with zero or more buckets. At creation time,
however, each index has no associated buckets---unlike the legacy Riak
-Search, indexes in the new Riak Search do not implicitly create bucket
+search, indexes in the new Riak search do not implicitly create bucket
associations, meaning that this must be done as a separate configuration
step.
@@ -141,7 +141,7 @@ flat collection of field-value pairs. "Flat" here means that a field's
value cannot be a nested structure of field-value pairs; the values are
treated as-is (non-composite is another way to say it).
-Because of this mismatch between KV and Solr, Riak Search must act as a
+Because of this mismatch between KV and Solr, Riak search must act as a
mediator between the two, meaning it must have a way to inspect a KV
object and create a structure which Solr can ingest for indexing. In
Solr this structure is called a **document**. This task of creating a
diff --git a/content/riak/kv/2.0.6/using/reference/secondary-indexes.md b/content/riak/kv/2.0.6/using/reference/secondary-indexes.md
index f15e2bfa53..ed4245ad1a 100644
--- a/content/riak/kv/2.0.6/using/reference/secondary-indexes.md
+++ b/content/riak/kv/2.0.6/using/reference/secondary-indexes.md
@@ -18,11 +18,11 @@ aliases:
[usage bucket types]: /riak/kv/2.0.6/developing/usage/bucket-types
[use ref strong consistency]: /riak/2.1.3/using/reference/strong-consistency
-> **Note: Riak Search preferred for querying**
+> **Note: Riak search preferred for querying**
>
> If you're interested in non-primary-key-based querying in Riak, i.e. if
you're looking to go beyond straightforward K/V operations, we now
-recommend [Riak Search](/riak/kv/2.0.6/developing/usage/search/) rather than secondary indexes for a variety of reasons. Riak Search has a far more capacious querying API and can be used with all of Riak's storage backends.
+recommend [Riak search](/riak/kv/2.0.6/developing/usage/search/) rather than secondary indexes for a variety of reasons. Riak search has a far more capacious querying API and can be used with all of Riak's storage backends.
This document provides implementation and other details for Riak's
[secondary indexes](/riak/kv/2.0.6/developing/usage/secondary-indexes/) \(2i) feature.
diff --git a/content/riak/kv/2.0.6/using/reference/statistics-monitoring.md b/content/riak/kv/2.0.6/using/reference/statistics-monitoring.md
index 90d7d878ab..db6a07e980 100644
--- a/content/riak/kv/2.0.6/using/reference/statistics-monitoring.md
+++ b/content/riak/kv/2.0.6/using/reference/statistics-monitoring.md
@@ -134,7 +134,7 @@ Metric | Also | Notes
:------|:-----|:------------------
```node_get_fsm_siblings_mean``` | ```_median```, ```_95```, ```_99```, ```_100``` | Number of siblings encountered during all GET operations by this node within the last minute. Watch for abnormally high sibling counts, especially max ones.
```node_get_fsm_objsize_mean``` | ```_median```, ```_95```, ```_99```, ```_100``` | Object size encountered by this node within the last minute. Abnormally large objects (especially paired with high sibling counts) can indicate sibling explosion.
-```riak_search_vnodeq_mean``` | ```_median```, ```_95```, ```_99```, ```_100``` | Number of unprocessed messages in the vnode message queues of the Riak Search subsystem on this node in the last minute. The queues give you an idea of how backed up Solr is getting.
+```riak_search_vnodeq_mean``` | ```_median```, ```_95```, ```_99```, ```_100``` | Number of unprocessed messages in the vnode message queues of the Riak search subsystem on this node in the last minute. The queues give you an idea of how backed up Solr is getting.
```search_index_fail_one``` | | Number of "Failed to index document" errors Search encountered for the last minute
```pbc_active``` | | Number of currently active protocol buffer connections
```pbc_connects``` | | Number of new protocol buffer connections established during the last minute
diff --git a/content/riak/kv/2.0.6/using/repair-recovery/errors.md b/content/riak/kv/2.0.6/using/repair-recovery/errors.md
index 9b344fa482..296f8037b8 100644
--- a/content/riak/kv/2.0.6/using/repair-recovery/errors.md
+++ b/content/riak/kv/2.0.6/using/repair-recovery/errors.md
@@ -328,7 +328,7 @@ exit with reason bad return value: {error,eaddrinuse} in context start_error | A
exited with reason: eaddrnotavail in gen_server:init_it/6 line 320 | An error like this example can result when Riak cannot bind to the addresses specified in the configuration. In this case, you should verify HTTP and Protocol Buffers addresses in `app.config` and ensure that the ports being used are not in the privileged (1-1024) range as the `riak` user will not have access to such ports.
gen_server riak_core_capability terminated with reason: no function clause matching orddict:fetch('riak@192.168.2.2', []) line 72 | Error output like this example can indicate that a previously running Riak node with an original `-name` value in `vm.args` has been modified by simply changing the value in `vm.args` and not properly through `riak-admin cluster replace`.
** Configuration error: [FRAMEWORK-MIB]: missing context.conf file => generating a default file | This error is commonly encountered when starting Riak Enterprise without prior [SNMP](/riak/kv/2.0.6/using/reference/snmp) configuration.
-RPC to 'node@example.com' failed: {'EXIT', {badarg, [{ets,lookup, [schema_table,<<"search-example">>], []} {riak_search_config,get_schema,1, [{file,"src/riak_search_config.erl"}, {line,69}]} ...| This error can be caused when attempting to use Riak Search without first enabling it in each node's `app.config`. See the [configuration files][config reference] documentation for more information on enabling Riak Search.
+RPC to 'node@example.com' failed: {'EXIT', {badarg, [{ets,lookup, [schema_table,<<"search-example">>], []} {riak_search_config,get_schema,1, [{file,"src/riak_search_config.erl"}, {line,69}]} ...| This error can be caused when attempting to use Riak search without first enabling it in each node's `app.config`. See the [configuration files][config reference] documentation for more information on enabling Riak search.
### More
diff --git a/content/riak/kv/2.0.6/using/repair-recovery/repairs.md b/content/riak/kv/2.0.6/using/repair-recovery/repairs.md
index 0d0af159c8..5709bbfe58 100644
--- a/content/riak/kv/2.0.6/using/repair-recovery/repairs.md
+++ b/content/riak/kv/2.0.6/using/repair-recovery/repairs.md
@@ -57,7 +57,7 @@ riak-admin repair-2i kill
## Repairing Search Indexes
-Riak Search indexes currently have no form of anti-entropy (such as read-repair). Furthermore, for performance and load balancing reasons, Search reads from one random node. This means that when a replica loss has occurred, inconsistent results may be returned.
+Riak search indexes currently have no form of anti-entropy (such as read-repair). Furthermore, for performance and load balancing reasons, Search reads from one random node. This means that when a replica loss has occurred, inconsistent results may be returned.
### Running a Repair
diff --git a/content/riak/kv/2.0.6/using/repair-recovery/secondary-indexes.md b/content/riak/kv/2.0.6/using/repair-recovery/secondary-indexes.md
index 0a1e871e11..3337169ecf 100644
--- a/content/riak/kv/2.0.6/using/repair-recovery/secondary-indexes.md
+++ b/content/riak/kv/2.0.6/using/repair-recovery/secondary-indexes.md
@@ -51,7 +51,7 @@ riak-admin repair-2i kill
## Repairing Search Indexes
-Riak Search indexes currently have no form of anti-entropy (such as read-repair). Furthermore, for performance and load balancing reasons, Search reads from one random node. This means that when a replica loss has occurred, inconsistent results may be returned.
+Riak search indexes currently have no form of anti-entropy (such as read-repair). Furthermore, for performance and load balancing reasons, Search reads from one random node. This means that when a replica loss has occurred, inconsistent results may be returned.
### Running a Repair
diff --git a/content/riak/kv/2.0.6/using/security/basics.md b/content/riak/kv/2.0.6/using/security/basics.md
index bc6bdd7bbc..7233852afe 100644
--- a/content/riak/kv/2.0.6/using/security/basics.md
+++ b/content/riak/kv/2.0.6/using/security/basics.md
@@ -44,7 +44,7 @@ when turning on Riak security. Missing one of these steps will almost
certainly break your application, so make sure that you have done each
of the following **before** enabling security:
-1. Make certain that the original Riak Search (version 1) and link
+1. Make certain that the original Riak search (version 1) and link
walking are not required. Enabling security will break this
functionality. If you wish to use security and Search together, you
will need to use the [new Search feature](/riak/kv/2.0.6/developing/usage/search/).
@@ -487,11 +487,11 @@ Permission | Operation
### Search Query Permission (Riak Search version 1)
Security is incompatible with the original (and now deprecated) Riak
-Search. Riak Search version 1 will stop working if security is enabled.
+search. Riak search version 1 will stop working if security is enabled.
### Search Query Permissions (Riak Search version 2, aka Yokozuna)
-If you are using the new Riak Search, i.e. the Solr-compatible search
+If you are using the new Riak search, i.e. the Solr-compatible search
capabilities included with Riak versions 2.0 and greater, the following
search-related permissions can be granted/revoked:
@@ -508,8 +508,8 @@ disabled, you will get the following error:
>
> `{error,{unknown_permission,"search.query"}}`
>
-> More information on Riak Search and how to enable it can be found in the
-[Riak Search Settings](/riak/kv/2.0.6/configuring/search/) document.
+> More information on Riak search and how to enable it can be found in the
+[Riak search Settings](/riak/kv/2.0.6/configuring/search/) document.
#### Usage Examples
diff --git a/content/riak/kv/2.0.7/configuring/reference.md b/content/riak/kv/2.0.7/configuring/reference.md
index be25645095..f9d8d0691c 100644
--- a/content/riak/kv/2.0.7/configuring/reference.md
+++ b/content/riak/kv/2.0.7/configuring/reference.md
@@ -273,7 +273,7 @@ stored
search.root_dir |
-The root directory for Riak Search, under which index data and
+ | The root directory for Riak search, under which index data and
configuration is stored. |
./data/yz |
@@ -1386,7 +1386,7 @@ Configurable parameters for intra-cluster, i.e. inter-node, [handoff][cluster op
handoff.max_rejects |
The maximum number of times that a secondary system within Riak,
-such as Riak Search, can block handoff
+such as Riak search, can block handoff
of primary key/value data. The approximate maximum duration that a vnode
can be blocked can be determined by multiplying this setting by
vnode_management_timer . If you want to prevent handoff from
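The multiplication just described is simple to work through. The defaults below are assumptions for illustration — check the actual values in your own `riak.conf`.

```python
# Approximate maximum time a vnode can be blocked, per the rule above:
# handoff.max_rejects multiplied by vnode_management_timer.
# Both values below are assumed defaults, used only for illustration.
max_rejects = 6              # assumed handoff.max_rejects
vnode_management_timer = 10  # assumed value, in seconds

max_blocked_seconds = max_rejects * vnode_management_timer
print(max_blocked_seconds)  # 60
```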
@@ -1630,7 +1630,7 @@ if the JMX server crashes. |
> **Please Note:**
>
-> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
Riak's strong consistency feature has a variety of tunable parameters
that allow you to enable and disable strong consistency, modify the
@@ -2053,7 +2053,7 @@ only in Riak Enterprise 2.0 and later.
#### Upgrading Riak Search with `advanced.config`
-If you are upgrading to Riak 2.x and wish to upgrade to the new [Riak Search][use ref search]\(codename Yokozuna), you will need to enable
+If you are upgrading to Riak 2.x and wish to upgrade to the new [Riak search][use ref search] \(codename Yokozuna), you will need to enable
legacy Search while the upgrade is underway. You can add the following
snippet to your `advanced.config` configuration to do so:
diff --git a/content/riak/kv/2.0.7/configuring/search.md b/content/riak/kv/2.0.7/configuring/search.md
index ce1bb5a09f..a20bd9bde5 100644
--- a/content/riak/kv/2.0.7/configuring/search.md
+++ b/content/riak/kv/2.0.7/configuring/search.md
@@ -25,7 +25,7 @@ aliases:
[security index]: /riak/kv/2.0.7/using/security/
-This document covers how to use the Riak Search (with
+This document covers how to use the Riak search (with
[Solr](http://lucene.apache.org/solr/) integration) subsystem from an
operational perspective.
@@ -43,7 +43,7 @@ If you are looking for developer-focused docs, we recommend the following:
We'll be walking through:
1. [Prerequisites][#prerequisites]
-2. [Enable Riak Search][#enabling-riak-search]
+2. [Enable Riak search][#enabling-riak-search]
3. [Riak.conf Configuration Settings][#riak-config-settings]
4. [Additional Solr Information][#more-on-solr]
@@ -61,7 +61,7 @@ page](http://www.oracle.com/technetwork/java/javase/documentation/index.html).
## Enabling Riak Search
-Riak Search is not enabled by default, so you must enable it in every
+Riak search is not enabled by default, so you must enable it in every
node's [configuration file][config reference] as follows:
```riak.conf
@@ -78,7 +78,7 @@ optional. A list of these parameters can also be found in our
Field | Default | Valid values | Description
:-----|:--------|:-------------|:-----------
`search` | `off` | `on` or `off` | Enable or disable Search
-`search.anti_entropy.data_dir` | `./data/yz_anti_entropy` | Directory | The directory in which Riak Search stores files related to [active anti-entropy][glossary aae]
+`search.anti_entropy.data_dir` | `./data/yz_anti_entropy` | Directory | The directory in which Riak search stores files related to [active anti-entropy][glossary aae]
`search.root_dir` | `./data/yz` | Directory | The root directory in which index data and configuration is stored
`search.solr.start_timeout` | `30s` | Integer with time units (e.g. 2m) | How long Riak will wait for Solr to start (attempts twice before shutdown). Values lower than 1s will be rounded up to 1s.
`search.solr.port` | `8093` | Integer | The port number to which Solr binds (note: binds on every interface)
@@ -87,7 +87,7 @@ Field | Default | Valid values | Description
`search.queue.batch.minimum` | `1` | Integer | The minimum batch size, in number of Riak objects. Any batches that are smaller than this amount will not be immediately flushed to Solr, but are guaranteed to be flushed within the `search.queue.batch.flush_interval`.
`search.queue.batch.maximum` | `100` | Integer | The maximum batch size, in number of Riak objects. Any batches that are larger than this amount will be split, where the first `search.queue.batch.maximum` objects will be flushed to Solr and the remaining objects enqueued for that index will be retained until the next batch is delivered. This parameter ensures that at most `search.queue.batch.maximum` objects will be delivered into Solr in any given request.
`search.queue.batch.flush_interval` | `1000` | `ms`, `s`, `m`, `h` | The maximum delay between notification to flush batches to Solr. This setting is used to increase or decrease the frequency of batch delivery into Solr, specifically for relatively low-volume input into Riak. This setting ensures that data will be delivered into Solr in accordance with the `search.queue.batch.minimum` and `search.queue.batch.maximum` settings within the specified interval. Batches that are smaller than `search.queue.batch.minimum` will be delivered to Solr within this interval. This setting will generally have no effect on heavily loaded systems. You may use any time unit; the default is in milliseconds.
-`search.queue.high_watermark` | `10000` | Integer | The queue high water mark. If the total number of queued messages in a Solrq worker instance exceed this limit, then the calling vnode will be blocked until the total number falls below this limit. This parameter exercises flow control between Riak and the Riak Search batching subsystem, if writes into Solr start to fall behind.
+`search.queue.high_watermark` | `10000` | Integer | The queue high water mark. If the total number of queued messages in a Solrq worker instance exceeds this limit, then the calling vnode will be blocked until the total number falls below this limit. This parameter exercises flow control between Riak and the Riak search batching subsystem, if writes into Solr start to fall behind.
`search.queue.worker_count` | `10` | Integer | The number of Solr queue workers to instantiate. Solr queue workers are responsible for enqueing objects for insertion or update into Solr. Increasing the number of Solrq workers distributes the queuing of objects and can lead to greater throughput under high load, potentially at the expense of smaller batch sizes.
`search.queue.helper_count` | `10` | Integer | The number of Solr queue helpers to instantiate. Solr queue helpers are responsible for delivering batches of data into Solr. Increasing the number of Solrq helpers will increase concurrent writes into Solr.
`search.index.error_threshold.failure_count` | `3` | Integer | The number of failures encountered while updating a search index within `search.queue.error_threshold.failure_interval` before Riak will skip updates to that index.
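The fields in the table above map directly onto `riak.conf` entries. For example, a node might tune Search batching like this (the values here are illustrative, not recommendations):

```riak.conf
search = on
search.solr.port = 8093
search.queue.batch.minimum = 10
search.queue.batch.maximum = 100
search.queue.batch.flush_interval = 500ms
search.queue.high_watermark = 10000
```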
@@ -99,7 +99,7 @@ Field | Default | Valid values | Description
## More on Solr
### Solr JVM and Ports
-Riak Search runs one Solr process per node to manage its indexing and
+Riak search runs one Solr process per node to manage its indexing and
search functionality. While the underlying project manages
index distribution, node coverage for queries, active anti-entropy
(AAE), and JVM process management, you should provide plenty of RAM and disk space for running both Riak and the JVM running Solr. We recommend a minimum of 6GB of RAM per node.
diff --git a/content/riak/kv/2.0.7/configuring/strong-consistency.md b/content/riak/kv/2.0.7/configuring/strong-consistency.md
index 0fbcd0a95a..4de28c5483 100644
--- a/content/riak/kv/2.0.7/configuring/strong-consistency.md
+++ b/content/riak/kv/2.0.7/configuring/strong-consistency.md
@@ -38,7 +38,7 @@ toc: true
> **Please Note:**
>
-> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
This document provides information on configuring and monitoring a Riak
cluster's optional strong consistency subsystem. Documentation for
diff --git a/content/riak/kv/2.0.7/developing/api/http/delete-search-index.md b/content/riak/kv/2.0.7/developing/api/http/delete-search-index.md
index b30b9c1762..06892f8019 100644
--- a/content/riak/kv/2.0.7/developing/api/http/delete-search-index.md
+++ b/content/riak/kv/2.0.7/developing/api/http/delete-search-index.md
@@ -15,7 +15,7 @@ aliases:
- /riak/kv/2.0.7/dev/references/http/delete-search-index
---
-Deletes a Riak Search index.
+Deletes a Riak search index.
## Request
diff --git a/content/riak/kv/2.0.7/developing/api/http/fetch-search-index.md b/content/riak/kv/2.0.7/developing/api/http/fetch-search-index.md
index 1d99ac0110..4b75a3e7fd 100644
--- a/content/riak/kv/2.0.7/developing/api/http/fetch-search-index.md
+++ b/content/riak/kv/2.0.7/developing/api/http/fetch-search-index.md
@@ -15,7 +15,7 @@ aliases:
- /riak/kv/2.0.7/dev/references/http/fetch-search-index
---
-Retrieves information about a Riak Search [index](/riak/kv/2.0.7/developing/usage/search/#simple-setup).
+Retrieves information about a Riak search [index](/riak/kv/2.0.7/developing/usage/search/#simple-setup).
## Request
diff --git a/content/riak/kv/2.0.7/developing/api/http/fetch-search-schema.md b/content/riak/kv/2.0.7/developing/api/http/fetch-search-schema.md
index be49150737..89793e0c4c 100644
--- a/content/riak/kv/2.0.7/developing/api/http/fetch-search-schema.md
+++ b/content/riak/kv/2.0.7/developing/api/http/fetch-search-schema.md
@@ -35,4 +35,4 @@ GET /search/schema/
## Response
If the schema is found, Riak will return the contents of the schema as
-XML (all Riak Search schemas are XML).
+XML (all Riak search schemas are XML).
diff --git a/content/riak/kv/2.0.7/developing/api/http/search-index-info.md b/content/riak/kv/2.0.7/developing/api/http/search-index-info.md
index bb5dacdac9..564d13487c 100644
--- a/content/riak/kv/2.0.7/developing/api/http/search-index-info.md
+++ b/content/riak/kv/2.0.7/developing/api/http/search-index-info.md
@@ -47,6 +47,6 @@ Below is the example output if there is one Search index, called
#### Typical Error Codes
-* `404 Object Not Found` --- Typically returned if Riak Search is not
+* `404 Object Not Found` --- Typically returned if Riak search is not
currently enabled on the node
* `503 Service Unavailable` --- The request timed out internally
diff --git a/content/riak/kv/2.0.7/developing/api/http/store-search-index.md b/content/riak/kv/2.0.7/developing/api/http/store-search-index.md
index ca22b59e27..388d72757c 100644
--- a/content/riak/kv/2.0.7/developing/api/http/store-search-index.md
+++ b/content/riak/kv/2.0.7/developing/api/http/store-search-index.md
@@ -15,7 +15,7 @@ aliases:
- /riak/kv/2.0.7/dev/references/http/store-search-index
---
-Creates a new Riak Search [index](/riak/kv/2.0.7/developing/usage/search/#simple-setup).
+Creates a new Riak search [index](/riak/kv/2.0.7/developing/usage/search/#simple-setup).
## Request
diff --git a/content/riak/kv/2.0.7/developing/api/http/store-search-schema.md b/content/riak/kv/2.0.7/developing/api/http/store-search-schema.md
index 19230b224b..f7203d9bc0 100644
--- a/content/riak/kv/2.0.7/developing/api/http/store-search-schema.md
+++ b/content/riak/kv/2.0.7/developing/api/http/store-search-schema.md
@@ -44,7 +44,7 @@ curl -XPUT http://localhost:8098/search/schema/my_custom_schema \
* `400 Bad Request` --- The schema cannot be created because there is
something wrong with the schema itself, e.g. an XML formatting error
- that makes Riak Search unable to parse the schema
+ that makes Riak search unable to parse the schema
* `409 Conflict` --- The schema cannot be created because there is
already a schema with that name
* `503 Service Unavailable` --- The request timed out internally
diff --git a/content/riak/kv/2.0.7/developing/api/protocol-buffers/yz-index-get.md b/content/riak/kv/2.0.7/developing/api/protocol-buffers/yz-index-get.md
index fc0f28b812..b12958b0ae 100644
--- a/content/riak/kv/2.0.7/developing/api/protocol-buffers/yz-index-get.md
+++ b/content/riak/kv/2.0.7/developing/api/protocol-buffers/yz-index-get.md
@@ -15,7 +15,7 @@ aliases:
- /riak/kv/2.0.7/dev/references/protocol-buffers/yz-index-get
---
-Retrieve a search index from Riak Search.
+Retrieve a search index from Riak search.
## Request
diff --git a/content/riak/kv/2.0.7/developing/api/protocol-buffers/yz-schema-get.md b/content/riak/kv/2.0.7/developing/api/protocol-buffers/yz-schema-get.md
index 6c4b89046a..5a834bbce6 100644
--- a/content/riak/kv/2.0.7/developing/api/protocol-buffers/yz-schema-get.md
+++ b/content/riak/kv/2.0.7/developing/api/protocol-buffers/yz-schema-get.md
@@ -15,7 +15,7 @@ aliases:
- /riak/kv/2.0.7/dev/references/protocol-buffers/yz-schema-get
---
-Fetch a [search schema](/riak/kv/2.0.7/developing/usage/search-schemas) from Riak Search.
+Fetch a [search schema](/riak/kv/2.0.7/developing/usage/search-schemas) from Riak search.
## Request
diff --git a/content/riak/kv/2.0.7/developing/app-guide.md b/content/riak/kv/2.0.7/developing/app-guide.md
index 1c69e93e3a..fedd7a6ed0 100644
--- a/content/riak/kv/2.0.7/developing/app-guide.md
+++ b/content/riak/kv/2.0.7/developing/app-guide.md
@@ -147,22 +147,22 @@ well as relevant links to Basho documentation.
## Search
-Riak Search provides you with [Apache
+Riak search provides you with [Apache
Solr](http://lucene.apache.org/solr/)-powered full-text indexing and
querying on top of the scalability, fault tolerance, and operational
-simplicity of Riak. Our motto for Riak Search: **Write it like Riak.
+simplicity of Riak. Our motto for Riak search: **Write it like Riak.
Query it like Solr**. That is, you can store objects in Riak [like normal][usage create objects] and run full-text queries on those objects later on
using the Solr API.
-* [Using Search][usage search] --- Getting started with Riak Search
+* [Using Search][usage search] --- Getting started with Riak search
* [Search Details][use ref search] --- A detailed overview of the concepts and design
- consideration behind Riak Search
+ considerations behind Riak search
* [Search Schema][usage search schema] --- How to create custom schemas for extracting data
- from Riak Search
+ from Riak search
### When to Use Search
-* **When you need a rich querying API** --- Riak Search gives you access
+* **When you need a rich querying API** --- Riak search gives you access
to the entirety of [Solr](http://lucene.apache.org/solr/)'s extremely
broad API, which enables you to query on the basis of wildcards,
strings, booleans, geolocation, ranges, language-specific fulltext,
@@ -214,7 +214,7 @@ own.
> **Note**:
>
-> Riak Data Types can be used in conjunction with Riak Search,
+> Riak Data Types can be used in conjunction with Riak search,
meaning that the data stored in counters, sets, and maps can be indexed
and searched just like any other data in Riak. Documentation on Data
Types and Search is coming soon.
@@ -278,13 +278,13 @@ or you can write and run your own MapReduce jobs in
### When Not to Use MapReduce
* **When another Riak feature will do** --- Before even considering
- using MapReduce, you should thoroughly investigate [Riak Search][usage search] or [secondary indexes][usage 2i] as possible
+ using MapReduce, you should thoroughly investigate [Riak search][usage search] or [secondary indexes][usage 2i] as possible
solutions to your needs.
In general, you should not think of MapReduce as, for example, Hadoop
within Riak. While it can be useful for certain types of
non-primary-key-based queries, it is neither a "Big Data" processing
-tool nor an indexing mechanism nor a replacement for [Riak Search][usage search]. If you do need a tool like Hadoop or Apache Spark, you should
+tool nor an indexing mechanism nor a replacement for [Riak search][usage search]. If you do need a tool like Hadoop or Apache Spark, you should
consider using Riak in conjunction with a more suitable data processing
tool.
@@ -295,7 +295,7 @@ following problem: how do I know which keys I should look for? Secondary
indexes (2i) provide a solution to this problem, enabling you to tag
objects with either binary or integer metadata and then query Riak for
all of the keys that share specific tags. 2i is especially useful if
-you're storing binary data that is opaque to features like [Riak Search][usage search].
+you're storing binary data that is opaque to features like [Riak search][usage search].
* [Using Secondary Indexes][usage 2i] --- A general guide to using 2i, along
with code samples and information on 2i features like pagination,
@@ -323,7 +323,7 @@ you're storing binary data that is opaque to features like [Riak Search][usage s
One thing to always bear in mind is that Riak enables you to mix and
match a wide variety of approaches in a single cluster. You can use
basic CRUD operations for some of your data, index some of your data to
-be queried by Riak Search, use Riak Data Types for another subset, etc.
+be queried by Riak search, use Riak Data Types for another subset, etc.
You are always free to use a wide array of Riak features---or you can
use none at all and stick to key/value operations.
diff --git a/content/riak/kv/2.0.7/developing/app-guide/advanced-mapreduce.md b/content/riak/kv/2.0.7/developing/app-guide/advanced-mapreduce.md
index 4a39e3b935..472d97d60b 100644
--- a/content/riak/kv/2.0.7/developing/app-guide/advanced-mapreduce.md
+++ b/content/riak/kv/2.0.7/developing/app-guide/advanced-mapreduce.md
@@ -74,7 +74,7 @@ MapReduce should generally be treated as a fallback rather than a
standard part of an application. There are often ways to model data
such that dynamic queries become single key retrievals, which are
dramatically faster and more reliable in Riak, and tools such as Riak
-Search and 2i are simpler to use and may place less strain on a
+search and 2i are simpler to use and may place less strain on a
cluster.
### R=1
diff --git a/content/riak/kv/2.0.7/developing/app-guide/strong-consistency.md b/content/riak/kv/2.0.7/developing/app-guide/strong-consistency.md
index f52908f325..bbe38f6c79 100644
--- a/content/riak/kv/2.0.7/developing/app-guide/strong-consistency.md
+++ b/content/riak/kv/2.0.7/developing/app-guide/strong-consistency.md
@@ -37,7 +37,7 @@ aliases:
> **Please Note:**
>
-> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
In versions 2.0 and later, Riak allows you to create buckets that
provide [strong consistency][use ref strong consistency] guarantees for the data stored within
diff --git a/content/riak/kv/2.0.7/developing/data-modeling.md b/content/riak/kv/2.0.7/developing/data-modeling.md
index 97a7840e5b..b31e9f36a0 100644
--- a/content/riak/kv/2.0.7/developing/data-modeling.md
+++ b/content/riak/kv/2.0.7/developing/data-modeling.md
@@ -140,7 +140,7 @@ For storing log data from different systems, you could use unique
buckets for each system (e.g. `system1_log_data`, `system2_log_data`,
etc.) and write associated logs to the corresponding buckets. To
analyze that data, you could use Riak's MapReduce system for aggregation
-tasks, such as summing the counts of records for a date or Riak Search
+tasks, such as summing the counts of records for a date or Riak search
for more robust, text-based queries.
### Log Data Complex Case
@@ -224,7 +224,7 @@ For simple retrieval of a specific account, a user ID (plus perhaps a
secondary index on a username or email) is enough. If you foresee the
need to make queries on additional user attributes (e.g. creation time,
user type, or region), plan ahead and either set up additional secondary
-indexes or consider using [Riak Search](/riak/kv/2.0.7/developing/usage/search/) to index the JSON
+indexes or consider using [Riak search](/riak/kv/2.0.7/developing/usage/search/) to index the JSON
contents of the user account.
### User Accounts Community Examples
@@ -310,7 +310,7 @@ In Riak, you can store content of any kind, from HTML files to plain
text to JSON or XML or another document type entirely. Keep in mind that
data in Riak is opaque, with the exception of [Riak Data Types](/riak/kv/2.0.7/developing/data-types),
and so Riak won't "know" about the object unless it is indexed
-[using Riak Search](/riak/kv/2.0.7/developing/usage/search/) or [using secondary indexes](/riak/kv/2.0.7/developing/usage/secondary-indexes/).
+[using Riak search](/riak/kv/2.0.7/developing/usage/search/) or [using secondary indexes](/riak/kv/2.0.7/developing/usage/secondary-indexes/).
### Articles et al Complex Case
@@ -329,7 +329,7 @@ with comments would require your application to call from the posts
and comments buckets to assemble the view.
Other possible cases may involve performing operations on content beyond
-key/value pairs. [Riak Search](/riak/kv/2.0.7/developing/usage/search/) is recommended for use cases
+key/value pairs. [Riak search](/riak/kv/2.0.7/developing/usage/search/) is recommended for use cases
involving full-text search. For lighter-weight querying,
[using secondary indexes](/riak/kv/2.0.7/developing/usage/secondary-indexes/) \(2i) enables you to add metadata to objects to
either query for exact matches or to perform range queries. 2i also
diff --git a/content/riak/kv/2.0.7/developing/data-types.md b/content/riak/kv/2.0.7/developing/data-types.md
index f40edf4bac..bcd384ac27 100644
--- a/content/riak/kv/2.0.7/developing/data-types.md
+++ b/content/riak/kv/2.0.7/developing/data-types.md
@@ -259,7 +259,7 @@ All the examples use the bucket type names from above (`counters`, `sets`, and `
Riak data types can be searched like any other object, but with the
added benefit that your data type is indexed as a different type by Solr,
-the search platform behind Riak Search.
+the search platform behind Riak search.
In our Search documentation we offer a [full tutorial](../usage/searching-data-types) as well as a variety of [examples](../usage/search/#data-types-and-search-examples), including code
samples from each of our official client libraries.
diff --git a/content/riak/kv/2.0.7/developing/usage.md b/content/riak/kv/2.0.7/developing/usage.md
index 7dfbb41c74..05cbd7d715 100644
--- a/content/riak/kv/2.0.7/developing/usage.md
+++ b/content/riak/kv/2.0.7/developing/usage.md
@@ -107,7 +107,7 @@ Tutorial on using Riak KV as a document store.
#### [Custom Extractors](./custom-extractors)
-Details on creating and registering custom extractors with Riak Search.
+Details on creating and registering custom extractors with Riak search.
[Learn More >>](./custom-extractors)
diff --git a/content/riak/kv/2.0.7/developing/usage/custom-extractors.md b/content/riak/kv/2.0.7/developing/usage/custom-extractors.md
index 7d8cf675df..f9e366e592 100644
--- a/content/riak/kv/2.0.7/developing/usage/custom-extractors.md
+++ b/content/riak/kv/2.0.7/developing/usage/custom-extractors.md
@@ -15,8 +15,8 @@ aliases:
- /riak/kv/2.0.7/dev/search/custom-extractors
---
-Solr, and by extension Riak Search, has default extractors for a wide
-variety of data types, including JSON, XML, and plaintext. Riak Search
+Solr, and by extension Riak search, has default extractors for a wide
+variety of data types, including JSON, XML, and plaintext. Riak search
ships with the following extractors:
Content Type | Erlang Module
@@ -30,7 +30,7 @@ No specified type | `yz_noop_extractor`
There are also built-in extractors for [Riak Data Types](/riak/kv/2.0.7/developing/usage/searching-data-types).
If you're working with a data format that does not have a default Solr
-extractor, you can create your own and register it with Riak Search.
+extractor, you can create your own and register it with Riak search.
We'll show you how to do so by way of example.
## The Extractor Interface
@@ -195,7 +195,7 @@ extractor has been successfully registered.
## Verifying Our Custom Extractor
-Now that Riak Search knows how to decode and extract HTTP header packet
+Now that Riak search knows how to decode and extract HTTP header packet
data, let's store some in Riak and then query it. We'll put the example
packet data from above in a `google_packet.bin` file. Then, we'll `PUT`
that binary to Riak's `/search/extract` endpoint:
diff --git a/content/riak/kv/2.0.7/developing/usage/document-store.md b/content/riak/kv/2.0.7/developing/usage/document-store.md
index 07fc0cd496..81a2eb9eb3 100644
--- a/content/riak/kv/2.0.7/developing/usage/document-store.md
+++ b/content/riak/kv/2.0.7/developing/usage/document-store.md
@@ -16,18 +16,18 @@ aliases:
---
Although Riak wasn't explicitly created as a document store, two
-features recently added to Riak---[Riak Search](/riak/kv/2.0.7/developing/usage/search/) and [Riak Data Types](/riak/kv/2.0.7/developing/data-types/)---make it possible to use Riak as a
+features recently added to Riak---[Riak search](/riak/kv/2.0.7/developing/usage/search/) and [Riak Data Types](/riak/kv/2.0.7/developing/data-types/)---make it possible to use Riak as a
highly scalable document store with rich querying capabilities. In this
tutorial, we'll build a basic implementation of a document store using
[Riak maps](/riak/kv/2.0.7/developing/data-types/#maps).
## Basic Approach
-Riak Search enables you to implement a document store in Riak in a
+Riak search enables you to implement a document store in Riak in a
variety of ways. You could, for example, store and query JSON objects or
XML and then retrieve them later via Solr queries. In this tutorial,
however, we will store data in [Riak maps](/riak/kv/2.0.7/developing/data-types/#maps),
-index that data using Riak Search, and then run Solr queries against
+index that data using Riak search, and then run Solr queries against
those stored objects.
You can think of these Search indexes as **collections**. Each indexed
@@ -65,7 +65,7 @@ Date posted | Register | Datetime
Whether the post is currently in draft form | Flag | Boolean
Before we start actually creating and storing blog posts, let's set up
-Riak Search with an appropriate index and schema.
+Riak search with an appropriate index and schema.
## Creating a Schema and Index
@@ -209,7 +209,7 @@ curl -XPUT $RIAK_HOST/search/index/blog_posts \
Collections are not a concept that is native to Riak but we can easily
mimic collections by thinking of a bucket type as a collection. When we
-associate a bucket type with a Riak Search index, all of the objects
+associate a bucket type with a Riak search index, all of the objects
stored in any bucket of that bucket type will be queryable on the basis
of that one index. For this tutorial, we'll create a bucket type called
`cms` and think of that as a collection. We could also restrict our
diff --git a/content/riak/kv/2.0.7/developing/usage/search-schemas.md b/content/riak/kv/2.0.7/developing/usage/search-schemas.md
index 61890d332e..5823e3dee0 100644
--- a/content/riak/kv/2.0.7/developing/usage/search-schemas.md
+++ b/content/riak/kv/2.0.7/developing/usage/search-schemas.md
@@ -19,21 +19,21 @@ aliases:
> **Note on Search 2.0 vs. Legacy Search**
>
-> This document refers to the new Riak Search 2.0 with
+> This document refers to the new Riak search 2.0 with
[Solr](http://lucene.apache.org/solr/) integration (codenamed
-Yokozuna). For information about the deprecated Riak Search, visit [the old Using Riak Search docs](http://docs.basho.com/riak/1.4.10/dev/using/search/).
+Yokozuna). For information about the deprecated Riak search, visit [the old Using Riak search docs](http://docs.basho.com/riak/1.4.10/dev/using/search/).
-Riak Search is built for ease of use, allowing you to write values into
-Riak and query for values using Solr. Riak Search does a lot of work
+Riak search is built for ease of use, allowing you to write values into
+Riak and query for values using Solr. Riak search does a lot of work
under the hood to convert your values---plain text, JSON, XML, [Riak Data Types](/riak/kv/2.0.7/developing/data-types/), and [more](/riak/kv/2.0.7/developing/usage/custom-extractors)---into something that can be indexed and searched later.
Nonetheless, you must still instruct Riak/Solr how to index a value. Are
you providing an array of strings? An integer? A date? Is your text in
-English or Russian? You can provide such instructions to Riak Search by
+English or Russian? You can provide such instructions to Riak search by
defining a Solr **schema**.
## The Default Schema
-Riak Search comes bundled with a default schema named `_yz_default`. The
+Riak search comes bundled with a default schema named `_yz_default`. The
default schema covers a wide range of possible field types. You can find
the default schema [on GitHub](https://raw.github.com/basho/yokozuna/develop/priv/default_schema.xml).
While using the default schema provides an easy path to starting
@@ -47,7 +47,7 @@ amounts of disk space, so pay special attention to those indexes.
We'll show you how you can create custom schemas by way of example.
Let's say that you have already created a schema named `cartoons` in a
file named `cartoons.xml`. This would register the custom schema in Riak
-Search:
+search:
```java
import org.apache.commons.io.FileUtils;
@@ -123,11 +123,11 @@ curl -XPUT http://localhost:8098/search/schema/cartoons \
The first step in creating a custom schema is to define exactly what
fields you must index. Part of that step is understanding how Riak
-Search extractors function.
+search extractors function.
### Extractors
-In Riak Search, extractors are modules responsible for pulling out a
+In Riak search, extractors are modules responsible for pulling out a
list of fields and values from a Riak object. How this is achieved
depends on the object's content type, but the two common cases are JSON
and XML, which operate similarly. Our examples here will use JSON.
@@ -175,21 +175,21 @@ Solr schemas can be very complex, containing many types and analyzers.
Refer to the [Solr 4.7 reference
guide](http://archive.apache.org/dist/lucene/solr/ref-guide/apache-solr-ref-guide-4.7.pdf)
for a complete list. You should be aware, however, that there are a few
-fields that are required by Riak Search in order to properly distribute
+fields that are required by Riak search in order to properly distribute
an object across a [cluster][concept clusters]. These fields are all prefixed
with `_yz`, which stands for
[Yokozuna](https://github.com/basho/yokozuna), the original code name
-for Riak Search.
+for Riak search.
Below is a bare minimum skeleton Solr Schema. It won't do much for you
-other than allow Riak Search to properly manage your stored objects.
+other than allow Riak search to properly manage your stored objects.
```xml
-
+
@@ -210,14 +210,14 @@ other than allow Riak Search to properly manage your stored objects.
```
-If you're missing any of the above fields, Riak Search will reject your
+If you're missing any of the above fields, Riak search will reject your
custom schema. The value for `` _must_ be `_yz_id`.
In the table below, you'll find a description of the various required
fields. You'll rarely need to use any fields other than `_yz_rt` (bucket
type), `_yz_rb` (bucket) and `_yz_rk` (Riak key). On occasion, `_yz_err`
can be helpful if you suspect that your extractors are failing.
-Malformed JSON or XML will cause Riak Search to index a key and set
+Malformed JSON or XML will cause Riak search to index a key and set
`_yz_err` to 1, allowing you to reindex with proper values later.
Field | Name | Description
@@ -262,7 +262,7 @@ field, you also must set `multiValued` to `true`.
-
+
diff --git a/content/riak/kv/2.0.7/developing/usage/search.md b/content/riak/kv/2.0.7/developing/usage/search.md
index 94cbf5d907..ffd3b5447e 100644
--- a/content/riak/kv/2.0.7/developing/usage/search.md
+++ b/content/riak/kv/2.0.7/developing/usage/search.md
@@ -19,9 +19,9 @@ aliases:
## Setup
-Riak Search 2.0 is an integration of Solr (for indexing and querying)
+Riak search 2.0 is an integration of Solr (for indexing and querying)
and Riak (for storage and distribution). There are a few points of
-interest that a user of Riak Search will have to keep in mind in order
+interest that a user of Riak search will have to keep in mind in order
to properly store and later query for values.
1. **Schemas** explain to Solr how to index fields
@@ -29,7 +29,7 @@ to properly store and later query for values.
3. **Bucket-index association** signals to Riak *when* to index values
(this also includes bucket type-index association)
-Riak Search must first be configured with a Solr schema so that Solr
+Riak search must first be configured with a Solr schema so that Solr
knows how to index value fields. If you don't define one, you're
provided with a default schema named `_yz_default`, which can be found
[on
@@ -38,7 +38,7 @@ GitHub](https://raw.githubusercontent.com/basho/yokozuna/develop/priv/default_sc
The examples in this document will presume the default. You can read
more about creating custom schemas in [Search Schema][usage search schema], which you'll likely want to use in a production environment.
-Next, you must create a named Solr index through Riak Search. This index
+Next, you must create a named Solr index through Riak search. This index
represents a collection of similar data that you connect with to perform
queries. When creating an index, you can optionally provide a schema. If
you do not, the default schema will be used. Here we'll `curl` create an
@@ -246,7 +246,7 @@ More information can be found in the [Solr
documentation](http://wiki.apache.org/solr/SolrPerformanceFactors).
With a Solr schema, index, and association in place (and possibly a
-security setup as well), we're ready to start using Riak Search. First,
+security setup as well), we're ready to start using Riak search. First,
populate the `cat` bucket with values, in this case information about
four cats: Liono, Cheetara, Snarf, and Panthro.
@@ -495,12 +495,12 @@ curl -XPUT $RIAK_HOST/types/animals/buckets/cats/keys/panthro \
```
If you've used Riak before, you may have noticed that this is no
-different from storing values without Riak Search. That's because we
-designed Riak Search with the following design goal in mind:
+different from storing values without Riak search. That's because we
+designed Riak search with the following design goal in mind:
#### Write it like Riak, query it like Solr
-But how does Riak Search know how to index values, given that you can
+But how does Riak search know how to index values, given that you can
store opaque values in Riak? For that, we employ extractors.
## Extractors
@@ -510,7 +510,7 @@ content type and convert it into a list of fields that can be indexed by
Solr. This is done transparently and automatically as part of the
indexing process. You can even create your own [custom extractors](/riak/kv/2.0.7/developing/usage/custom-extractors).
-Our current example uses the JSON extractor, but Riak Search also
+Our current example uses the JSON extractor, but Riak search also
extracts indexable fields from the following content types:
* JSON (`application/json`)
@@ -560,7 +560,7 @@ one of the default types. A full tutorial can be found in [Custom Search Extract
### Automatic Fields
-When a Riak object is indexed, Riak Search automatically inserts a few
+When a Riak object is indexed, Riak search automatically inserts a few
extra fields as well. These are necessary for a variety of technical
reasons, and for the most part you don't need to think about them.
However, there are a few fields which you may find useful:
@@ -1248,7 +1248,7 @@ curl "$RIAK_HOST/search/query/famous?wt=json&q=*:*&start=$START&rows=$ROWS_PER_P
### Pagination Warning
-Distributed pagination in Riak Search cannot be used reliably when
+Distributed pagination in Riak search cannot be used reliably when
sorting on fields that can have different values per replica of the same
object, namely `score` and `_yz_id`. In the case of sorting by these
fields, you may receive redundant objects. In the case of `score`, the
@@ -1272,14 +1272,14 @@ fix this shortcoming in a future version of Riak.
### MapReduce
-Riak Search allows for piping search results as inputs for
+Riak search allows for piping search results as inputs for
[MapReduce](/riak/kv/2.0.7/developing/usage/mapreduce/) jobs. This is a useful cross-section for
performing post-calculations of results or aggregations of ad-hoc
-queries. The Riak Search MapReduce integration works similarly to
+queries. The Riak search MapReduce integration works similarly to
regular MapReduce, with the notable exception that your input is not a
bucket, but rather index and query arguments to the `yokozuna` module
and `mapred_search` function (an Erlang `module:function` pair that adds
-the Riak Search hook to MapReduce).
+the Riak search hook to MapReduce).
```json
{
diff --git a/content/riak/kv/2.0.7/developing/usage/secondary-indexes.md b/content/riak/kv/2.0.7/developing/usage/secondary-indexes.md
index 4d2f0d96fa..41f58e60db 100644
--- a/content/riak/kv/2.0.7/developing/usage/secondary-indexes.md
+++ b/content/riak/kv/2.0.7/developing/usage/secondary-indexes.md
@@ -19,12 +19,12 @@ aliases:
[plan backend memory]: /riak/kv/2.0.7/setup/planning/backend/memory
[use ref strong consistency]: /riak/kv/2.0.7/using/reference/strong-consistency
-> **Note: Riak Search preferred for querying**
+> **Note: Riak search preferred for querying**
>
> If you're interested in non-primary-key-based querying in Riak, i.e. if
you're looking to go beyond straightforward K/V operations, we now
-recommend [Riak Search](/riak/kv/2.0.7/developing/usage/search/) rather than secondary indexes for
-a variety of reasons. Most importantly, Riak Search has a far more
+recommend [Riak search](/riak/kv/2.0.7/developing/usage/search/) rather than secondary indexes for
+a variety of reasons. Most importantly, Riak search has a far more
capacious querying API and can be used with all of Riak's storage
backends.
@@ -37,7 +37,7 @@ Secondary indexes can be either a binary or string, such as
`sensor_1_data` or `admin_user` or `click_event`, or an integer, such as
`99` or `141121`.
-[Riak Search](/riak/kv/2.0.7/developing/usage/search/) serves analogous purposes but is quite
+[Riak search](/riak/kv/2.0.7/developing/usage/search/) serves analogous purposes but is quite
different because it parses key/value data itself and builds indexes on
the basis of Solr schemas.
@@ -75,7 +75,7 @@ you to discover them later. Indexing enables you to tag those objects
and find all objects with the same tag in a specified bucket later on.
2i is thus recommended when your use case requires an easy-to-use search
-mechanism that does not require a schema (as does [Riak Search](/riak/kv/2.0.7/using/reference/search/#schemas)) and a basic query interface, i.e. an interface that
+mechanism that does not require a schema (as does [Riak search](/riak/kv/2.0.7/using/reference/search/#schemas)) and a basic query interface, i.e. an interface that
enables an application to tell Riak things like "fetch all objects
tagged with the string `Milwaukee_Bucks`" or "fetch all objects tagged
with numbers between 1500 and 1509."
@@ -89,7 +89,7 @@ piggybacks off of read-repair.
* If your ring size exceeds 512 partitions, 2i can cause performance
issues in large clusters.
* When you need more than the exact match and range searches that 2i
- supports. If that's the case, we recommend checking out [Riak Search](/riak/kv/2.0.7/developing/usage/search/).
+ supports. If that's the case, we recommend checking out [Riak search](/riak/kv/2.0.7/developing/usage/search/).
* When you want to use composite queries. A query like
`last_name=zezeski AND state=MD` would have to be split into two
queries and the results merged (or it would need to involve
diff --git a/content/riak/kv/2.0.7/introduction.md b/content/riak/kv/2.0.7/introduction.md
index a624d3ee7f..cc08cbbbcb 100644
--- a/content/riak/kv/2.0.7/introduction.md
+++ b/content/riak/kv/2.0.7/introduction.md
@@ -30,10 +30,10 @@ that all of the new features listed below are optional:
* **Riak Data Types** --- Riak's new CRDT-based [Data Types](/riak/kv/2.0.7/developing/data-types) can
simplify modeling data in Riak, but are only used in buckets
explicitly configured to use them.
-* **Strong Consistency, Riak Security, and the New Riak Search** ---
+* **Strong Consistency, Riak Security, and the New Riak search** ---
These are subsystems in Riak that must be explicitly turned on to
work. If not turned on, they will have no impact on performance.
- Furthermore, the older Riak Search will continue to be included with
+ Furthermore, the older Riak search will continue to be included with
Riak.
* **Security** --- [Authentication and authorization](/riak/kv/2.0.7/using/security/basics) can be enabled
or disabled at any time.
@@ -98,21 +98,21 @@ Brown](https://github.com/russelldb).
## Riak Search 2.0 (codename: Yokozuna)
-Riak Search 2.0 is a complete, top-to-bottom replacement for Riak
-Search, integrating Riak with [Apache Solr](https://lucene.apache.org/solr/)'s full-text search capabilities and supporting Solr's client query APIs.
+Riak search 2.0 is a complete, top-to-bottom replacement for Riak
+search, integrating Riak with [Apache Solr](https://lucene.apache.org/solr/)'s full-text search capabilities and supporting Solr's client query APIs.
#### Relevant Docs
* [Using Search](/riak/kv/2.0.7/developing/usage/search) provides an overview of how to use the new
- Riak Search.
+ Riak search.
* [Search Schema](/riak/kv/2.0.7/developing/usage/search-schemas) shows you how to create and manage custom search
schemas.
* [Search Details](/riak/kv/2.0.7/using/reference/search) provides an in-depth look at the design
- considerations that went into the new Riak Search.
+ considerations that went into the new Riak search.
#### Video
-[Riak Search 2.0](https://www.youtube.com/watch?v=-c1eynVLNMo) by Basho
+[Riak search 2.0](https://www.youtube.com/watch?v=-c1eynVLNMo) by Basho
engineer and documentarian [Eric Redmond](https://github.com/coderoshi).
## Strong Consistency
@@ -145,7 +145,7 @@ check out [part 2](https://www.youtube.com/watch?v=gXJxbhca5Xg).
Riak 2.0 enables you to manage:
* **Authorization** to perform specific tasks, from GETs and PUTs to
-running MapReduce jobs to administering Riak Search.
+running MapReduce jobs to administering Riak search.
* **Authentication** of Riak clients seeking access to Riak.
@@ -317,7 +317,7 @@ another. Incompatibilities are marked with a
-**†** The data indexed by Riak Search can be
+**†** The data indexed by Riak search can be
stored in a strongly consistent fashion, but indexes themselves are
eventually consistent
**‡** If secondary indexes are attached to an
diff --git a/content/riak/kv/2.0.7/learn/concepts/strong-consistency.md b/content/riak/kv/2.0.7/learn/concepts/strong-consistency.md
index 8bccc858c2..e878fc28a7 100644
--- a/content/riak/kv/2.0.7/learn/concepts/strong-consistency.md
+++ b/content/riak/kv/2.0.7/learn/concepts/strong-consistency.md
@@ -20,7 +20,7 @@ aliases:
Please Note:
-Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
Riak was originally designed as an [eventually consistent](/riak/kv/2.0.7/learn/concepts/eventual-consistency) system, fundamentally geared toward providing partition
diff --git a/content/riak/kv/2.0.7/learn/glossary.md b/content/riak/kv/2.0.7/learn/glossary.md
index 1c32701f7c..7af468def9 100644
--- a/content/riak/kv/2.0.7/learn/glossary.md
+++ b/content/riak/kv/2.0.7/learn/glossary.md
@@ -275,7 +275,7 @@ best described as "UNIX pipes for Riak."
## Riak Search
-Riak Search is a distributed, scalable, failure-tolerant, realtime,
+Riak search is a distributed, scalable, failure-tolerant, realtime,
full-text search engine integrating [Apache
Solr](https://lucene.apache.org/solr/) with Riak KV.
diff --git a/content/riak/kv/2.0.7/learn/use-cases.md b/content/riak/kv/2.0.7/learn/use-cases.md
index fe34d82008..7e0afd73ef 100644
--- a/content/riak/kv/2.0.7/learn/use-cases.md
+++ b/content/riak/kv/2.0.7/learn/use-cases.md
@@ -153,7 +153,7 @@ For storing log data from different systems, you could use unique
buckets for each system (e.g. `system1_log_data`, `system2_log_data`,
etc.) and write associated logs to the corresponding buckets. To
analyze that data, you could use Riak's MapReduce system for aggregation
-tasks, such as summing the counts of records for a date or Riak Search
+tasks, such as summing the counts of records for a date or Riak search
for more robust, text-based queries.
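The per-system bucket convention above can be sketched as a simple key scheme. A hedged sketch (host, port, system names, and timestamps below are illustrative; `/buckets/<bucket>/keys/<key>` is Riak's HTTP object path):

```shell
# Hypothetical helper: build the object URL for one log record. Each system
# writes to its own bucket, and keys encode the record timestamp so that
# date-scoped aggregation jobs can target a narrow range.
base=http://localhost:8098
log_url() {   # log_url SYSTEM TIMESTAMP
  printf '%s/buckets/%s_log_data/keys/%s\n' "$base" "$1" "$2"
}

log_url system1 2016-06-27T12:00:00Z
# → http://localhost:8098/buckets/system1_log_data/keys/2016-06-27T12:00:00Z
```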
### Log Data Complex Case
@@ -237,7 +237,7 @@ For simple retrieval of a specific account, a user ID (plus perhaps a
secondary index on a username or email) is enough. If you foresee the
need to make queries on additional user attributes (e.g. creation time,
user type, or region), plan ahead and either set up additional secondary
-indexes or consider using [Riak Search][usage search] to index the JSON
+indexes or consider using [Riak search][usage search] to index the JSON
contents of the user account.
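As a sketch of the trade-off above (host, bucket, index, and field names are all hypothetical): a 2i lookup and a search query hit different HTTP endpoints, and only the search endpoint accepts full Solr query syntax:

```shell
# Assumed local node; /buckets/.../index/... is the 2i endpoint and
# /search/query/<index> the search endpoint in Riak 2.0's HTTP API.
base=http://localhost:8098
twoi_url()   { printf '%s/buckets/%s/index/%s_bin/%s\n' "$base" "$1" "$2" "$3"; }
search_url() { printf '%s/search/query/%s?wt=json&q=%s\n' "$base" "$1" "$2"; }

twoi_url users email alice@example.com   # exact-match lookup only
search_url users_idx 'email_s:alice*'    # full Solr query syntax
```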
### User Accounts Community Examples
@@ -323,7 +323,7 @@ In Riak, you can store content of any kind, from HTML files to plain
text to JSON or XML or another document type entirely. Keep in mind that
data in Riak is opaque, with the exception of [Riak Data Types](/riak/kv/2.0.7/developing/data-types),
and so Riak won't "know" about the object unless it is indexed
-[using Riak Search][usage search] or [using secondary indexes][usage secondary-indexes].
+[using Riak search][usage search] or [using secondary indexes][usage secondary-indexes].
### Articles et al Complex Case
@@ -342,7 +342,7 @@ with comments would require your application to call from the posts
and comments buckets to assemble the view.
Other possible cases may involve performing operations on content beyond
-key/value pairs. [Riak Search][usage search] is recommended for use cases
+key/value pairs. [Riak search][usage search] is recommended for use cases
involving full-text search. For lighter-weight querying,
[using secondary indexes][usage secondary-indexes] \(2i) enables you to add metadata to objects to
either query for exact matches or to perform range queries. 2i also
diff --git a/content/riak/kv/2.0.7/release-notes.md b/content/riak/kv/2.0.7/release-notes.md
index a7f128949a..20674e8744 100644
--- a/content/riak/kv/2.0.7/release-notes.md
+++ b/content/riak/kv/2.0.7/release-notes.md
@@ -30,9 +30,9 @@ This release includes fixes for two product advisories:
## New Features
-* We've introduced a new batching system for Riak Search so indexing calls are no longer made synchronously when data is written to Riak. This allows Solr to process the data in chunks and Riak to move forward accepting new work at the vnode level without waiting for the call to Solr to happen. Out-of-the-box performance should be similar to Riak 2.0.6 with Search enabled. However, additional configuration options (see "Cuttlefish configurations…" below) will allow you to set the batching parameters based on your needs and have, in certain cases, led to significantly higher write throughput to Solr.
+* We've introduced a new batching system for Riak search so indexing calls are no longer made synchronously when data is written to Riak. This allows Solr to process the data in chunks and Riak to move forward accepting new work at the vnode level without waiting for the call to Solr to happen. Out-of-the-box performance should be similar to Riak 2.0.6 with Search enabled. However, additional configuration options (see "Cuttlefish configurations…" below) will allow you to set the batching parameters based on your needs and have, in certain cases, led to significantly higher write throughput to Solr.
* [[PR #648](https://github.com/basho/yokozuna/pull/648)]
-* Cuttlefish configurations have been updated to support the Riak Search batching updates. These configs are tunable via the riak.conf file. (Note: Changes to this file require a restart of Riak). You can control the behavior of batching through various [new Cuttlefish parameters](http://docs.basho.com/riak/kv/2.1.4/configuring/reference/#search). These parameters guide Cuttlefish operation, Solr integration, and statistics on Riak performance.
+* Cuttlefish configurations have been updated to support the Riak search batching updates. These configs are tunable via the riak.conf file. (Note: Changes to this file require a restart of Riak). You can control the behavior of batching through various [new Cuttlefish parameters](http://docs.basho.com/riak/kv/2.1.4/configuring/reference/#search). These parameters guide Cuttlefish operation, Solr integration, and statistics on Riak performance.
* [[PR #614](https://github.com/basho/yokozuna/pull/614)]
* Our Erlang/OTP has been updated to version R16B02_basho10 and included in this release. This update includes bugfixes and improvements for ERTS, as well as bugfixes for SSL.
* You can read the complete release notes for Erlang/OTP [here](https://github.com/basho/otp/blob/basho-otp-16/BASHO-RELEASES.md).
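A minimal sketch of the batching knobs (parameter names as documented in the Cuttlefish reference linked above; the values are illustrative, not recommendations). The fragment is written to a scratch file here; in practice these lines belong in riak.conf and take effect only after a node restart:

```shell
# Illustrative riak.conf fragment for search batching; tune per workload.
cat > riak.conf.batching <<'EOF'
search.queue.batch.minimum = 10
search.queue.batch.maximum = 500
search.queue.batch.flush_interval = 500ms
EOF
grep -c '^search.queue' riak.conf.batching   # sanity check: 3 settings written
```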
@@ -58,7 +58,7 @@ message was sent to the endpoint. This bug also caused mochiweb to return an err
* [[Issue #796](https://github.com/basho/riak/issues/796)/[PR #798](https://github.com/basho/riak/pull/798)] The default Solaris 10 version of awk doesn't support gsub, so we've switched to xpg4 awk (nawk) instead. The tar on Solaris 10 has no support for creating compressed tar.gz files, so the tar files will be piped into gzip instead. And, finally, non-bash (e.g. ksh) shells may not have support for single instances of double quotes nested in single quotes, so we have escaped nested double quotes.
* [[Issue #804](https://github.com/basho/riak_core/issues/804)/[exometer PR #67](https://github.com/Feuerlabs/exometer_core/pull/67), [exometer PR #10](https://github.com/basho/exometer_core/pull/10), & [PR #817](https://github.com/basho/riak_core/pull/817)] When a node contains 2 vnodes, and each of those vnodes reports a 0 value for a statistic, those statistics (in exometer) were being thrown out due to some other special case handling. That handling has now been moved to the one function that needs it, rather than the general-purpose `exometer_histogram:get_value` where it was originally coded.
* [[PR #1370](https://github.com/basho/riak_kv/pull/1370)] A race condition could cause small inaccuracies in the stats if two processes tried to update the data for the same index at the same time. Write operations are now synchronized via the `global:trans/3` function.
-* [[Issue #503](https://github.com/basho/yokozuna/issues/503)/[PR #528](https://github.com/basho/yokozuna/pull/528)] Riak Search limited the maximum size of a search query if you used the highlight or facet feature, so we added the ability to handle POSTs for search queries when given the content-type `application/x-www-form-urlencoded`. A `415` error is returned if another content-type is used.
+* [[Issue #503](https://github.com/basho/yokozuna/issues/503)/[PR #528](https://github.com/basho/yokozuna/pull/528)] Riak search limited the maximum size of a search query if you used the highlight or facet feature, so we added the ability to handle POSTs for search queries when given the content-type `application/x-www-form-urlencoded`. A `415` error is returned if another content-type is used.
* [[Issue #1178](https://github.com/basho/riak_kv/issues/1178)/[repl PR #742](https://github.com/basho/riak_repl/pull/742)] **This fix applies ONLY to Riak Enterprise.** `riak_kv_get_fsm:start_link` did not consistently link the caller to the new FSM process. This would cause the supervisor to end up with an endlessly growing list of workers, since it had no way of seeing when a worker died. These issues could cause extended shutdown times as the supervisor attempts to iterate through millions of dead PIDs. To fix this issue, the process is now started directly rather than via the supervisor API call. Since these processes are normally started under a sidejob, there is no reason to run them under a supervisor. **Note:** We recommend not disabling overload protection. If you use replication and completely disable overload protection, you may run into issues.
* [[PR #830](https://github.com/basho/riak_core/pull/830)] Several bugs were found with hash trees that, in rare cases, could cause AAE to fail to repair missing data.
* [[Bitcask PR #227](https://github.com/basho/bitcask/pull/227)] A try/after block has been added around the `hintfile_validate_loop/3` to keep descriptors from being leaked each time Bitcask is opened.
diff --git a/content/riak/kv/2.0.7/setup/installing/source/jvm.md b/content/riak/kv/2.0.7/setup/installing/source/jvm.md
index 7a1c2ef5a9..118cdef72e 100644
--- a/content/riak/kv/2.0.7/setup/installing/source/jvm.md
+++ b/content/riak/kv/2.0.7/setup/installing/source/jvm.md
@@ -21,10 +21,10 @@ aliases:
[usage search]: /riak/kv/2.0.7/developing/usage/search
-If you are using [Riak Search 2.0][usage search], codename Yokozuna,
+If you are using [Riak search 2.0][usage search], codename Yokozuna,
you will need to install **Java 1.6 or later** to run [Apache
Solr](https://lucene.apache.org/solr/), the search platform that powers
-Riak Search.
+Riak search.
We recommend using Oracle's [JDK
7u25](http://www.oracle.com/technetwork/java/javase/7u25-relnotes-1955741.html).
@@ -35,7 +35,7 @@ page](http://www.oracle.com/technetwork/java/javase/documentation/index.html).
## Installing Solr on OS X
-If you're using Riak Search on Mac OS X, you may see the following
+If you're using Riak search on Mac OS X, you may see the following
error:
```java
diff --git a/content/riak/kv/2.0.7/setup/planning/backend/bitcask.md b/content/riak/kv/2.0.7/setup/planning/backend/bitcask.md
index b1b7132ad1..b320fe9cb1 100644
--- a/content/riak/kv/2.0.7/setup/planning/backend/bitcask.md
+++ b/content/riak/kv/2.0.7/setup/planning/backend/bitcask.md
@@ -751,7 +751,7 @@ bitcask.expiry.grace_time = 1h
#### Automatic expiration and Riak Search
-If you are using [Riak Search][usage search] in conjunction with
+If you are using [Riak search][usage search] in conjunction with
Bitcask, please be aware that automatic expiry does not apply to [Search Indexes](../../../../developing/usage/search). If objects are indexed using Search,
those objects can be expired by Bitcask yet still registered in Search
indexes, which means that Search queries may return keys that no longer
diff --git a/content/riak/kv/2.0.7/setup/upgrading/checklist.md b/content/riak/kv/2.0.7/setup/upgrading/checklist.md
index af90ef82fa..1f3adef29f 100644
--- a/content/riak/kv/2.0.7/setup/upgrading/checklist.md
+++ b/content/riak/kv/2.0.7/setup/upgrading/checklist.md
@@ -89,7 +89,7 @@ We have compiled these considerations and questions into separate categories for
place if `allow_mult` is set to `true`?
- Have you carefully weighed the [consistency trade-offs][concept eventual consistency] that must be made if `allow_mult` is set to `false`?
- Are all of your [apps replication properties][apps replication properties] configured correctly and uniformly across the cluster?
- - If you are using [Riak Search][usage search], is it enabled on all
+ - If you are using [Riak search][usage search], is it enabled on all
nodes? If you are not, has it been disabled on all nodes?
- If you are using [strong consistency][concept strong consistency] for some or all of your
data:
diff --git a/content/riak/kv/2.0.7/setup/upgrading/search.md b/content/riak/kv/2.0.7/setup/upgrading/search.md
index dc141323c0..5d800e09a5 100644
--- a/content/riak/kv/2.0.7/setup/upgrading/search.md
+++ b/content/riak/kv/2.0.7/setup/upgrading/search.md
@@ -16,7 +16,7 @@ version_history:
If you're using Search in a version of Riak prior to 2.0 (1.3.0 to
1.4.x), you should follow these steps to migrate your search indexes
-from the legacy `merge_index` to the new Solr-backed ([Yokozuna](../../../using/reference/search) indexes. The legacy version of Riak Search is now deprecated
+from the legacy `merge_index` to the new Solr-backed ([Yokozuna](../../../using/reference/search)) indexes. The legacy version of Riak search is now deprecated
and does not support most new 2.0 features (i.e. no [Riak Data Types](../../../developing/data-types), [bucket types](../../../using/reference/bucket-types), [strong consistency](../../../using/reference/strong-consistency), or [security](../../../using/security/)), so we highly recommend that you migrate.
And please note that the legacy `merge_index`-based search (aka legacy
@@ -32,7 +32,7 @@ these steps at a time when your cluster is relatively light on traffic,
i.e. _not_ the week before Christmas.
The main goal of a live migration is to stand up indexes in the new Riak
-Search that parallel the existing ones in legacy. New writes add entries
+search that parallel the existing ones in legacy. New writes add entries
to both indexes while AAE adds entries in the new indexes for existing
data.
@@ -72,7 +72,7 @@ algorithm is bad at getting rid of large index files.
## Steps to Upgrading
1. First, you'll perform a normal [rolling upgrade](../cluster).
- As you upgrade, enable `yokozuna` (the new Riak Search library) on
+ As you upgrade, enable `yokozuna` (the new Riak search library) on
each node. If you're still using `app.config` it's called `yokozuna`.
If you've chosen to upgrade to the new `riak.conf` config option, it's
called `search`.
@@ -220,7 +220,7 @@ have occurred on every node.
node you know that AAE has brought all new indexes up to date.
7. Next, call the following command that will give HTTP and PB query
-control to the new Riak Search.
+control to the new Riak search.
```curl
riak-admin search switch-to-new-search
@@ -244,7 +244,7 @@ buckets. This deactivates legacy Search.
-d '{"props":{"search": false}}'
```
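After step 8 it is worth confirming that the `search` property really came back `false`. A hedged sketch (the helper and sample JSON are hypothetical; in practice the JSON would come from `curl -s http://localhost:8098/buckets/<bucket>/props`):

```shell
# check_legacy_search_off: succeed if a bucket-props JSON document has the
# legacy `search` property set to false.
check_legacy_search_off() {
  printf '%s' "$1" | grep -q '"search"[[:space:]]*:[[:space:]]*false'
}

sample='{"props":{"name":"mybucket","search":false,"n_val":3}}'
check_legacy_search_off "$sample" && echo "legacy search disabled"
# → legacy search disabled
```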
-9. Disable the Riak Search process on each node by setting `riak_search`
+9. Disable the Riak search process on each node by setting `riak_search`
`enabled` to `false`.
```appconfig
diff --git a/content/riak/kv/2.0.7/setup/upgrading/version.md b/content/riak/kv/2.0.7/setup/upgrading/version.md
index d97ac19764..734e8923ad 100644
--- a/content/riak/kv/2.0.7/setup/upgrading/version.md
+++ b/content/riak/kv/2.0.7/setup/upgrading/version.md
@@ -36,7 +36,7 @@ was built with those features in mind. There are official
While we strongly recommend using the newest versions of these clients,
older versions will still work with Riak 2.0, with the drawback that
-those older clients will not able to take advantage of [new features](/riak/kv/2.0.7/introduction) like [data types](/riak/kv/2.0.7/developing/data-types) or the new [Riak Search](/riak/kv/2.0.7/using/reference/search).
+those older clients will not be able to take advantage of [new features](/riak/kv/2.0.7/introduction) like [data types](/riak/kv/2.0.7/developing/data-types) or the new [Riak search](/riak/kv/2.0.7/using/reference/search).
## Bucket Types
@@ -140,7 +140,7 @@ If you decide to upgrade to version 2.0, you can still downgrade your
cluster to an earlier version of Riak if you wish, _unless_ you perform
one of the following actions in your cluster:
-* Index data to be used in conjunction with the new [Riak Search](/riak/kv/2.0.7/using/reference/search).
+* Index data to be used in conjunction with the new [Riak search](/riak/kv/2.0.7/using/reference/search).
* Create _and_ activate one or more [bucket types](/riak/kv/2.0.7/using/reference/bucket-types/). By extension, you will not be able to downgrade your cluster if you have used the following features, both of which rely on bucket types:
- [Strong consistency](/riak/kv/2.0.7/using/reference/strong-consistency)
- [Riak Data Types](/riak/kv/2.0.7/developing/data-types)
@@ -208,7 +208,7 @@ default to a value of `15`, which can cause problems in some clusters.
## Upgrading Search
-Information on upgrading Riak Search to 2.0 can be found in our
+Information on upgrading Riak search to 2.0 can be found in our
[Search upgrade guide](/riak/kv/2.0.7/setup/upgrading/search).
## Migrating from Short Names
diff --git a/content/riak/kv/2.0.7/using/admin/riak-admin.md b/content/riak/kv/2.0.7/using/admin/riak-admin.md
index 1f475137b2..61cac02494 100644
--- a/content/riak/kv/2.0.7/using/admin/riak-admin.md
+++ b/content/riak/kv/2.0.7/using/admin/riak-admin.md
@@ -590,7 +590,7 @@ riak-admin repair-2i kill
## search
The search command provides sub-commands for various administrative
-work related to the new Riak Search.
+work related to the new Riak search.
```bash
riak-admin search
@@ -639,7 +639,7 @@ riak-admin search switch-to-new-search
```
Switch handling of the HTTP `/solr//select` resource and
-protocol buffer query messages from legacy Riak Search to new Search
+protocol buffer query messages from legacy Riak search to new Search
(Yokozuna).
## services
diff --git a/content/riak/kv/2.0.7/using/cluster-operations/active-anti-entropy.md b/content/riak/kv/2.0.7/using/cluster-operations/active-anti-entropy.md
index 481a3d3f3d..e07c3518b9 100644
--- a/content/riak/kv/2.0.7/using/cluster-operations/active-anti-entropy.md
+++ b/content/riak/kv/2.0.7/using/cluster-operations/active-anti-entropy.md
@@ -262,9 +262,9 @@ AAE-related background tasks, analogous to [open files limit](../../performance/
## AAE and Riak Search
Riak's AAE subsystem works to repair object inconsistencies both
-for normal key/value objects as well as data related to [Riak Search](../../../developing/usage/search). In particular, AAE acts on indexes stored in
+for normal key/value objects as well as data related to [Riak search](../../../developing/usage/search). In particular, AAE acts on indexes stored in
[Solr](http://lucene.apache.org/solr/), the search platform that drives
-Riak Search. Implementation details for AAE and Search can be found in
+Riak search. Implementation details for AAE and Search can be found in
the [Search Details](../../reference/search/#active-anti-entropy-aae)
documentation.
diff --git a/content/riak/kv/2.0.7/using/cluster-operations/inspecting-node.md b/content/riak/kv/2.0.7/using/cluster-operations/inspecting-node.md
index b86a7b0844..99f18d408e 100644
--- a/content/riak/kv/2.0.7/using/cluster-operations/inspecting-node.md
+++ b/content/riak/kv/2.0.7/using/cluster-operations/inspecting-node.md
@@ -297,7 +297,7 @@ Stat | Description
`erlydtl_version` | [ErlyDTL](http://github.com/erlydtl/erlydtl)
`riak_control_version` | [Riak Control](http://github.com/basho/riak_control)
`cluster_info_version` | [Cluster Information](http://github.com/basho/cluster_info)
-`riak_search_version` | [Riak Search](http://github.com/basho/riak_search)
+`riak_search_version` | [Riak search](http://github.com/basho/riak_search)
`merge_index_version` | [Merge Index](http://github.com/basho/merge_index)
`riak_kv_version` | [Riak KV](http://github.com/basho/riak_kv)
`sidejob_version` | [Sidejob](http://github.com/basho/sidejob)
@@ -326,17 +326,17 @@ Stat | Description
### Riak Search Statistics
-The following statistics related to Riak Search message queues are
+The following statistics related to Riak search message queues are
available.
Stat | Description
-----------------------------|---------------------------------------------------
-`riak_search_vnodeq_max` | Maximum number of unprocessed messages all virtual node (vnode) message queues in the Riak Search subsystem have received on this node in the last minute
-`riak_search_vnodeq_mean` | Mean number of unprocessed messages all vnode message queues in the Riak Search subsystem have received on this node in the last minute
-`riak_search_vnodeq_median` | Median number of unprocessed messages all vnode message queues in the Riak Search subsystem have received on this node in the last minute
-`riak_search_vnodeq_min` | Minimum number of unprocessed messages all vnode message queues in the Riak Search subsystem have received on this node in the last minute
-`riak_search_vnodeq_total` | Total number of unprocessed messages all vnode message queues in the Riak Search subsystem have received on this node since it was started
-`riak_search_vnodes_running` | Total number of vnodes currently running in the Riak Search subsystem
+`riak_search_vnodeq_max` | Maximum number of unprocessed messages all virtual node (vnode) message queues in the Riak search subsystem have received on this node in the last minute
+`riak_search_vnodeq_mean` | Mean number of unprocessed messages all vnode message queues in the Riak search subsystem have received on this node in the last minute
+`riak_search_vnodeq_median` | Median number of unprocessed messages all vnode message queues in the Riak search subsystem have received on this node in the last minute
+`riak_search_vnodeq_min` | Minimum number of unprocessed messages all vnode message queues in the Riak search subsystem have received on this node in the last minute
+`riak_search_vnodeq_total` | Total number of unprocessed messages all vnode message queues in the Riak search subsystem have received on this node since it was started
+`riak_search_vnodes_running` | Total number of vnodes currently running in the Riak search subsystem
Note that under ideal operation and with the exception of
`riak_search_vnodes_running` these statistics should contain low values
@@ -449,7 +449,7 @@ Check | Description
`ring_membership` | Cluster membership validity
`ring_preflists` | Check if the ring satisfies `n_val`
`ring_size` | Check if the ring size is valid
-`search` | Check whether Riak Search is enabled on all nodes
+`search` | Check whether Riak search is enabled on all nodes
The `--level` flag enables you to specify the log level and thus to
filter messages based on type. You can pass in any of the message types
diff --git a/content/riak/kv/2.0.7/using/cluster-operations/strong-consistency.md b/content/riak/kv/2.0.7/using/cluster-operations/strong-consistency.md
index c1dc8fe7af..1b0177f380 100644
--- a/content/riak/kv/2.0.7/using/cluster-operations/strong-consistency.md
+++ b/content/riak/kv/2.0.7/using/cluster-operations/strong-consistency.md
@@ -14,7 +14,7 @@ toc: true
Please Note:
-Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
## Monitoring Strong Consistency
diff --git a/content/riak/kv/2.0.7/using/reference/handoff.md b/content/riak/kv/2.0.7/using/reference/handoff.md
index 5ff2674149..8825825d86 100644
--- a/content/riak/kv/2.0.7/using/reference/handoff.md
+++ b/content/riak/kv/2.0.7/using/reference/handoff.md
@@ -121,7 +121,7 @@ handoff.use_background_manager = on
### Maximum Rejects
-If you're using Riak features such as [Riak Search](/riak/kv/2.0.7/developing/usage/search/),
+If you're using Riak features such as [Riak search](/riak/kv/2.0.7/developing/usage/search/),
those subsystems can block handoff of primary key/value data, i.e. data
that you interact with via normal reads and writes.
diff --git a/content/riak/kv/2.0.7/using/reference/search.md b/content/riak/kv/2.0.7/using/reference/search.md
index 574ab3b296..c342c8c23f 100644
--- a/content/riak/kv/2.0.7/using/reference/search.md
+++ b/content/riak/kv/2.0.7/using/reference/search.md
@@ -20,14 +20,14 @@ aliases:
> **Note on Search 2.0 vs. Legacy Search**
>
-> This document refers to the new Riak Search 2.0 with
+> This document refers to the new Riak search 2.0 with
[Solr](http://lucene.apache.org/solr/) integration (codenamed
-Yokozuna). For information about the deprecated Riak Search, visit [the old Using Riak Search docs](http://docs.basho.com/riak/1.4.10/dev/using/search/).
+Yokozuna). For information about the deprecated Riak search, visit [the old Using Riak search docs](http://docs.basho.com/riak/1.4.10/dev/using/search/).
-The project that implements Riak Search is codenamed Yokozuna. This is a
+The project that implements Riak search is codenamed Yokozuna. This is a
more detailed overview of the concepts and reasons behind the design of
Yokozuna, for those interested. If you're simply looking to use Riak
-Search, you should check out the [Using Search](/riak/kv/2.0.7/developing/usage/search) document.
+search, you should check out the [Using Search](/riak/kv/2.0.7/developing/usage/search) document.
@@ -36,30 +36,30 @@ Search, you should check out the [Using Search](/riak/kv/2.0.7/developing/usage/
In Erlang OTP, an "application" is a group of modules and Erlang
processes which together perform a specific task. The word application
is confusing because most people think of an application as an entire
-program such as Emacs or Photoshop. But Riak Search is just a sub-system
+program such as Emacs or Photoshop. But Riak search is just a sub-system
in Riak itself. Erlang applications are often stand-alone, but Riak
-Search is more like an appendage of Riak. It requires other subsystems
+search is more like an appendage of Riak. It requires other subsystems
like Riak Core and KV, but also extends their functionality by providing
search capabilities for KV data.
-The purpose of Riak Search is to bring more sophisticated and robust
+The purpose of Riak search is to bring more sophisticated and robust
query and search support to Riak. Many people consider Lucene and
programs built on top of it, such as Solr, as the standard for
open-source search. There are many successful applications built on
Lucene/Solr, and it sets the standard for the feature set that
developers and users expect. Meanwhile, Riak has a great story as a
-highly-available, distributed key/value store. Riak Search takes
+highly-available, distributed key/value store. Riak search takes
advantage of the fact that Riak already knows how to do the distributed
bits, combining its feature set with that of Solr, taking advantage of
the strengths of each.
-Riak Search is a mediator between Riak and Solr. There is nothing
+Riak search is a mediator between Riak and Solr. There is nothing
stopping a user from deploying these two programs separately, but this
would leave the user responsible for the glue between them. That glue
can be tricky to write. It requires dealing with monitoring, querying,
indexing, and dissemination of information.
-Unlike Solr by itself, Riak Search knows how to do all of the following:
+Unlike Solr by itself, Riak search knows how to do all of the following:
* Listen for changes in key/value (KV) data and to make the appropriate
changes to indexes that live in Solr. It also knows how to take a user
@@ -76,13 +76,13 @@ system (OS) process running a JVM which hosts Solr on the Jetty
application server. This OS process is a child of the Erlang OS process
running Riak.
-Riak Search has a `gen_server` process which monitors the JVM OS
+Riak search has a `gen_server` process which monitors the JVM OS
process. The code for this server is in `yz_solr_proc`. When the JVM
process crashes, this server crashes, causing its supervisor to restart
it.
If there is more than 1 restart in 45 seconds, the entire Riak node will
-be shut down. If Riak Search is enabled and Solr cannot function for
+be shut down. If Riak search is enabled and Solr cannot function for
some reason, the Riak node needs to go down so that the user will notice
and take corrective action.
@@ -92,7 +92,7 @@ This double monitoring along with the crash semantics means that neither
process may exist without the other. They are either both up or both
down.
-All other communication between Riak Search and Solr is performed via
+All other communication between Riak search and Solr is performed via
HTTP, including querying, indexing, and administration commands. The
ibrowse Erlang HTTP client is used to manage these communications as
both it and the Jetty container hosting Solr pool HTTP connections,
@@ -106,11 +106,11 @@ contains index entries for objects. Each such index maintains its own
set of files on disk---a critical difference from Riak KV, in which a
bucket is a purely logical entity and not physically disjoint at all. A
Solr index requires significantly less disk space than the corresponding
-legacy Riak Search index, depending on the Solr schema used.
+legacy Riak search index, depending on the Solr schema used.
Indexes may be associated with zero or more buckets. At creation time,
however, each index has no associated buckets---unlike the legacy Riak
-Search, indexes in the new Riak Search do not implicitly create bucket
+search, indexes in the new Riak search do not implicitly create bucket
associations, meaning that this must be done as a separate configuration
step.
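That explicit association step is a bucket-properties update setting `search_index`. A sketch under assumed names (bucket `mybucket`, index `my_index`; the curl route in the comment is Riak's standard bucket-properties endpoint):

```shell
# Build the props payload for associating a bucket with a search index.
# In practice it would be sent with:
#   curl -XPUT http://localhost:8098/buckets/mybucket/props \
#        -H 'Content-Type: application/json' -d "$props"
props='{"props":{"search_index":"my_index"}}'
echo "$props" | grep -q '"search_index"' && echo "association payload ready"
# → association payload ready
```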
@@ -142,7 +142,7 @@ flat collection of field-value pairs. "Flat" here means that a field's
value cannot be a nested structure of field-value pairs; the values are
treated as-is (non-composite is another way to say it).
-Because of this mismatch between KV and Solr, Riak Search must act as a
+Because of this mismatch between KV and Solr, Riak search must act as a
mediator between the two, meaning it must have a way to inspect a KV
object and create a structure which Solr can ingest for indexing. In
Solr this structure is called a **document**. This task of creating a
@@ -405,7 +405,7 @@ one with a smaller window.
## Statistics
-The Riak Search batching subsystem provides statistics on run-time characteristics of search system components. These statistics are accessible via the standard Riak KV stats interfaces and can be monitored through standard enterprise management tools.
+The Riak search batching subsystem provides statistics on run-time characteristics of search system components. These statistics are accessible via the standard Riak KV stats interfaces and can be monitored through standard enterprise management tools.
* `search_index_throughput_(count|one)` - The total count of objects that have been indexed, per Riak node, and the count of objects that have been indexed within the metric measurement window.
diff --git a/content/riak/kv/2.0.7/using/reference/secondary-indexes.md b/content/riak/kv/2.0.7/using/reference/secondary-indexes.md
index 550c2f1575..a0f5f51c64 100644
--- a/content/riak/kv/2.0.7/using/reference/secondary-indexes.md
+++ b/content/riak/kv/2.0.7/using/reference/secondary-indexes.md
@@ -18,11 +18,11 @@ aliases:
[usage bucket types]: /riak/kv/2.0.7/developing/usage/bucket-types
[use ref strong consistency]: /riak/kv/2.0.7/using/reference/strong-consistency
-> **Note: Riak Search preferred for querying**
+> **Note: Riak search preferred for querying**
>
> If you're interested in non-primary-key-based querying in Riak, i.e. if
you're looking to go beyond straightforward K/V operations, we now
-recommend [Riak Search](/riak/kv/2.0.7/developing/usage/search/) rather than secondary indexes for a variety of reasons. Riak Search has a far more capacious querying API and can be used with all of Riak's storage backends.
+recommend [Riak search](/riak/kv/2.0.7/developing/usage/search/) rather than secondary indexes for a variety of reasons. Riak search has a far more capacious querying API and can be used with all of Riak's storage backends.
This document provides implementation and other details for Riak's
[secondary indexes](/riak/kv/2.0.7/developing/usage/secondary-indexes/) \(2i) feature.
diff --git a/content/riak/kv/2.0.7/using/reference/statistics-monitoring.md b/content/riak/kv/2.0.7/using/reference/statistics-monitoring.md
index 45e4b14bbc..76e19273a2 100644
--- a/content/riak/kv/2.0.7/using/reference/statistics-monitoring.md
+++ b/content/riak/kv/2.0.7/using/reference/statistics-monitoring.md
@@ -134,7 +134,7 @@ Metric | Also | Notes
:------|:-----|:------------------
```node_get_fsm_siblings_mean``` | ```_median```, ```_95```, ```_99```, ```_100``` | Number of siblings encountered during all GET operations by this node within the last minute. Watch for abnormally high sibling counts, especially max ones.
```node_get_fsm_objsize_mean``` | ```_median```, ```_95```, ```_99```, ```_100``` | Object size encountered by this node within the last minute. Abnormally large objects (especially paired with high sibling counts) can indicate sibling explosion.
-```riak_search_vnodeq_mean``` | ```_median```, ```_95```, ```_99```, ```_100``` | Number of unprocessed messages in the vnode message queues of the Riak Search subsystem on this node in the last minute. The queues give you an idea of how backed up Solr is getting.
+```riak_search_vnodeq_mean``` | ```_median```, ```_95```, ```_99```, ```_100``` | Number of unprocessed messages in the vnode message queues of the Riak search subsystem on this node in the last minute. The queues give you an idea of how backed up Solr is getting.
```search_index_fail_one``` | | Number of "Failed to index document" errors Search encountered for the last minute
```pbc_active``` | | Number of currently active protocol buffer connections
```pbc_connects``` | | Number of new protocol buffer connections established during the last minute
diff --git a/content/riak/kv/2.0.7/using/repair-recovery/errors.md b/content/riak/kv/2.0.7/using/repair-recovery/errors.md
index 83fccda84c..5241763bcb 100644
--- a/content/riak/kv/2.0.7/using/repair-recovery/errors.md
+++ b/content/riak/kv/2.0.7/using/repair-recovery/errors.md
@@ -328,7 +328,7 @@ exit with reason bad return value: {error,eaddrinuse} in context start_error | A
exited with reason: eaddrnotavail in gen_server:init_it/6 line 320 | An error like this example can result when Riak cannot bind to the addresses specified in the configuration. In this case, you should verify HTTP and Protocol Buffers addresses in `app.config` and ensure that the ports being used are not in the privileged (1-1024) range as the `riak` user will not have access to such ports.
gen_server riak_core_capability terminated with reason: no function clause matching orddict:fetch('riak@192.168.2.2', []) line 72 | Error output like this example can indicate that a previously running Riak node with an original `-name` value in `vm.args` has been modified by simply changing the value in `vm.args` and not properly through `riak-admin cluster replace`.
** Configuration error: [FRAMEWORK-MIB]: missing context.conf file => generating a default file | This error is commonly encountered when starting Riak Enterprise without prior [SNMP](/riak/kv/2.0.7/using/reference/snmp) configuration.
-RPC to 'node@example.com' failed: {'EXIT', {badarg, [{ets,lookup, [schema_table,<<"search-example">>], []} {riak_search_config,get_schema,1, [{file,"src/riak_search_config.erl"}, {line,69}]} ...| This error can be caused when attempting to use Riak Search without first enabling it in each node's `app.config`. See the [configuration files][config reference] documentation for more information on enabling Riak Search.
+RPC to 'node@example.com' failed: {'EXIT', {badarg, [{ets,lookup, [schema_table,<<"search-example">>], []} {riak_search_config,get_schema,1, [{file,"src/riak_search_config.erl"}, {line,69}]} ...| This error can be caused when attempting to use Riak search without first enabling it in each node's `app.config`. See the [configuration files][config reference] documentation for more information on enabling Riak search.
### More
diff --git a/content/riak/kv/2.0.7/using/repair-recovery/repairs.md b/content/riak/kv/2.0.7/using/repair-recovery/repairs.md
index 40aa8e6cad..9ef8c5cd6d 100644
--- a/content/riak/kv/2.0.7/using/repair-recovery/repairs.md
+++ b/content/riak/kv/2.0.7/using/repair-recovery/repairs.md
@@ -57,7 +57,7 @@ riak-admin repair-2i kill
## Repairing Search Indexes
-Riak Search indexes currently have no form of anti-entropy (such as read-repair). Furthermore, for performance and load balancing reasons, Search reads from one random node. This means that when a replica loss has occurred, inconsistent results may be returned.
+Riak search indexes currently have no form of anti-entropy (such as read-repair). Furthermore, for performance and load balancing reasons, Search reads from one random node. This means that when a replica loss has occurred, inconsistent results may be returned.
### Running a Repair
diff --git a/content/riak/kv/2.0.7/using/repair-recovery/secondary-indexes.md b/content/riak/kv/2.0.7/using/repair-recovery/secondary-indexes.md
index 226896c5f0..c96d618bb3 100644
--- a/content/riak/kv/2.0.7/using/repair-recovery/secondary-indexes.md
+++ b/content/riak/kv/2.0.7/using/repair-recovery/secondary-indexes.md
@@ -51,7 +51,7 @@ riak-admin repair-2i kill
## Repairing Search Indexes
-Riak Search indexes currently have no form of anti-entropy (such as read-repair). Furthermore, for performance and load balancing reasons, Search reads from one random node. This means that when a replica loss has occurred, inconsistent results may be returned.
+Riak search indexes currently have no form of anti-entropy (such as read-repair). Furthermore, for performance and load balancing reasons, Search reads from one random node. This means that when a replica loss has occurred, inconsistent results may be returned.
### Running a Repair
diff --git a/content/riak/kv/2.0.7/using/security/basics.md b/content/riak/kv/2.0.7/using/security/basics.md
index 1082aa246a..f06c7655f9 100644
--- a/content/riak/kv/2.0.7/using/security/basics.md
+++ b/content/riak/kv/2.0.7/using/security/basics.md
@@ -44,7 +44,7 @@ when turning on Riak security. Missing one of these steps will almost
certainly break your application, so make sure that you have done each
of the following **before** enabling security:
-1. Make certain that the original Riak Search (version 1) and link
+1. Make certain that the original Riak search (version 1) and link
walking are not required. Enabling security will break this
functionality. If you wish to use security and Search together, you
will need to use the [new Search feature](/riak/kv/2.0.7/developing/usage/search/).
@@ -487,11 +487,11 @@ Permission | Operation
### Search Query Permission (Riak Search version 1)
Security is incompatible with the original (and now deprecated) Riak
-Search. Riak Search version 1 will stop working if security is enabled.
+search. Riak search version 1 will stop working if security is enabled.
### Search Query Permissions (Riak Search version 2, aka Yokozuna)
-If you are using the new Riak Search, i.e. the Solr-compatible search
+If you are using the new Riak search, i.e. the Solr-compatible search
capabilities included with Riak versions 2.0 and greater, the following
search-related permissions can be granted/revoked:
@@ -508,8 +508,8 @@ disabled, you will get the following error:
>
> `{error,{unknown_permission,"search.query"}}`
>
-> More information on Riak Search and how to enable it can be found in the
-[Riak Search Settings](/riak/kv/2.0.7/configuring/search/) document.
+> More information on Riak search and how to enable it can be found in the
+[Riak search Settings](/riak/kv/2.0.7/configuring/search/) document.
#### Usage Examples
diff --git a/content/riak/kv/2.1.1/configuring/reference.md b/content/riak/kv/2.1.1/configuring/reference.md
index 17de2de70a..dd2bf2697b 100644
--- a/content/riak/kv/2.1.1/configuring/reference.md
+++ b/content/riak/kv/2.1.1/configuring/reference.md
@@ -273,7 +273,7 @@ stored
search.root_dir |
-The root directory for Riak Search, under which index data and
+ | The root directory for Riak search, under which index data and
configuration is stored. |
./data/yz |
@@ -1386,7 +1386,7 @@ Configurable parameters for intra-cluster, i.e. inter-node, [handoff][cluster op
handoff.max_rejects |
The maximum number of times that a secondary system within Riak,
-such as Riak Search, can block handoff
+such as Riak search, can block handoff
of primary key/value data. The approximate maximum duration that a vnode
can be blocked can be determined by multiplying this setting by
vnode_management_timer . If you want to prevent handoff from
@@ -1630,7 +1630,7 @@ if the JMX server crashes. |
> **Please Note:**
>
-> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
Riak's strong consistency feature has a variety of tunable parameters
that allow you to enable and disable strong consistency, modify the
@@ -2053,7 +2053,7 @@ only in Riak Enterprise 2.0 and later.
#### Upgrading Riak Search with `advanced.config`
-If you are upgrading to Riak 2.x and wish to upgrade to the new [Riak Search][use ref search]\(codename Yokozuna), you will need to enable
+If you are upgrading to Riak 2.x and wish to upgrade to the new [Riak search][use ref search] \(codename Yokozuna), you will need to enable
legacy Search while the upgrade is underway. You can add the following
snippet to your `advanced.config` configuration to do so:
diff --git a/content/riak/kv/2.1.1/configuring/search.md b/content/riak/kv/2.1.1/configuring/search.md
index bac0e68dc2..130a684bf7 100644
--- a/content/riak/kv/2.1.1/configuring/search.md
+++ b/content/riak/kv/2.1.1/configuring/search.md
@@ -26,9 +26,9 @@ aliases:
> **Note on Search 2.0 vs. Legacy Search**
>
-> This document refers to the new Riak Search 2.0 with
+> This document refers to the new Riak search 2.0 with
[Solr](http://lucene.apache.org/solr/) integration (codenamed
-Yokozuna). For information about the deprecated Riak Search, visit [the old Riak Search Settings](http://docs.basho.com/riak/1.4.8/ops/advanced/configs/search/).
+Yokozuna). For information about the deprecated Riak search, visit [the old Riak search Settings](http://docs.basho.com/riak/1.4.8/ops/advanced/configs/search/).
This document covers Riak's Search subsystem from an
operational perspective. If you are looking for more developer-focused
@@ -41,7 +41,7 @@ docs, we recommend the following:
## Enabling Riak Search
-Although Riak Search is integrated into Riak and requires no special
+Although Riak search is integrated into Riak and requires no special
installation, it is not enabled by default. You must enable it in every
node's [configuration files][config reference] as follows:
@@ -68,7 +68,7 @@ optional. A list of these parameters can also be found in our
Field | Default | Valid values | Description
:-----|:--------|:-------------|:-----------
`search` | `off` | `on` or `off` | Enable or disable Search
-`search.anti_entropy.data_dir` | `./data/yz_anti_entropy` | Directory | The directory in which Riak Search stores files related to [active anti-entropy][glossary aae]
+`search.anti_entropy.data_dir` | `./data/yz_anti_entropy` | Directory | The directory in which Riak search stores files related to [active anti-entropy][glossary aae]
`search.root_dir` | `./data/yz` | Directory | The root directory in which index data and configuration is stored
`search.solr.start_timeout` | `30s` | Integer with time units (eg. 2m) | How long Riak will wait for Solr to start (attempts twice before shutdown). Values lower than 1s will be rounded up to 1s.
`search.solr.port` | `8093` | Integer | The port number to which Solr binds (note: binds on every interface)
@@ -81,7 +81,7 @@ cause Solr to require more time to start.
## Solr JVM and Ports
-Riak Search runs one Solr process per node to manage its indexing and
+Riak search runs one Solr process per node to manage its indexing and
search functionality. While the underlying project, Yokozuna, manages
index distribution, node coverage for queries, active anti-entropy
(AAE), and JVM process management, you should provide plenty of RAM and diskspace for running both Riak and the JVM running Solr. We recommend a minimum of 6GB of RAM per node.
diff --git a/content/riak/kv/2.1.1/configuring/strong-consistency.md b/content/riak/kv/2.1.1/configuring/strong-consistency.md
index 2085891937..aa3b0b1855 100644
--- a/content/riak/kv/2.1.1/configuring/strong-consistency.md
+++ b/content/riak/kv/2.1.1/configuring/strong-consistency.md
@@ -38,7 +38,7 @@ toc: true
> **Please Note:**
>
-> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
This document provides information on configuring and monitoring a Riak
cluster's optional strong consistency subsystem. Documentation for
diff --git a/content/riak/kv/2.1.1/developing/api/http/delete-search-index.md b/content/riak/kv/2.1.1/developing/api/http/delete-search-index.md
index c8658a0f00..7897434c79 100644
--- a/content/riak/kv/2.1.1/developing/api/http/delete-search-index.md
+++ b/content/riak/kv/2.1.1/developing/api/http/delete-search-index.md
@@ -15,7 +15,7 @@ aliases:
- /riak/kv/2.1.1/dev/references/http/delete-search-index
---
-Deletes a Riak Search index.
+Deletes a Riak search index.
## Request
diff --git a/content/riak/kv/2.1.1/developing/api/http/fetch-search-index.md b/content/riak/kv/2.1.1/developing/api/http/fetch-search-index.md
index e351c584ca..3bc3d3bc9e 100644
--- a/content/riak/kv/2.1.1/developing/api/http/fetch-search-index.md
+++ b/content/riak/kv/2.1.1/developing/api/http/fetch-search-index.md
@@ -15,7 +15,7 @@ aliases:
- /riak/kv/2.1.1/dev/references/http/fetch-search-index
---
-Retrieves information about a Riak Search [index](/riak/kv/2.1.1/developing/usage/search/#simple-setup).
+Retrieves information about a Riak search [index](/riak/kv/2.1.1/developing/usage/search/#simple-setup).
## Request
diff --git a/content/riak/kv/2.1.1/developing/api/http/fetch-search-schema.md b/content/riak/kv/2.1.1/developing/api/http/fetch-search-schema.md
index 393aa87222..0f4aaa05ba 100644
--- a/content/riak/kv/2.1.1/developing/api/http/fetch-search-schema.md
+++ b/content/riak/kv/2.1.1/developing/api/http/fetch-search-schema.md
@@ -35,4 +35,4 @@ GET /search/schema/
## Response
If the schema is found, Riak will return the contents of the schema as
-XML (all Riak Search schemas are XML).
+XML (all Riak search schemas are XML).
diff --git a/content/riak/kv/2.1.1/developing/api/http/search-index-info.md b/content/riak/kv/2.1.1/developing/api/http/search-index-info.md
index bbee3324a4..7e008a5b7b 100644
--- a/content/riak/kv/2.1.1/developing/api/http/search-index-info.md
+++ b/content/riak/kv/2.1.1/developing/api/http/search-index-info.md
@@ -47,6 +47,6 @@ Below is the example output if there is one Search index, called
#### Typical Error Codes
-* `404 Object Not Found` --- Typically returned if Riak Search is not
+* `404 Object Not Found` --- Typically returned if Riak search is not
currently enabled on the node
* `503 Service Unavailable` --- The request timed out internally
diff --git a/content/riak/kv/2.1.1/developing/api/http/store-search-index.md b/content/riak/kv/2.1.1/developing/api/http/store-search-index.md
index f60dbdef63..fd8a0ce7b6 100644
--- a/content/riak/kv/2.1.1/developing/api/http/store-search-index.md
+++ b/content/riak/kv/2.1.1/developing/api/http/store-search-index.md
@@ -15,7 +15,7 @@ aliases:
- /riak/kv/2.1.1/dev/references/http/store-search-index
---
-Creates a new Riak Search [index](/riak/kv/2.1.1/developing/usage/search/#simple-setup).
+Creates a new Riak search [index](/riak/kv/2.1.1/developing/usage/search/#simple-setup).
## Request
diff --git a/content/riak/kv/2.1.1/developing/api/http/store-search-schema.md b/content/riak/kv/2.1.1/developing/api/http/store-search-schema.md
index 7584722b44..41e9ef577e 100644
--- a/content/riak/kv/2.1.1/developing/api/http/store-search-schema.md
+++ b/content/riak/kv/2.1.1/developing/api/http/store-search-schema.md
@@ -44,7 +44,7 @@ curl -XPUT http://localhost:8098/search/schema/my_custom_schema \
* `400 Bad Request` --- The schema cannot be created because there is
something wrong with the schema itself, e.g. an XML formatting error
- that makes Riak Search unable to parse the schema
+ that makes Riak search unable to parse the schema
* `409 Conflict` --- The schema cannot be created because there is
already a schema with that name
* `503 Service Unavailable` --- The request timed out internally
diff --git a/content/riak/kv/2.1.1/developing/api/protocol-buffers/yz-index-get.md b/content/riak/kv/2.1.1/developing/api/protocol-buffers/yz-index-get.md
index 978f010db2..8c4341055e 100644
--- a/content/riak/kv/2.1.1/developing/api/protocol-buffers/yz-index-get.md
+++ b/content/riak/kv/2.1.1/developing/api/protocol-buffers/yz-index-get.md
@@ -15,7 +15,7 @@ aliases:
- /riak/kv/2.1.1/dev/references/protocol-buffers/yz-index-get
---
-Retrieve a search index from Riak Search.
+Retrieve a search index from Riak search.
## Request
diff --git a/content/riak/kv/2.1.1/developing/api/protocol-buffers/yz-schema-get.md b/content/riak/kv/2.1.1/developing/api/protocol-buffers/yz-schema-get.md
index fb2942c2d7..0cf8d4680a 100644
--- a/content/riak/kv/2.1.1/developing/api/protocol-buffers/yz-schema-get.md
+++ b/content/riak/kv/2.1.1/developing/api/protocol-buffers/yz-schema-get.md
@@ -15,7 +15,7 @@ aliases:
- /riak/kv/2.1.1/dev/references/protocol-buffers/yz-schema-get
---
-Fetch a [search schema](/riak/kv/2.1.1/developing/usage/search-schemas) from Riak Search.
+Fetch a [search schema](/riak/kv/2.1.1/developing/usage/search-schemas) from Riak search.
## Request
diff --git a/content/riak/kv/2.1.1/developing/app-guide.md b/content/riak/kv/2.1.1/developing/app-guide.md
index 28f0e81a32..f1b9f7ae1a 100644
--- a/content/riak/kv/2.1.1/developing/app-guide.md
+++ b/content/riak/kv/2.1.1/developing/app-guide.md
@@ -148,22 +148,22 @@ well as relevant links to Basho documentation.
## Search
-Riak Search provides you with [Apache
+Riak search provides you with [Apache
Solr](http://lucene.apache.org/solr/)-powered full-text indexing and
querying on top of the scalability, fault tolerance, and operational
-simplicity of Riak. Our motto for Riak Search: **Write it like Riak.
+simplicity of Riak. Our motto for Riak search: **Write it like Riak.
Query it like Solr**. That is, you can store objects in Riak [like normal][usage create objects] and run full-text queries on those objects later on
using the Solr API.
-* [Using Search][usage search] --- Getting started with Riak Search
+* [Using Search][usage search] --- Getting started with Riak search
* [Search Details][use ref search] --- A detailed overview of the concepts and design
- consideration behind Riak Search
+ considerations behind Riak search
* [Search Schema][usage search schema] --- How to create custom schemas for extracting data
- from Riak Search
+ from Riak search
### When to Use Search
-* **When you need a rich querying API** --- Riak Search gives you access
+* **When you need a rich querying API** --- Riak search gives you access
to the entirety of [Solr](http://lucene.apache.org/solr/)'s extremely
broad API, which enables you to query on the basis of wildcards,
strings, booleans, geolocation, ranges, language-specific fulltext,
@@ -215,7 +215,7 @@ own.
> **Note**:
>
-> Riak Data Types can be used in conjunction with Riak Search,
+> Riak Data Types can be used in conjunction with Riak search,
meaning that the data stored in counters, sets, and maps can be indexed
and searched just like any other data in Riak. Documentation on Data
Types and Search is coming soon.
@@ -283,13 +283,13 @@ or you can write and run your own MapReduce jobs in
### When Not to Use MapReduce
* **When another Riak feature will do** --- Before even considering
- using MapReduce, you should thoroughly investigate [Riak Search][usage search] or [secondary indexes][usage 2i] as possible
+ using MapReduce, you should thoroughly investigate [Riak search][usage search] or [secondary indexes][usage 2i] as possible
solutions to your needs.
In general, you should not think of MapReduce as, for example, Hadoop
within Riak. While it can be useful for certain types of
non-primary-key-based queries, it is neither a "Big Data" processing
-tool nor an indexing mechanism nor a replacement for [Riak Search][usage search]. If you do need a tool like Hadoop or Apache Spark, you should
+tool nor an indexing mechanism nor a replacement for [Riak search][usage search]. If you do need a tool like Hadoop or Apache Spark, you should
consider using Riak in conjunction with a more suitable data processing
tool.
@@ -300,7 +300,7 @@ following problem: how do I know which keys I should look for? Secondary
indexes (2i) provide a solution to this problem, enabling you to tag
objects with either binary or integer metadata and then query Riak for
all of the keys that share specific tags. 2i is especially useful if
-you're storing binary data that is opaque to features like [Riak Search][usage search].
+you're storing binary data that is opaque to features like [Riak search][usage search].
* [Using Secondary Indexes][usage 2i] --- A general guide to using 2i, along
with code samples and information on 2i features like pagination,
@@ -328,7 +328,7 @@ you're storing binary data that is opaque to features like [Riak Search][usage s
One thing to always bear in mind is that Riak enables you to mix and
match a wide variety of approaches in a single cluster. You can use
basic CRUD operations for some of your data, index some of your data to
-be queried by Riak Search, use Riak Data Types for another subset, etc.
+be queried by Riak search, use Riak Data Types for another subset, etc.
You are always free to use a wide array of Riak features---or you can
use none at all and stick to key/value operations.
diff --git a/content/riak/kv/2.1.1/developing/app-guide/advanced-mapreduce.md b/content/riak/kv/2.1.1/developing/app-guide/advanced-mapreduce.md
index b78cfa8bac..ae1eb5d669 100644
--- a/content/riak/kv/2.1.1/developing/app-guide/advanced-mapreduce.md
+++ b/content/riak/kv/2.1.1/developing/app-guide/advanced-mapreduce.md
@@ -74,7 +74,7 @@ MapReduce should generally be treated as a fallback rather than a
standard part of an application. There are often ways to model data
such that dynamic queries become single key retrievals, which are
dramatically faster and more reliable in Riak, and tools such as Riak
-Search and 2i are simpler to use and may place less strain on a
+search and 2i are simpler to use and may place less strain on a
cluster.
### R=1
diff --git a/content/riak/kv/2.1.1/developing/app-guide/strong-consistency.md b/content/riak/kv/2.1.1/developing/app-guide/strong-consistency.md
index 2c59f1cf75..9b49392733 100644
--- a/content/riak/kv/2.1.1/developing/app-guide/strong-consistency.md
+++ b/content/riak/kv/2.1.1/developing/app-guide/strong-consistency.md
@@ -37,7 +37,7 @@ aliases:
> **Please Note:**
>
-> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
In versions 2.0 and later, Riak allows you to create buckets that
provide [strong consistency][use ref strong consistency] guarantees for the data stored within
diff --git a/content/riak/kv/2.1.1/developing/data-modeling.md b/content/riak/kv/2.1.1/developing/data-modeling.md
index 5283cee0fc..57f00dee6d 100644
--- a/content/riak/kv/2.1.1/developing/data-modeling.md
+++ b/content/riak/kv/2.1.1/developing/data-modeling.md
@@ -140,7 +140,7 @@ For storing log data from different systems, you could use unique
buckets for each system (e.g. `system1_log_data`, `system2_log_data`,
etc.) and write associated logs to the corresponding buckets. To
analyze that data, you could use Riak's MapReduce system for aggregation
-tasks, such as summing the counts of records for a date or Riak Search
+tasks, such as summing the counts of records for a date or Riak search
for a more robust, text-based queries.
### Log Data Complex Case
@@ -224,7 +224,7 @@ For simple retrieval of a specific account, a user ID (plus perhaps a
secondary index on a username or email) is enough. If you foresee the
need to make queries on additional user attributes (e.g. creation time,
user type, or region), plan ahead and either set up additional secondary
-indexes or consider using [Riak Search](/riak/kv/2.1.1/developing/usage/search/) to index the JSON
+indexes or consider using [Riak search](/riak/kv/2.1.1/developing/usage/search/) to index the JSON
contents of the user account.
### User Accounts Community Examples
@@ -310,7 +310,7 @@ In Riak, you can store content of any kind, from HTML files to plain
text to JSON or XML or another document type entirely. Keep in mind that
data in Riak is opaque, with the exception of [Riak Data Types](/riak/kv/2.1.1/developing/data-types),
and so Riak won't "know" about the object unless it is indexed
-[using Riak Search](/riak/kv/2.1.1/developing/usage/search/) or [using secondary indexes](/riak/kv/2.1.1/developing/usage/secondary-indexes/).
+[using Riak search](/riak/kv/2.1.1/developing/usage/search/) or [using secondary indexes](/riak/kv/2.1.1/developing/usage/secondary-indexes/).
### Articles et al Complex Case
@@ -329,7 +329,7 @@ with comments would require your application to call from the posts
and comments buckets to assemble the view.
Other possible cases may involve performing operations on content beyond
-key/value pairs. [Riak Search](/riak/kv/2.1.1/developing/usage/search/) is recommended for use cases
+key/value pairs. [Riak search](/riak/kv/2.1.1/developing/usage/search/) is recommended for use cases
involving full-text search. For lighter-weight querying,
[using secondary indexes](/riak/kv/2.1.1/developing/usage/secondary-indexes/) \(2i) enables you to add metadata to objects to
either query for exact matches or to perform range queries. 2i also
diff --git a/content/riak/kv/2.1.1/developing/data-types.md b/content/riak/kv/2.1.1/developing/data-types.md
index a44864a8f1..e09303f47d 100644
--- a/content/riak/kv/2.1.1/developing/data-types.md
+++ b/content/riak/kv/2.1.1/developing/data-types.md
@@ -259,7 +259,7 @@ All the examples use the bucket type names from above (`counters`, `sets`, and `
Riak data types can be searched like any other object, but with the
added benefit that your data type is indexed as a different type by Solr,
-the search platform behind Riak Search.
+the search platform behind Riak search.
In our Search documentation we offer a [full tutorial](../usage/searching-data-types) as well as a variety of [examples](../usage/search/#data-types-and-search-examples), including code
samples from each of our official client libraries.
diff --git a/content/riak/kv/2.1.1/developing/usage.md b/content/riak/kv/2.1.1/developing/usage.md
index 853efe9c3b..1fd0b798ff 100644
--- a/content/riak/kv/2.1.1/developing/usage.md
+++ b/content/riak/kv/2.1.1/developing/usage.md
@@ -107,7 +107,7 @@ Tutorial on using Riak KV as a document store.
#### [Custom Extractors](./custom-extractors)
-Details on creating and registering custom extractors with Riak Search.
+Details on creating and registering custom extractors with Riak search.
[Learn More >>](./custom-extractors)
diff --git a/content/riak/kv/2.1.1/developing/usage/custom-extractors.md b/content/riak/kv/2.1.1/developing/usage/custom-extractors.md
index 167d76bcfd..90cabdb318 100644
--- a/content/riak/kv/2.1.1/developing/usage/custom-extractors.md
+++ b/content/riak/kv/2.1.1/developing/usage/custom-extractors.md
@@ -15,8 +15,8 @@ aliases:
- /riak/kv/2.1.1/dev/search/custom-extractors
---
-Solr, and by extension Riak Search, has default extractors for a wide
-variety of data types, including JSON, XML, and plaintext. Riak Search
+Solr, and by extension Riak search, has default extractors for a wide
+variety of data types, including JSON, XML, and plaintext. Riak search
ships with the following extractors:
Content Type | Erlang Module
@@ -30,7 +30,7 @@ No specified type | `yz_noop_extractor`
There are also built-in extractors for [Riak Data Types](/riak/kv/2.1.1/developing/usage/searching-data-types).
If you're working with a data format that does not have a default Solr
-extractor, you can create your own and register it with Riak Search.
+extractor, you can create your own and register it with Riak search.
We'll show you how to do so by way of example.
## The Extractor Interface
@@ -195,7 +195,7 @@ extractor has been successfully registered.
## Verifying Our Custom Extractor
-Now that Riak Search knows how to decode and extract HTTP header packet
+Now that Riak search knows how to decode and extract HTTP header packet
data, let's store some in Riak and then query it. We'll put the example
packet data from above in a `google_packet.bin` file. Then, we'll `PUT`
that binary to Riak's `/search/extract` endpoint:
diff --git a/content/riak/kv/2.1.1/developing/usage/document-store.md b/content/riak/kv/2.1.1/developing/usage/document-store.md
index f0139e8890..0fb7a73f41 100644
--- a/content/riak/kv/2.1.1/developing/usage/document-store.md
+++ b/content/riak/kv/2.1.1/developing/usage/document-store.md
@@ -16,18 +16,18 @@ aliases:
---
Although Riak wasn't explicitly created as a document store, two
-features recently added to Riak---[Riak Search](/riak/kv/2.1.1/developing/usage/search/) and [Riak Data Types](/riak/kv/2.1.1/developing/data-types/)---make it possible to use Riak as a
+features recently added to Riak---[Riak search](/riak/kv/2.1.1/developing/usage/search/) and [Riak Data Types](/riak/kv/2.1.1/developing/data-types/)---make it possible to use Riak as a
highly scalable document store with rich querying capabilities. In this
tutorial, we'll build a basic implementation of a document store using
[Riak maps](/riak/kv/2.1.1/developing/data-types/maps).
## Basic Approach
-Riak Search enables you to implement a document store in Riak in a
+Riak search enables you to implement a document store in Riak in a
variety of ways. You could, for example, store and query JSON objects or
XML and then retrieve them later via Solr queries. In this tutorial,
however, we will store data in [Riak maps](/riak/kv/2.1.1/developing/data-types/maps),
-index that data using Riak Search, and then run Solr queries against
+index that data using Riak search, and then run Solr queries against
those stored objects.
You can think of these Search indexes as **collections**. Each indexed
@@ -65,7 +65,7 @@ Date posted | Register | Datetime
Whether the post is currently in draft form | Flag | Boolean
Before we start actually creating and storing blog posts, let's set up
-Riak Search with an appropriate index and schema.
+Riak search with an appropriate index and schema.
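Before wiring up the index, the field plan in the table above can be sketched as a plain mapping from post attributes to Riak map field types. This is an illustrative sketch only (the field names and `_register`/`_set`/`_flag` suffixes here mirror the naming convention Riak search uses when indexing Data Types; real clients manipulate maps through their Data Types APIs):

```python
# Each blog-post attribute maps to a Riak map field; the suffix marks
# the embedded Data Type (_register, _set, _flag). Values are examples.
blog_post = {
    "title_register": "Riak as a document store",
    "author_register": "Basho docs team",
    "content_register": "Although Riak wasn't explicitly created...",
    "keywords_set": {"riak", "search", "maps"},
    "date_posted_register": "2015-06-01T12:00:00Z",
    "published_flag": False,
}

# A flag field models the draft/published boolean from the table above.
draft_fields = [name for name, value in blog_post.items()
                if name.endswith("_flag") and value is False]
print(draft_fields)  # ['published_flag']
```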
## Creating a Schema and Index
@@ -209,7 +209,7 @@ curl -XPUT $RIAK_HOST/search/index/blog_posts \
Collections are not a concept that is native to Riak but we can easily
mimic collections by thinking of a bucket type as a collection. When we
-associate a bucket type with a Riak Search index, all of the objects
+associate a bucket type with a Riak search index, all of the objects
stored in any bucket of that bucket type will be queryable on the basis
of that one index. For this tutorial, we'll create a bucket type called
`cms` and think of that as a collection. We could also restrict our
diff --git a/content/riak/kv/2.1.1/developing/usage/search-schemas.md b/content/riak/kv/2.1.1/developing/usage/search-schemas.md
index 017ec41e81..48c9078b0d 100644
--- a/content/riak/kv/2.1.1/developing/usage/search-schemas.md
+++ b/content/riak/kv/2.1.1/developing/usage/search-schemas.md
@@ -19,21 +19,21 @@ aliases:
> **Note on Search 2.0 vs. Legacy Search**
>
-> This document refers to the new Riak Search 2.0 with
+> This document refers to the new Riak search 2.0 with
[Solr](http://lucene.apache.org/solr/) integration (codenamed
-Yokozuna). For information about the deprecated Riak Search, visit [the old Using Riak Search docs](http://docs.basho.com/riak/1.4.10/dev/using/search/).
+Yokozuna). For information about the deprecated Riak search, visit [the old Using Riak search docs](http://docs.basho.com/riak/1.4.10/dev/using/search/).
-Riak Search is built for ease of use, allowing you to write values into
-Riak and query for values using Solr. Riak Search does a lot of work
+Riak search is built for ease of use, allowing you to write values into
+Riak and query for values using Solr. Riak search does a lot of work
under the hood to convert your values---plain text, JSON, XML, [Riak Data Types](/riak/kv/2.1.1/developing/data-types/), and [more](/riak/kv/2.1.1/developing/usage/custom-extractors)---into something that can be indexed and searched later.
Nonetheless, you must still instruct Riak/Solr how to index a value. Are
you providing an array of strings? An integer? A date? Is your text in
-English or Russian? You can provide such instructions to Riak Search by
+English or Russian? You can provide such instructions to Riak search by
defining a Solr **schema**.
## The Default Schema
-Riak Search comes bundled with a default schema named `_yz_default`. The
+Riak search comes bundled with a default schema named `_yz_default`. The
default schema covers a wide range of possible field types. You can find
the default schema [on
GitHub](https://raw.github.com/basho/yokozuna/develop/priv/default_schema.xml).
@@ -48,7 +48,7 @@ amounts of disk space, so pay special attention to those indexes.
We'll show you how you can create custom schemas by way of example.
Let's say that you have already created a schema named `cartoons` in a
file named `cartoons.xml`. This would register the custom schema in Riak
-Search:
+search:
```java
import org.apache.commons.io.FileUtils;
@@ -124,11 +124,11 @@ curl -XPUT http://localhost:8098/search/schema/cartoons \
The first step in creating a custom schema is to define exactly what
fields you must index. Part of that step is understanding how Riak
-Search extractors function.
+search extractors function.
### Extractors
-In Riak Search, extractors are modules responsible for pulling out a
+In Riak search, extractors are modules responsible for pulling out a
list of fields and values from a Riak object. How this is achieved
depends on the object's content type, but the two common cases are JSON
and XML, which operate similarly. Our examples here will use JSON.
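As an illustration of what a JSON extractor does, here is a minimal Python sketch of the flattening step (not Yokozuna's actual Erlang implementation; the dot separator for nested field names is an assumption for illustration):

```python
import json

def extract_fields(value, prefix=""):
    """Recursively flatten a decoded JSON object into (field, value)
    pairs, roughly how an extractor prepares data for Solr indexing."""
    pairs = []
    if isinstance(value, dict):
        for key, inner in value.items():
            name = f"{prefix}.{key}" if prefix else key
            pairs.extend(extract_fields(inner, name))
    elif isinstance(value, list):
        for inner in value:  # arrays become repeated fields
            pairs.extend(extract_fields(inner, prefix))
    else:
        pairs.append((prefix, value))
    return pairs

doc = json.loads('{"name": {"first": "Liono"}, "tags": ["cat", "leader"]}')
print(extract_fields(doc))
# [('name.first', 'Liono'), ('tags', 'cat'), ('tags', 'leader')]
```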
@@ -176,21 +176,21 @@ Solr schemas can be very complex, containing many types and analyzers.
Refer to the [Solr 4.7 reference
guide](http://archive.apache.org/dist/lucene/solr/ref-guide/apache-solr-ref-guide-4.7.pdf)
for a complete list. You should be aware, however, that there are a few
-fields that are required by Riak Search in order to properly distribute
+fields that are required by Riak search in order to properly distribute
an object across a [cluster][concept clusters]. These fields are all prefixed
with `_yz`, which stands for
[Yokozuna](https://github.com/basho/yokozuna), the original code name
-for Riak Search.
+for Riak search.
Below is a bare minimum skeleton Solr Schema. It won't do much for you
-other than allow Riak Search to properly manage your stored objects.
+other than allow Riak search to properly manage your stored objects.
```xml
-
+
@@ -211,14 +211,14 @@ other than allow Riak Search to properly manage your stored objects.
```
-If you're missing any of the above fields, Riak Search will reject your
+If you're missing any of the above fields, Riak search will reject your
custom schema. The value for `<uniqueKey>` _must_ be `_yz_id`.
In the table below, you'll find a description of the various required
fields. You'll rarely need to use any fields other than `_yz_rt` (bucket
type), `_yz_rb` (bucket) and `_yz_rk` (Riak key). On occasion, `_yz_err`
can be helpful if you suspect that your extractors are failing.
-Malformed JSON or XML will cause Riak Search to index a key and set
+Malformed JSON or XML will cause Riak search to index a key and set
`_yz_err` to 1, allowing you to reindex with proper values later.
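As a quick sanity check before uploading a custom schema, a sketch like the following can flag missing required fields. This is a hypothetical helper, not a Riak tool, and it checks only the five `_yz` names discussed here rather than the complete required set from the skeleton schema:

```python
import xml.etree.ElementTree as ET

# A few of the _yz_* fields Riak search requires; the full set appears
# in the minimum skeleton schema above.
REQUIRED = {"_yz_id", "_yz_rt", "_yz_rb", "_yz_rk", "_yz_err"}

def missing_required_fields(schema_xml: str) -> set:
    """Return required _yz_* field names absent from a Solr schema."""
    root = ET.fromstring(schema_xml)
    declared = {f.get("name") for f in root.iter("field")}
    return REQUIRED - declared

schema = """
<schema name="cartoons" version="1.5">
  <fields>
    <field name="_yz_id" type="_yz_str" indexed="true" stored="true" required="true"/>
    <field name="_yz_rt" type="_yz_str" indexed="true" stored="true"/>
    <field name="_yz_rb" type="_yz_str" indexed="true" stored="true"/>
    <field name="_yz_rk" type="_yz_str" indexed="true" stored="true"/>
  </fields>
</schema>
"""
print(missing_required_fields(schema))  # {'_yz_err'}
```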
Field | Name | Description
@@ -263,7 +263,7 @@ field, you also must set `multiValued` to `true`.
-
+
diff --git a/content/riak/kv/2.1.1/developing/usage/search.md b/content/riak/kv/2.1.1/developing/usage/search.md
index 8b2cfac926..f0db0a16f4 100644
--- a/content/riak/kv/2.1.1/developing/usage/search.md
+++ b/content/riak/kv/2.1.1/developing/usage/search.md
@@ -19,9 +19,9 @@ aliases:
## Setup
-Riak Search 2.0 is an integration of Solr (for indexing and querying)
+Riak search 2.0 is an integration of Solr (for indexing and querying)
and Riak (for storage and distribution). There are a few points of
-interest that a user of Riak Search will have to keep in mind in order
+interest that a user of Riak search will have to keep in mind in order
to properly store and later query for values.
1. **Schemas** explain to Solr how to index fields
@@ -29,7 +29,7 @@ to properly store and later query for values.
3. **Bucket-index association** signals to Riak *when* to index values
(this also includes bucket type-index association)
-Riak Search must first be configured with a Solr schema so that Solr
+Riak search must first be configured with a Solr schema so that Solr
knows how to index value fields. If you don't define one, you're
provided with a default schema named `_yz_default`, which can be found
[on
@@ -38,7 +38,7 @@ GitHub](https://raw.githubusercontent.com/basho/yokozuna/develop/priv/default_sc
The examples in this document will presume the default. You can read
more about creating custom schemas in [Search Schema][usage search schema], which you'll likely want to use in a production environment.
-Next, you must create a named Solr index through Riak Search. This index
+Next, you must create a named Solr index through Riak search. This index
represents a collection of similar data that you connect with to perform
queries. When creating an index, you can optionally provide a schema. If
you do not, the default schema will be used. Here we'll `curl` create an
@@ -246,7 +246,7 @@ More information can be found in the [Solr
documentation](http://wiki.apache.org/solr/SolrPerformanceFactors).
With a Solr schema, index, and association in place (and possibly a
-security setup as well), we're ready to start using Riak Search. First,
+security setup as well), we're ready to start using Riak search. First,
populate the `cat` bucket with values, in this case information about
four cats: Liono, Cheetara, Snarf, and Panthro.
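The shape of those values can be sketched as follows. The `_s`/`_i`/`_b` field-name suffixes are chosen to match dynamic-field rules in the default `_yz_default` schema; the particular fields and values here are illustrative, not the exact payloads from the full example:

```python
import json

# One JSON document per cat; suffixes pick up the default schema's
# dynamic string (_s), integer (_i), and boolean (_b) field types.
cats = {
    "liono":    {"name_s": "Liono",    "age_i": 30, "leader_b": True},
    "cheetara": {"name_s": "Cheetara", "age_i": 28, "leader_b": False},
    "snarf":    {"name_s": "Snarf",    "age_i": 43, "leader_b": False},
    "panthro":  {"name_s": "Panthro",  "age_i": 36, "leader_b": False},
}

# Serialized bodies that would be PUT with Content-Type: application/json
payloads = {key: json.dumps(value) for key, value in cats.items()}
print(len(payloads))  # 4
```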
@@ -495,12 +495,12 @@ curl -XPUT $RIAK_HOST/types/animals/buckets/cats/keys/panthro \
```
If you've used Riak before, you may have noticed that this is no
-different from storing values without Riak Search. That's because we
-designed Riak Search with the following design goal in mind:
+different from storing values without Riak search. That's because we
+designed Riak search with the following design goal in mind:
#### Write it like Riak, query it like Solr
-But how does Riak Search know how to index values, given that you can
+But how does Riak search know how to index values, given that you can
store opaque values in Riak? For that, we employ extractors.
## Extractors
@@ -510,7 +510,7 @@ content type and convert it into a list of fields that can be indexed by
Solr. This is done transparently and automatically as part of the
indexing process. You can even create your own [custom extractors](/riak/kv/2.1.1/developing/usage/custom-extractors).
-Our current example uses the JSON extractor, but Riak Search also
+Our current example uses the JSON extractor, but Riak search also
extracts indexable fields from the following content types:
* JSON (`application/json`)
@@ -560,7 +560,7 @@ one of the default types. A full tutorial can be found in [Custom Search Extract
### Automatic Fields
-When a Riak object is indexed, Riak Search automatically inserts a few
+When a Riak object is indexed, Riak search automatically inserts a few
extra fields as well. These are necessary for a variety of technical
reasons, and for the most part you don't need to think about them.
However, there are a few fields which you may find useful:
@@ -1248,7 +1248,7 @@ curl "$RIAK_HOST/search/query/famous?wt=json&q=*:*&start=$START&rows=$ROWS_PER_P
### Pagination Warning
-Distributed pagination in Riak Search cannot be used reliably when
+Distributed pagination in Riak search cannot be used reliably when
sorting on fields that can have different values per replica of the same
object, namely `score` and `_yz_id`. In the case of sorting by these
fields, you may receive redundant objects. In the case of `score`, the
@@ -1272,14 +1272,14 @@ fix this shortcoming in a future version of Riak.
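The `start`/`rows` arithmetic behind Solr-style pagination, as used in the `curl` example above, can be sketched as:

```python
def page_params(page: int, rows_per_page: int) -> dict:
    """Solr-style pagination: page numbers are 1-based, while start is
    the zero-based offset of the first result on that page."""
    if page < 1:
        raise ValueError("page numbers start at 1")
    return {"start": rows_per_page * (page - 1), "rows": rows_per_page}

print(page_params(3, 10))  # {'start': 20, 'rows': 10}
```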
### MapReduce
-Riak Search allows for piping search results as inputs for
+Riak search allows for piping search results as inputs for
[MapReduce](/riak/kv/2.1.1/developing/usage/mapreduce/) jobs. This is a useful cross-section for
performing post-calculations of results or aggregations of ad-hoc
-queries. The Riak Search MapReduce integration works similarly to
+queries. The Riak search MapReduce integration works similarly to
regular MapReduce, with the notable exception that your input is not a
bucket, but rather index and query arguments to the `yokozuna` module
and `mapred_search` function (an Erlang `module:function` pair that adds
-the Riak Search hook to MapReduce).
+the Riak search hook to MapReduce).
```json
{
diff --git a/content/riak/kv/2.1.1/developing/usage/secondary-indexes.md b/content/riak/kv/2.1.1/developing/usage/secondary-indexes.md
index 8dd8731f41..a4c3ded32d 100644
--- a/content/riak/kv/2.1.1/developing/usage/secondary-indexes.md
+++ b/content/riak/kv/2.1.1/developing/usage/secondary-indexes.md
@@ -19,12 +19,12 @@ aliases:
[plan backend memory]: /riak/kv/2.1.1/setup/planning/backend/memory
[use ref strong consistency]: /riak/2.1.3/using/reference/strong-consistency
-> **Note: Riak Search preferred for querying**
+> **Note: Riak search preferred for querying**
>
> If you're interested in non-primary-key-based querying in Riak, i.e. if
you're looking to go beyond straightforward K/V operations, we now
-recommend [Riak Search](/riak/kv/2.1.1/developing/usage/search/) rather than secondary indexes for
-a variety of reasons. Most importantly, Riak Search has a far more
+recommend [Riak search](/riak/kv/2.1.1/developing/usage/search/) rather than secondary indexes for
+a variety of reasons. Most importantly, Riak search has a far more
capacious querying API and can be used with all of Riak's storage
backends.
@@ -37,7 +37,7 @@ Secondary indexes can be either a binary or string, such as
`sensor_1_data` or `admin_user` or `click_event`, or an integer, such as
`99` or `141121`.
-[Riak Search](/riak/kv/2.1.1/developing/usage/search/) serves analogous purposes but is quite
+[Riak search](/riak/kv/2.1.1/developing/usage/search/) serves analogous purposes but is quite
different because it parses key/value data itself and builds indexes on
the basis of Solr schemas.
@@ -75,7 +75,7 @@ you to discover them later. Indexing enables you to tag those objects
and find all objects with the same tag in a specified bucket later on.
2i is thus recommended when your use case requires an easy-to-use search
-mechanism that does not require a schema (as does [Riak Search](/riak/kv/2.1.1/using/reference/search/#schemas)) and a basic query interface, i.e. an interface that
+mechanism that does not require a schema (as does [Riak search](/riak/kv/2.1.1/using/reference/search/#schemas)) and a basic query interface, i.e. an interface that
enables an application to tell Riak things like "fetch all objects
tagged with the string `Milwaukee_Bucks`" or "fetch all objects tagged
with numbers between 1500 and 1509."
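Range queries like the one described above are issued over HTTP against the 2i index endpoint. Here is a minimal sketch that builds such a URL; the host and bucket names are placeholders:

```python
def twoi_range_url(host, bucket, index, start, end):
    """Build a 2i integer range query URL. The _int suffix marks an
    integer index; binary indexes use a _bin suffix instead."""
    return f"http://{host}/buckets/{bucket}/index/{index}_int/{start}/{end}"

url = twoi_range_url("localhost:8098", "mybucket", "field1", 1500, 1509)
print(url)
# http://localhost:8098/buckets/mybucket/index/field1_int/1500/1509
```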
@@ -89,7 +89,7 @@ piggybacks off of read-repair.
* If your ring size exceeds 512 partitions, 2i can cause performance
issues in large clusters.
* When you need more than the exact match and range searches that 2i
- supports. If that's the case, we recommend checking out [Riak Search](/riak/kv/2.1.1/developing/usage/search/).
+ supports. If that's the case, we recommend checking out [Riak search](/riak/kv/2.1.1/developing/usage/search/).
* When you want to use composite queries. A query like
`last_name=zezeski AND state=MD` would have to be split into two
queries and the results merged (or it would need to involve
diff --git a/content/riak/kv/2.1.1/introduction.md b/content/riak/kv/2.1.1/introduction.md
index c312e8c08d..02290c809b 100644
--- a/content/riak/kv/2.1.1/introduction.md
+++ b/content/riak/kv/2.1.1/introduction.md
@@ -30,10 +30,10 @@ that all of the new features listed below are optional:
* **Riak Data Types** --- Riak's new CRDT-based [Data Types](/riak/kv/2.1.1/developing/data-types) can
simplify modeling data in Riak, but are only used in buckets
explicitly configured to use them.
-* **Strong Consistency, Riak Security, and the New Riak Search** ---
+* **Strong Consistency, Riak Security, and the New Riak search** ---
These are subsystems in Riak that must be explicitly turned on to
work. If not turned on, they will have no impact on performance.
- Furthermore, the older Riak Search will continue to be included with
+ Furthermore, the older Riak search will continue to be included with
Riak.
* **Security** --- [Authentication and authorization](/riak/kv/2.1.1/using/security/basics) can be enabled
or disabled at any time.
@@ -98,21 +98,21 @@ Brown](https://github.com/russelldb).
## Riak Search 2.0 (codename: Yokozuna)
-Riak Search 2.0 is a complete, top-to-bottom replacement for Riak
-Search, integrating Riak with [Apache Solr](https://lucene.apache.org/solr/)'s full-text search capabilities and supporting Solr's client query APIs.
+Riak search 2.0 is a complete, top-to-bottom replacement for Riak
+search, integrating Riak with [Apache Solr](https://lucene.apache.org/solr/)'s full-text search capabilities and supporting Solr's client query APIs.
#### Relevant Docs
* [Using Search](/riak/kv/2.1.1/developing/usage/search) provides an overview of how to use the new
- Riak Search.
+ Riak search.
* [Search Schema](/riak/kv/2.1.1/developing/usage/search-schemas) shows you how to create and manage custom search
schemas.
* [Search Details](/riak/kv/2.1.1/using/reference/search) provides an in-depth look at the design
- considerations that went into the new Riak Search.
+ considerations that went into the new Riak search.
#### Video
-[Riak Search 2.0](https://www.youtube.com/watch?v=-c1eynVLNMo) by Basho
+[Riak search 2.0](https://www.youtube.com/watch?v=-c1eynVLNMo) by Basho
engineer and documentarian [Eric Redmond](https://github.com/coderoshi).
## Strong Consistency
@@ -145,7 +145,7 @@ check out [part 2](https://www.youtube.com/watch?v=gXJxbhca5Xg).
Riak 2.0 enables you to manage:
* **Authorization** to perform specific tasks, from GETs and PUTs to
-running MapReduce jobs to administering Riak Search.
+running MapReduce jobs to administering Riak search.
* **Authentication** of Riak clients seeking access to Riak.
@@ -317,7 +317,7 @@ another. Incompatibilities are marked with a
-**†** The data indexed by Riak Search can be
+**†** The data indexed by Riak search can be
stored in a strongly consistent fashion, but indexes themselves are
eventually consistent
**‡** If secondary indexes are attached to an
diff --git a/content/riak/kv/2.1.1/learn/concepts/strong-consistency.md b/content/riak/kv/2.1.1/learn/concepts/strong-consistency.md
index 383858112b..a03d3043c9 100644
--- a/content/riak/kv/2.1.1/learn/concepts/strong-consistency.md
+++ b/content/riak/kv/2.1.1/learn/concepts/strong-consistency.md
@@ -20,7 +20,7 @@ aliases:
Please Note:
-Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
Riak was originally designed as an [eventually consistent](/riak/kv/2.1.1/learn/concepts/eventual-consistency) system, fundamentally geared toward providing partition
diff --git a/content/riak/kv/2.1.1/learn/glossary.md b/content/riak/kv/2.1.1/learn/glossary.md
index 3ff2172e22..75915d029f 100644
--- a/content/riak/kv/2.1.1/learn/glossary.md
+++ b/content/riak/kv/2.1.1/learn/glossary.md
@@ -275,7 +275,7 @@ best described as "UNIX pipes for Riak."
## Riak Search
-Riak Search is a distributed, scalable, failure-tolerant, realtime,
+Riak search is a distributed, scalable, failure-tolerant, realtime,
full-text search engine integrating [Apache
Solr](https://lucene.apache.org/solr/) with Riak KV.
diff --git a/content/riak/kv/2.1.1/learn/use-cases.md b/content/riak/kv/2.1.1/learn/use-cases.md
index 3e63422279..6d0cb5117e 100644
--- a/content/riak/kv/2.1.1/learn/use-cases.md
+++ b/content/riak/kv/2.1.1/learn/use-cases.md
@@ -153,7 +153,7 @@ For storing log data from different systems, you could use unique
buckets for each system (e.g. `system1_log_data`, `system2_log_data`,
etc.) and write associated logs to the corresponding buckets. To
analyze that data, you could use Riak's MapReduce system for aggregation
-tasks, such as summing the counts of records for a date or Riak Search
+tasks, such as summing the counts of records for a date or Riak search
for more robust, text-based queries.
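The bucket-per-system layout described above can be sketched as a simple key scheme; the names and timestamp format here are illustrative:

```python
from datetime import datetime, timezone

def log_location(system: str, event_id: str, ts: datetime) -> tuple:
    """Map a log event to a (bucket, key) pair: one bucket per source
    system, with a sortable timestamp prefix in the key."""
    bucket = f"{system}_log_data"
    key = f"{ts.strftime('%Y%m%dT%H%M%S')}-{event_id}"
    return bucket, key

bucket, key = log_location(
    "system1", "e42", datetime(2015, 6, 1, 12, 0, tzinfo=timezone.utc))
print(bucket, key)  # system1_log_data 20150601T120000-e42
```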
### Log Data Complex Case
@@ -237,7 +237,7 @@ For simple retrieval of a specific account, a user ID (plus perhaps a
secondary index on a username or email) is enough. If you foresee the
need to make queries on additional user attributes (e.g. creation time,
user type, or region), plan ahead and either set up additional secondary
-indexes or consider using [Riak Search][usage search] to index the JSON
+indexes or consider using [Riak search][usage search] to index the JSON
contents of the user account.
### User Accounts Community Examples
@@ -323,7 +323,7 @@ In Riak, you can store content of any kind, from HTML files to plain
text to JSON or XML or another document type entirely. Keep in mind that
data in Riak is opaque, with the exception of [Riak Data Types](/riak/kv/2.1.1/developing/data-types),
and so Riak won't "know" about the object unless it is indexed
-[using Riak Search][usage search] or [using secondary indexes][usage secondary-indexes].
+[using Riak search][usage search] or [using secondary indexes][usage secondary-indexes].
### Articles et al Complex Case
@@ -342,7 +342,7 @@ with comments would require your application to call from the posts
and comments buckets to assemble the view.
Other possible cases may involve performing operations on content beyond
-key/value pairs. [Riak Search][usage search] is recommended for use cases
+key/value pairs. [Riak search][usage search] is recommended for use cases
involving full-text search. For lighter-weight querying,
[using secondary indexes][usage secondary-indexes] \(2i) enables you to add metadata to objects to
either query for exact matches or to perform range queries. 2i also
diff --git a/content/riak/kv/2.1.1/setup/installing/source/jvm.md b/content/riak/kv/2.1.1/setup/installing/source/jvm.md
index db44f104bd..681932379b 100644
--- a/content/riak/kv/2.1.1/setup/installing/source/jvm.md
+++ b/content/riak/kv/2.1.1/setup/installing/source/jvm.md
@@ -21,10 +21,10 @@ aliases:
[usage search]: /riak/kv/2.1.1/developing/usage/search
-If you are using [Riak Search 2.0][usage search], codename Yokozuna,
+If you are using [Riak search 2.0][usage search], codename Yokozuna,
you will need to install **Java 1.6 or later** to run [Apache
Solr](https://lucene.apache.org/solr/), the search platform that powers
-Riak Search.
+Riak search.
We recommend using Oracle's [JDK
7u25](http://www.oracle.com/technetwork/java/javase/7u25-relnotes-1955741.html).
@@ -35,7 +35,7 @@ page](http://www.oracle.com/technetwork/java/javase/documentation/index.html).
## Installing Solr on OS X
-If you're using Riak Search on Mac OS X, you may see the following
+If you're using Riak search on Mac OS X, you may see the following
error:
```java
diff --git a/content/riak/kv/2.1.1/setup/planning/backend/bitcask.md b/content/riak/kv/2.1.1/setup/planning/backend/bitcask.md
index 48ec18a9df..69a4cb0825 100644
--- a/content/riak/kv/2.1.1/setup/planning/backend/bitcask.md
+++ b/content/riak/kv/2.1.1/setup/planning/backend/bitcask.md
@@ -751,7 +751,7 @@ bitcask.expiry.grace_time = 1h
#### Automatic expiration and Riak Search
-If you are using [Riak Search][usage search] in conjunction with
+If you are using [Riak search][usage search] in conjunction with
Bitcask, please be aware that automatic expiry does not apply to [Search Indexes](../../../../developing/usage/search). If objects are indexed using Search,
those objects can be expired by Bitcask yet still registered in Search
indexes, which means that Search queries may return keys that no longer
diff --git a/content/riak/kv/2.1.1/setup/upgrading/checklist.md b/content/riak/kv/2.1.1/setup/upgrading/checklist.md
index 4dbc340778..84f3b3b3f6 100644
--- a/content/riak/kv/2.1.1/setup/upgrading/checklist.md
+++ b/content/riak/kv/2.1.1/setup/upgrading/checklist.md
@@ -89,7 +89,7 @@ We have compiled these considerations and questions into separate categories for
place if `allow_mult` is set to `true`?
- Have you carefully weighed the [consistency trade-offs][concept eventual consistency] that must be made if `allow_mult` is set to `false`?
- Are all of your [apps replication properties][apps replication properties] configured correctly and uniformly across the cluster?
- - If you are using [Riak Search][usage search], is it enabled on all
+ - If you are using [Riak search][usage search], is it enabled on all
nodes? If you are not, has it been disabled on all nodes?
- If you are using [strong consistency][concept strong consistency] for some or all of your
data:
diff --git a/content/riak/kv/2.1.1/setup/upgrading/search.md b/content/riak/kv/2.1.1/setup/upgrading/search.md
index b4070e0388..ea03e45b05 100644
--- a/content/riak/kv/2.1.1/setup/upgrading/search.md
+++ b/content/riak/kv/2.1.1/setup/upgrading/search.md
@@ -16,7 +16,7 @@ version_history:
If you're using Search in a version of Riak prior to 2.0 (1.3.0 to
1.4.x), you should follow these steps to migrate your search indexes
-from the legacy `merge_index` to the new Solr-backed ([Yokozuna](../../../using/reference/search) indexes. The legacy version of Riak Search is now deprecated
+from the legacy `merge_index` to the new Solr-backed ([Yokozuna](../../../using/reference/search)) indexes. The legacy version of Riak search is now deprecated
and does not support most new 2.0 features, i.e. no [Riak Data Types](../../../developing/data-types), [bucket types](../../../using/reference/bucket-types), [strong consistency](../../../using/reference/strong-consistency), or [security](../../../using/security/), so we highly recommend that you migrate.
And please note that the legacy `merge_index`-based search (aka legacy
@@ -32,7 +32,7 @@ these steps at a time when your cluster is relatively light on traffic,
i.e. _not_ the week before Christmas.
The main goal of a live migration is to stand up indexes in the new Riak
-Search that parallel the existing ones in legacy. New writes add entries
+search that parallel the existing ones in legacy. New writes add entries
to both indexes while AAE adds entries in the new indexes for existing
data.
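The dual-write behavior during a live migration can be sketched as follows; `DualIndexWriter` is a hypothetical illustration of the concept, not an actual Riak component:

```python
class DualIndexWriter:
    """Sketch of the migration behavior described above: every new
    write lands in both the legacy and the new index, while AAE
    back-fills the new index for pre-existing data."""
    def __init__(self, legacy_index, new_index):
        self.legacy = legacy_index
        self.new = new_index

    def put(self, key, doc):
        self.legacy[key] = doc  # keeps legacy queries working
        self.new[key] = doc     # builds up the parallel new index

legacy, new = {"old1": None}, {}  # 'old1' predates the migration
writer = DualIndexWriter(legacy, new)
writer.put("k1", {"f": 1})
print(sorted(new))  # ['k1']  ('old1' arrives later via AAE)
```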
@@ -72,7 +72,7 @@ algorithm is bad at getting rid of large index files.
## Steps to Upgrading
1. First, you'll perform a normal [rolling upgrade](../cluster).
- As you upgrade, enable `yokozuna` (the new Riak Search library) on
+ As you upgrade, enable `yokozuna` (the new Riak search library) on
each node. If you're still using `app.config` it's called `yokozuna`.
If you've chosen to upgrade to the new `riak.conf` config option, it's
called `search`.
@@ -220,7 +220,7 @@ have occurred on every node.
node you know that AAE has brought all new indexes up to date.
7. Next, call the following command that will give HTTP and PB query
-control to the new Riak Search.
+control to the new Riak search.
```curl
riak-admin search switch-to-new-search
@@ -244,7 +244,7 @@ buckets. This deactivates legacy Search.
-d '{"props":{"search": false}}'
```
-9. Disable the Riak Search process on each node by setting `riak_search`
+9. Disable the Riak search process on each node by setting `riak_search`
`enabled` to `false`.
```appconfig
diff --git a/content/riak/kv/2.1.1/setup/upgrading/version.md b/content/riak/kv/2.1.1/setup/upgrading/version.md
index 5ce5d4a588..bd8306eb04 100644
--- a/content/riak/kv/2.1.1/setup/upgrading/version.md
+++ b/content/riak/kv/2.1.1/setup/upgrading/version.md
@@ -36,7 +36,7 @@ was built with those features in mind. There are official
While we strongly recommend using the newest versions of these clients,
older versions will still work with Riak 2.0, with the drawback that
-those older clients will not able to take advantage of [new features](/riak/kv/2.1.1/introduction) like [data types](/riak/kv/2.1.1/developing/data-types) or the new [Riak Search](/riak/kv/2.1.1/using/reference/search).
+those older clients will not be able to take advantage of [new features](/riak/kv/2.1.1/introduction) like [data types](/riak/kv/2.1.1/developing/data-types) or the new [Riak search](/riak/kv/2.1.1/using/reference/search).
## Bucket Types
@@ -140,7 +140,7 @@ If you decide to upgrade to version 2.0, you can still downgrade your
cluster to an earlier version of Riak if you wish, _unless_ you perform
one of the following actions in your cluster:
-* Index data to be used in conjunction with the new [Riak Search](/riak/kv/2.1.1/using/reference/search).
+* Index data to be used in conjunction with the new [Riak search](/riak/kv/2.1.1/using/reference/search).
* Create _and_ activate one or more [bucket types](/riak/kv/2.1.1/using/reference/bucket-types/). By extension, you will not be able to downgrade your cluster if you have used the following features, both of which rely on bucket types:
- [Strong consistency](/riak/2.1.1/using/reference/strong-consistency)
- [Riak Data Types](/riak/kv/2.1.1/developing/data-types)
@@ -208,7 +208,7 @@ default to a value of `15`, which can cause problems in some clusters.
## Upgrading Search
-Information on upgrading Riak Search to 2.0 can be found in our
+Information on upgrading Riak search to 2.0 can be found in our
[Search upgrade guide](/riak/kv/2.1.1/setup/upgrading/search).
## Migrating from Short Names
diff --git a/content/riak/kv/2.1.1/using/admin/riak-admin.md b/content/riak/kv/2.1.1/using/admin/riak-admin.md
index 06b373cee9..c7d69ada71 100644
--- a/content/riak/kv/2.1.1/using/admin/riak-admin.md
+++ b/content/riak/kv/2.1.1/using/admin/riak-admin.md
@@ -590,7 +590,7 @@ riak-admin repair-2i kill
## search
The search command provides sub-commands for various administrative
-work related to the new Riak Search.
+work related to the new Riak search.
```bash
riak-admin search
@@ -639,7 +639,7 @@ riak-admin search switch-to-new-search
```
Switch handling of the HTTP `/solr//select` resource and
-protocol buffer query messages from legacy Riak Search to new Search
+protocol buffer query messages from legacy Riak search to new Search
(Yokozuna).
## services
diff --git a/content/riak/kv/2.1.1/using/cluster-operations/active-anti-entropy.md b/content/riak/kv/2.1.1/using/cluster-operations/active-anti-entropy.md
index 4d193d72cc..b52283bc0b 100644
--- a/content/riak/kv/2.1.1/using/cluster-operations/active-anti-entropy.md
+++ b/content/riak/kv/2.1.1/using/cluster-operations/active-anti-entropy.md
@@ -262,9 +262,9 @@ AAE-related background tasks, analogous to [open files limit](../../performance/
## AAE and Riak Search
Riak's AAE subsystem works to repair object inconsistencies both
-for normal key/value objects as well as data related to [Riak Search](../../../developing/usage/search). In particular, AAE acts on indexes stored in
+for normal key/value objects as well as data related to [Riak search](../../../developing/usage/search). In particular, AAE acts on indexes stored in
[Solr](http://lucene.apache.org/solr/), the search platform that drives
-Riak Search. Implementation details for AAE and Search can be found in
+Riak search. Implementation details for AAE and Search can be found in
the [Search Details](../../reference/search/#active-anti-entropy-aae)
documentation.
diff --git a/content/riak/kv/2.1.1/using/cluster-operations/inspecting-node.md b/content/riak/kv/2.1.1/using/cluster-operations/inspecting-node.md
index 85fee91c66..74f02a48fd 100644
--- a/content/riak/kv/2.1.1/using/cluster-operations/inspecting-node.md
+++ b/content/riak/kv/2.1.1/using/cluster-operations/inspecting-node.md
@@ -297,7 +297,7 @@ Stat | Description
`erlydtl_version` | [ErlyDTL](http://github.com/erlydtl/erlydtl)
`riak_control_version` | [Riak Control](http://github.com/basho/riak_control)
`cluster_info_version` | [Cluster Information](http://github.com/basho/cluster_info)
-`riak_search_version` | [Riak Search](http://github.com/basho/riak_search)
+`riak_search_version` | [Riak search](http://github.com/basho/riak_search)
`merge_index_version` | [Merge Index](http://github.com/basho/merge_index)
`riak_kv_version` | [Riak KV](http://github.com/basho/riak_kv)
`sidejob_version` | [Sidejob](http://github.com/basho/sidejob)
@@ -326,17 +326,17 @@ Stat | Description
### Riak Search Statistics
-The following statistics related to Riak Search message queues are
+The following statistics related to Riak search message queues are
available.
Stat | Description
-----------------------------|---------------------------------------------------
-`riak_search_vnodeq_max` | Maximum number of unprocessed messages all virtual node (vnode) message queues in the Riak Search subsystem have received on this node in the last minute
-`riak_search_vnodeq_mean` | Mean number of unprocessed messages all vnode message queues in the Riak Search subsystem have received on this node in the last minute
-`riak_search_vnodeq_median` | Median number of unprocessed messages all vnode message queues in the Riak Search subsystem have received on this node in the last minute
-`riak_search_vnodeq_min` | Minimum number of unprocessed messages all vnode message queues in the Riak Search subsystem have received on this node in the last minute
-`riak_search_vnodeq_total` | Total number of unprocessed messages all vnode message queues in the Riak Search subsystem have received on this node since it was started
-`riak_search_vnodes_running` | Total number of vnodes currently running in the Riak Search subsystem
+`riak_search_vnodeq_max` | Maximum number of unprocessed messages all virtual node (vnode) message queues in the Riak search subsystem have received on this node in the last minute
+`riak_search_vnodeq_mean` | Mean number of unprocessed messages all vnode message queues in the Riak search subsystem have received on this node in the last minute
+`riak_search_vnodeq_median` | Median number of unprocessed messages all vnode message queues in the Riak search subsystem have received on this node in the last minute
+`riak_search_vnodeq_min` | Minimum number of unprocessed messages all vnode message queues in the Riak search subsystem have received on this node in the last minute
+`riak_search_vnodeq_total` | Total number of unprocessed messages all vnode message queues in the Riak search subsystem have received on this node since it was started
+`riak_search_vnodes_running` | Total number of vnodes currently running in the Riak search subsystem
Note that under ideal operation and with the exception of
`riak_search_vnodes_running` these statistics should contain low values
@@ -449,7 +449,7 @@ Check | Description
`ring_membership` | Cluster membership validity
`ring_preflists` | Check if the ring satisfies `n_val`
`ring_size` | Check if the ring size valid
-`search` | Check whether Riak Search is enabled on all nodes
+`search` | Check whether Riak search is enabled on all nodes
The `--level` flag enables you to specify the log level and thus to
filter messages based on type. You can pass in any of the message types
diff --git a/content/riak/kv/2.1.1/using/cluster-operations/strong-consistency.md b/content/riak/kv/2.1.1/using/cluster-operations/strong-consistency.md
index c6ca114763..6cc9887b2e 100644
--- a/content/riak/kv/2.1.1/using/cluster-operations/strong-consistency.md
+++ b/content/riak/kv/2.1.1/using/cluster-operations/strong-consistency.md
@@ -14,7 +14,7 @@ toc: true
Please Note:
-Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
## Monitoring Strong Consistency
diff --git a/content/riak/kv/2.1.1/using/reference/handoff.md b/content/riak/kv/2.1.1/using/reference/handoff.md
index c6114c515b..b58031cc7c 100644
--- a/content/riak/kv/2.1.1/using/reference/handoff.md
+++ b/content/riak/kv/2.1.1/using/reference/handoff.md
@@ -121,7 +121,7 @@ handoff.use_background_manager = on
### Maximum Rejects
-If you're using Riak features such as [Riak Search](/riak/kv/2.1.1/developing/usage/search/),
+If you're using Riak features such as [Riak search](/riak/kv/2.1.1/developing/usage/search/),
those subsystems can block handoff of primary key/value data, i.e. data
that you interact with via normal reads and writes.
diff --git a/content/riak/kv/2.1.1/using/reference/search.md b/content/riak/kv/2.1.1/using/reference/search.md
index 5091de0c42..c08a66eee9 100644
--- a/content/riak/kv/2.1.1/using/reference/search.md
+++ b/content/riak/kv/2.1.1/using/reference/search.md
@@ -19,14 +19,14 @@ aliases:
> **Note on Search 2.0 vs. Legacy Search**
>
-> This document refers to the new Riak Search 2.0 with
+> This document refers to the new Riak search 2.0 with
[Solr](http://lucene.apache.org/solr/) integration (codenamed
-Yokozuna). For information about the deprecated Riak Search, visit [the old Using Riak Search docs](http://docs.basho.com/riak/1.4.10/dev/using/search/).
+Yokozuna). For information about the deprecated Riak search, visit [the old Using Riak search docs](http://docs.basho.com/riak/1.4.10/dev/using/search/).
-The project that implements Riak Search is codenamed Yokozuna. This is a
+The project that implements Riak search is codenamed Yokozuna. This is a
more detailed overview of the concepts and reasons behind the design of
Yokozuna, for those interested. If you're simply looking to use Riak
-Search, you should check out the [Using Search](/riak/kv/2.1.1/developing/usage/search) document.
+search, you should check out the [Using Search](/riak/kv/2.1.1/developing/usage/search) document.

@@ -35,30 +35,30 @@ Search, you should check out the [Using Search](/riak/kv/2.1.1/developing/usage/
In Erlang OTP, an "application" is a group of modules and Erlang
processes which together perform a specific task. The word application
is confusing because most people think of an application as an entire
-program such as Emacs or Photoshop. But Riak Search is just a sub-system
-program such as Emacs or Photoshop. But Riak search is just a subsystem
in Riak itself. Erlang applications are often stand-alone, but Riak
-Search is more like an appendage of Riak. It requires other subsystems
+search is more like an appendage of Riak. It requires other subsystems
like Riak Core and KV, but also extends their functionality by providing
search capabilities for KV data.
-The purpose of Riak Search is to bring more sophisticated and robust
+The purpose of Riak search is to bring more sophisticated and robust
query and search support to Riak. Many people consider Lucene and
programs built on top of it, such as Solr, as the standard for
open-source search. There are many successful applications built on
Lucene/Solr, and it sets the standard for the feature set that
developers and users expect. Meanwhile, Riak has a great story as a
-highly-available, distributed key/value store. Riak Search takes
+highly-available, distributed key/value store. Riak search takes
advantage of the fact that Riak already knows how to do the distributed
bits, combining its feature set with that of Solr, taking advantage of
the strengths of each.
-Riak Search is a mediator between Riak and Solr. There is nothing
+Riak search is a mediator between Riak and Solr. There is nothing
stopping a user from deploying these two programs separately, but this
would leave the user responsible for the glue between them. That glue
can be tricky to write. It requires dealing with monitoring, querying,
indexing, and dissemination of information.
-Unlike Solr by itself, Riak Search knows how to do all of the following:
+Unlike Solr by itself, Riak search knows how to do all of the following:
* Listen for changes in key/value (KV) data and to make the appropriate
changes to indexes that live in Solr. It also knows how to take a user
@@ -75,13 +75,13 @@ system (OS) process running a JVM which hosts Solr on the Jetty
application server. This OS process is a child of the Erlang OS process
running Riak.
-Riak Search has a `gen_server` process which monitors the JVM OS
+Riak search has a `gen_server` process which monitors the JVM OS
process. The code for this server is in `yz_solr_proc`. When the JVM
process crashes, this server crashes, causing its supervisor to restart
it.
If there is more than 1 restart in 45 seconds, the entire Riak node will
-be shut down. If Riak Search is enabled and Solr cannot function for
+be shut down. If Riak search is enabled and Solr cannot function for
some reason, the Riak node needs to go down so that the user will notice
and take corrective action.
@@ -91,7 +91,7 @@ This double monitoring along with the crash semantics means that neither
process may exist without the other. They are either both up or both
down.
-All other communication between Riak Search and Solr is performed via
+All other communication between Riak search and Solr is performed via
HTTP, including querying, indexing, and administration commands. The
ibrowse Erlang HTTP client is used to manage these communications as
both it and the Jetty container hosting Solr pool HTTP connections,
@@ -105,11 +105,11 @@ contains index entries for objects. Each such index maintains its own
set of files on disk---a critical difference from Riak KV, in which a
bucket is a purely logical entity and not physically disjoint at all. A
Solr index requires significantly less disk space than the corresponding
-legacy Riak Search index, depending on the Solr schema used.
+legacy Riak search index, depending on the Solr schema used.
Indexes may be associated with zero or more buckets. At creation time,
however, each index has no associated buckets---unlike the legacy Riak
-Search, indexes in the new Riak Search do not implicitly create bucket
+search, indexes in the new Riak search do not implicitly create bucket
associations, meaning that this must be done as a separate configuration
step.
@@ -141,7 +141,7 @@ flat collection of field-value pairs. "Flat" here means that a field's
value cannot be a nested structure of field-value pairs; the values are
treated as-is (non-composite is another way to say it).
-Because of this mismatch between KV and Solr, Riak Search must act as a
+Because of this mismatch between KV and Solr, Riak search must act as a
mediator between the two, meaning it must have a way to inspect a KV
object and create a structure which Solr can ingest for indexing. In
Solr this structure is called a **document**. This task of creating a
diff --git a/content/riak/kv/2.1.1/using/reference/secondary-indexes.md b/content/riak/kv/2.1.1/using/reference/secondary-indexes.md
index 8c496c488a..fbf41d1e20 100644
--- a/content/riak/kv/2.1.1/using/reference/secondary-indexes.md
+++ b/content/riak/kv/2.1.1/using/reference/secondary-indexes.md
@@ -18,11 +18,11 @@ aliases:
[usage bucket types]: /riak/kv/2.1.1/developing/usage/bucket-types
[use ref strong consistency]: /riak/2.1.3/using/reference/strong-consistency
-> **Note: Riak Search preferred for querying**
+> **Note: Riak search preferred for querying**
>
> If you're interested in non-primary-key-based querying in Riak, i.e. if
you're looking to go beyond straightforward K/V operations, we now
-recommend [Riak Search](/riak/kv/2.1.1/developing/usage/search/) rather than secondary indexes for a variety of reasons. Riak Search has a far more capacious querying API and can be used with all of Riak's storage backends.
+recommend [Riak search](/riak/kv/2.1.1/developing/usage/search/) rather than secondary indexes for a variety of reasons. Riak search has a far more capacious querying API and can be used with all of Riak's storage backends.
This document provides implementation and other details for Riak's
[secondary indexes](/riak/kv/2.1.1/developing/usage/secondary-indexes/) \(2i) feature.
diff --git a/content/riak/kv/2.1.1/using/reference/statistics-monitoring.md b/content/riak/kv/2.1.1/using/reference/statistics-monitoring.md
index 8b0fc49344..4af0d7fc47 100644
--- a/content/riak/kv/2.1.1/using/reference/statistics-monitoring.md
+++ b/content/riak/kv/2.1.1/using/reference/statistics-monitoring.md
@@ -134,7 +134,7 @@ Metric | Also | Notes
:------|:-----|:------------------
```node_get_fsm_siblings_mean``` | ```_median```, ```_95```, ```_99```, ```_100``` | Number of siblings encountered during all GET operations by this node within the last minute. Watch for abnormally high sibling counts, especially max ones.
```node_get_fsm_objsize_mean``` | ```_median```, ```_95```, ```_99```, ```_100``` | Object size encountered by this node within the last minute. Abnormally large objects (especially paired with high sibling counts) can indicate sibling explosion.
-```riak_search_vnodeq_mean``` | ```_median```, ```_95```, ```_99```, ```_100``` | Number of unprocessed messages in the vnode message queues of the Riak Search subsystem on this node in the last minute. The queues give you an idea of how backed up Solr is getting.
+```riak_search_vnodeq_mean``` | ```_median```, ```_95```, ```_99```, ```_100``` | Number of unprocessed messages in the vnode message queues of the Riak search subsystem on this node in the last minute. The queues give you an idea of how backed up Solr is getting.
```search_index_fail_one``` | | Number of "Failed to index document" errors Search encountered for the last minute
```pbc_active``` | | Number of currently active protocol buffer connections
```pbc_connects``` | | Number of new protocol buffer connections established during the last minute
diff --git a/content/riak/kv/2.1.1/using/repair-recovery/errors.md b/content/riak/kv/2.1.1/using/repair-recovery/errors.md
index 67d1dd4a36..076baa516b 100644
--- a/content/riak/kv/2.1.1/using/repair-recovery/errors.md
+++ b/content/riak/kv/2.1.1/using/repair-recovery/errors.md
@@ -328,7 +328,7 @@ exit with reason bad return value: {error,eaddrinuse} in context start_error | A
exited with reason: eaddrnotavail in gen_server:init_it/6 line 320 | An error like this example can result when Riak cannot bind to the addresses specified in the configuration. In this case, you should verify HTTP and Protocol Buffers addresses in `app.config` and ensure that the ports being used are not in the privileged (1-1024) range as the `riak` user will not have access to such ports.
gen_server riak_core_capability terminated with reason: no function clause matching orddict:fetch('riak@192.168.2.2', []) line 72 | Error output like this example can indicate that a previously running Riak node with an original `-name` value in `vm.args` has been modified by simply changing the value in `vm.args` and not properly through `riak-admin cluster replace`.
** Configuration error: [FRAMEWORK-MIB]: missing context.conf file => generating a default file | This error is commonly encountered when starting Riak Enterprise without prior [SNMP](/riak/kv/2.1.1/using/reference/snmp) configuration.
-RPC to 'node@example.com' failed: {'EXIT', {badarg, [{ets,lookup, [schema_table,<<"search-example">>], []} {riak_search_config,get_schema,1, [{file,"src/riak_search_config.erl"}, {line,69}]} ...| This error can be caused when attempting to use Riak Search without first enabling it in each node's `app.config`. See the [configuration files][config reference] documentation for more information on enabling Riak Search.
+RPC to 'node@example.com' failed: {'EXIT', {badarg, [{ets,lookup, [schema_table,<<"search-example">>], []} {riak_search_config,get_schema,1, [{file,"src/riak_search_config.erl"}, {line,69}]} ...| This error can be caused when attempting to use Riak search without first enabling it in each node's `app.config`. See the [configuration files][config reference] documentation for more information on enabling Riak search.
### More
diff --git a/content/riak/kv/2.1.1/using/repair-recovery/repairs.md b/content/riak/kv/2.1.1/using/repair-recovery/repairs.md
index 8e426cac1b..124d363d89 100644
--- a/content/riak/kv/2.1.1/using/repair-recovery/repairs.md
+++ b/content/riak/kv/2.1.1/using/repair-recovery/repairs.md
@@ -57,7 +57,7 @@ riak-admin repair-2i kill
## Repairing Search Indexes
-Riak Search indexes currently have no form of anti-entropy (such as read-repair). Furthermore, for performance and load balancing reasons, Search reads from one random node. This means that when a replica loss has occurred, inconsistent results may be returned.
+Riak search indexes currently have no form of anti-entropy (such as read-repair). Furthermore, for performance and load balancing reasons, Search reads from one random node. This means that when a replica loss has occurred, inconsistent results may be returned.
### Running a Repair
diff --git a/content/riak/kv/2.1.1/using/repair-recovery/secondary-indexes.md b/content/riak/kv/2.1.1/using/repair-recovery/secondary-indexes.md
index 6a81bea36e..280f707a96 100644
--- a/content/riak/kv/2.1.1/using/repair-recovery/secondary-indexes.md
+++ b/content/riak/kv/2.1.1/using/repair-recovery/secondary-indexes.md
@@ -51,7 +51,7 @@ riak-admin repair-2i kill
## Repairing Search Indexes
-Riak Search indexes currently have no form of anti-entropy (such as read-repair). Furthermore, for performance and load balancing reasons, Search reads from one random node. This means that when a replica loss has occurred, inconsistent results may be returned.
+Riak search indexes currently have no form of anti-entropy (such as read-repair). Furthermore, for performance and load balancing reasons, Search reads from one random node. This means that when a replica loss has occurred, inconsistent results may be returned.
### Running a Repair
diff --git a/content/riak/kv/2.1.1/using/security/basics.md b/content/riak/kv/2.1.1/using/security/basics.md
index c903f8b65f..ce2dabd422 100644
--- a/content/riak/kv/2.1.1/using/security/basics.md
+++ b/content/riak/kv/2.1.1/using/security/basics.md
@@ -44,7 +44,7 @@ when turning on Riak security. Missing one of these steps will almost
certainly break your application, so make sure that you have done each
of the following **before** enabling security:
-1. Make certain that the original Riak Search (version 1) and link
+1. Make certain that the original Riak search (version 1) and link
walking are not required. Enabling security will break this
functionality. If you wish to use security and Search together, you
will need to use the [new Search feature](/riak/kv/2.1.1/developing/usage/search/).
@@ -487,11 +487,11 @@ Permission | Operation
### Search Query Permission (Riak Search version 1)
Security is incompatible with the original (and now deprecated) Riak
-Search. Riak Search version 1 will stop working if security is enabled.
+search. Riak search version 1 will stop working if security is enabled.
### Search Query Permissions (Riak Search version 2, aka Yokozuna)
-If you are using the new Riak Search, i.e. the Solr-compatible search
+If you are using the new Riak search, i.e. the Solr-compatible search
capabilities included with Riak versions 2.0 and greater, the following
search-related permissions can be granted/revoked:
@@ -508,8 +508,8 @@ disabled, you will get the following error:
>
> `{error,{unknown_permission,"search.query"}}`
>
-> More information on Riak Search and how to enable it can be found in the
-[Riak Search Settings](/riak/kv/2.1.1/configuring/search/) document.
+> More information on Riak search and how to enable it can be found in the
+[Riak search Settings](/riak/kv/2.1.1/configuring/search/) document.
#### Usage Examples
diff --git a/content/riak/kv/2.1.3/configuring/reference.md b/content/riak/kv/2.1.3/configuring/reference.md
index cb7189a599..fa91c557c4 100644
--- a/content/riak/kv/2.1.3/configuring/reference.md
+++ b/content/riak/kv/2.1.3/configuring/reference.md
@@ -273,7 +273,7 @@ stored
search.root_dir |
-The root directory for Riak Search, under which index data and
+The root directory for Riak search, under which index data and
configuration is stored. |
./data/yz |
@@ -1386,7 +1386,7 @@ Configurable parameters for intra-cluster, i.e. inter-node, [handoff][cluster op
handoff.max_rejects |
The maximum number of times that a secondary system within Riak,
-such as Riak Search, can block handoff
+such as Riak search, can block handoff
of primary key/value data. The approximate maximum duration that a vnode
can be blocked can be determined by multiplying this setting by
vnode_management_timer . If you want to prevent handoff from
@@ -1630,7 +1630,7 @@ if the JMX server crashes. |
> **Please Note:**
>
-> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
Riak's strong consistency feature has a variety of tunable parameters
that allow you to enable and disable strong consistency, modify the
@@ -2053,7 +2053,7 @@ only in Riak Enterprise 2.0 and later.
#### Upgrading Riak Search with `advanced.config`
-If you are upgrading to Riak 2.x and wish to upgrade to the new [Riak Search][use ref search]\(codename Yokozuna), you will need to enable
+If you are upgrading to Riak 2.x and wish to upgrade to the new [Riak search][use ref search] \(codename Yokozuna), you will need to enable
legacy Search while the upgrade is underway. You can add the following
snippet to your `advanced.config` configuration to do so:
diff --git a/content/riak/kv/2.1.3/configuring/search.md b/content/riak/kv/2.1.3/configuring/search.md
index 09b641b375..d140203c43 100644
--- a/content/riak/kv/2.1.3/configuring/search.md
+++ b/content/riak/kv/2.1.3/configuring/search.md
@@ -26,9 +26,9 @@ aliases:
> **Note on Search 2.0 vs. Legacy Search**
>
-> This document refers to the new Riak Search 2.0 with
+> This document refers to the new Riak search 2.0 with
[Solr](http://lucene.apache.org/solr/) integration (codenamed
-Yokozuna). For information about the deprecated Riak Search, visit [the old Riak Search Settings](http://docs.basho.com/riak/1.4.8/ops/advanced/configs/search/).
+Yokozuna). For information about the deprecated Riak search, visit [the old Riak search Settings](http://docs.basho.com/riak/1.4.8/ops/advanced/configs/search/).
This document covers Riak's Search subsystem from an
operational perspective. If you are looking for more developer-focused
@@ -41,7 +41,7 @@ docs, we recommend the following:
## Enabling Riak Search
-Although Riak Search is integrated into Riak and requires no special
+Although Riak search is integrated into Riak and requires no special
installation, it is not enabled by default. You must enable it in every
node's [configuration files][config reference] as follows:
@@ -68,7 +68,7 @@ optional. A list of these parameters can also be found in our
Field | Default | Valid values | Description
:-----|:--------|:-------------|:-----------
`search` | `off` | `on` or `off` | Enable or disable Search
-`search.anti_entropy.data_dir` | `./data/yz_anti_entropy` | Directory | The directory in which Riak Search stores files related to [active anti-entropy][glossary aae]
+`search.anti_entropy.data_dir` | `./data/yz_anti_entropy` | Directory | The directory in which Riak search stores files related to [active anti-entropy][glossary aae]
`search.root_dir` | `./data/yz` | Directory | The root directory in which index data and configuration is stored
`search.solr.start_timeout` | `30s` | Integer with time units (eg. 2m) | How long Riak will wait for Solr to start (attempts twice before shutdown). Values lower than 1s will be rounded up to 1s.
`search.solr.port` | `8093` | Integer | The port number to which Solr binds (note: binds on every interface)
@@ -81,7 +81,7 @@ cause Solr to require more time to start.
## Solr JVM and Ports
-Riak Search runs one Solr process per node to manage its indexing and
+Riak search runs one Solr process per node to manage its indexing and
search functionality. While the underlying project, Yokozuna, manages
index distribution, node coverage for queries, active anti-entropy
(AAE), and JVM process management, you should provide plenty of RAM and diskspace for running both Riak and the JVM running Solr. We recommend a minimum of 6GB of RAM per node.
diff --git a/content/riak/kv/2.1.3/configuring/strong-consistency.md b/content/riak/kv/2.1.3/configuring/strong-consistency.md
index 50732dc429..12de11ae55 100644
--- a/content/riak/kv/2.1.3/configuring/strong-consistency.md
+++ b/content/riak/kv/2.1.3/configuring/strong-consistency.md
@@ -38,7 +38,7 @@ toc: true
> **Please Note:**
>
-> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
This document provides information on configuring and monitoring a Riak
cluster's optional strong consistency subsystem. Documentation for
diff --git a/content/riak/kv/2.1.3/developing/api/http/delete-search-index.md b/content/riak/kv/2.1.3/developing/api/http/delete-search-index.md
index 95dcfed1f3..c93b5f4cf7 100644
--- a/content/riak/kv/2.1.3/developing/api/http/delete-search-index.md
+++ b/content/riak/kv/2.1.3/developing/api/http/delete-search-index.md
@@ -15,7 +15,7 @@ aliases:
- /riak/kv/2.1.3/dev/references/http/delete-search-index
---
-Deletes a Riak Search index.
+Deletes a Riak search index.
## Request
diff --git a/content/riak/kv/2.1.3/developing/api/http/fetch-search-index.md b/content/riak/kv/2.1.3/developing/api/http/fetch-search-index.md
index 19ad293ade..6256467825 100644
--- a/content/riak/kv/2.1.3/developing/api/http/fetch-search-index.md
+++ b/content/riak/kv/2.1.3/developing/api/http/fetch-search-index.md
@@ -15,7 +15,7 @@ aliases:
- /riak/kv/2.1.3/dev/references/http/fetch-search-index
---
-Retrieves information about a Riak Search [index](/riak/kv/2.1.3/developing/usage/search/#simple-setup).
+Retrieves information about a Riak search [index](/riak/kv/2.1.3/developing/usage/search/#simple-setup).
## Request
diff --git a/content/riak/kv/2.1.3/developing/api/http/fetch-search-schema.md b/content/riak/kv/2.1.3/developing/api/http/fetch-search-schema.md
index 334457d40a..70bc649e0b 100644
--- a/content/riak/kv/2.1.3/developing/api/http/fetch-search-schema.md
+++ b/content/riak/kv/2.1.3/developing/api/http/fetch-search-schema.md
@@ -35,4 +35,4 @@ GET /search/schema/
## Response
If the schema is found, Riak will return the contents of the schema as
-XML (all Riak Search schemas are XML).
+XML (all Riak search schemas are XML).
diff --git a/content/riak/kv/2.1.3/developing/api/http/search-index-info.md b/content/riak/kv/2.1.3/developing/api/http/search-index-info.md
index 86111eda78..0f42fa0dc4 100644
--- a/content/riak/kv/2.1.3/developing/api/http/search-index-info.md
+++ b/content/riak/kv/2.1.3/developing/api/http/search-index-info.md
@@ -47,6 +47,6 @@ Below is the example output if there is one Search index, called
#### Typical Error Codes
-* `404 Object Not Found` --- Typically returned if Riak Search is not
+* `404 Object Not Found` --- Typically returned if Riak search is not
currently enabled on the node
* `503 Service Unavailable` --- The request timed out internally
diff --git a/content/riak/kv/2.1.3/developing/api/http/store-search-index.md b/content/riak/kv/2.1.3/developing/api/http/store-search-index.md
index 13fd8ddec9..ab46a073f4 100644
--- a/content/riak/kv/2.1.3/developing/api/http/store-search-index.md
+++ b/content/riak/kv/2.1.3/developing/api/http/store-search-index.md
@@ -15,7 +15,7 @@ aliases:
- /riak/kv/2.1.3/dev/references/http/store-search-index
---
-Creates a new Riak Search [index](/riak/kv/2.1.3/developing/usage/search/#simple-setup).
+Creates a new Riak search [index](/riak/kv/2.1.3/developing/usage/search/#simple-setup).
## Request
diff --git a/content/riak/kv/2.1.3/developing/api/http/store-search-schema.md b/content/riak/kv/2.1.3/developing/api/http/store-search-schema.md
index a64a531c19..f4ba71899d 100644
--- a/content/riak/kv/2.1.3/developing/api/http/store-search-schema.md
+++ b/content/riak/kv/2.1.3/developing/api/http/store-search-schema.md
@@ -44,7 +44,7 @@ curl -XPUT http://localhost:8098/search/schema/my_custom_schema \
* `400 Bad Request` --- The schema cannot be created because there is
something wrong with the schema itself, e.g. an XML formatting error
- that makes Riak Search unable to parse the schema
+ that makes Riak search unable to parse the schema
* `409 Conflict` --- The schema cannot be created because there is
already a schema with that name
* `503 Service Unavailable` --- The request timed out internally
diff --git a/content/riak/kv/2.1.3/developing/api/protocol-buffers/yz-index-get.md b/content/riak/kv/2.1.3/developing/api/protocol-buffers/yz-index-get.md
index b946e5c7a0..766c448278 100644
--- a/content/riak/kv/2.1.3/developing/api/protocol-buffers/yz-index-get.md
+++ b/content/riak/kv/2.1.3/developing/api/protocol-buffers/yz-index-get.md
@@ -15,7 +15,7 @@ aliases:
- /riak/kv/2.1.3/dev/references/protocol-buffers/yz-index-get
---
-Retrieve a search index from Riak Search.
+Retrieve a search index from Riak search.
## Request
diff --git a/content/riak/kv/2.1.3/developing/api/protocol-buffers/yz-schema-get.md b/content/riak/kv/2.1.3/developing/api/protocol-buffers/yz-schema-get.md
index 0d9a7856c7..bb09d51dd5 100644
--- a/content/riak/kv/2.1.3/developing/api/protocol-buffers/yz-schema-get.md
+++ b/content/riak/kv/2.1.3/developing/api/protocol-buffers/yz-schema-get.md
@@ -15,7 +15,7 @@ aliases:
- /riak/kv/2.1.3/dev/references/protocol-buffers/yz-schema-get
---
-Fetch a [search schema](/riak/kv/2.1.3/developing/usage/search-schemas) from Riak Search.
+Fetch a [search schema](/riak/kv/2.1.3/developing/usage/search-schemas) from Riak search.
## Request
diff --git a/content/riak/kv/2.1.3/developing/app-guide.md b/content/riak/kv/2.1.3/developing/app-guide.md
index 45af6fd1f9..4bd8b37ee4 100644
--- a/content/riak/kv/2.1.3/developing/app-guide.md
+++ b/content/riak/kv/2.1.3/developing/app-guide.md
@@ -149,22 +149,22 @@ well as relevant links to Basho documentation.
## Search
-Riak Search provides you with [Apache
+Riak search provides you with [Apache
Solr](http://lucene.apache.org/solr/)-powered full-text indexing and
querying on top of the scalability, fault tolerance, and operational
-simplicity of Riak. Our motto for Riak Search: **Write it like Riak.
+simplicity of Riak. Our motto for Riak search: **Write it like Riak.
Query it like Solr**. That is, you can store objects in Riak [like normal][usage create objects] and run full-text queries on those objects later on
using the Solr API.
-* [Using Search][usage search] --- Getting started with Riak Search
+* [Using Search][usage search] --- Getting started with Riak search
* [Search Details][use ref search] --- A detailed overview of the concepts and design
- consideration behind Riak Search
+ considerations behind Riak search
* [Search Schema][usage search schema] --- How to create custom schemas for extracting data
- from Riak Search
+ from Riak search
### When to Use Search
-* **When you need a rich querying API** --- Riak Search gives you access
+* **When you need a rich querying API** --- Riak search gives you access
to the entirety of [Solr](http://lucene.apache.org/solr/)'s extremely
broad API, which enables you to query on the basis of wildcards,
strings, booleans, geolocation, ranges, language-specific fulltext,
@@ -216,7 +216,7 @@ own.
> **Note**:
>
-> Riak Data Types can be used in conjunction with Riak Search,
+> Riak Data Types can be used in conjunction with Riak search,
meaning that the data stored in counters, sets, and maps can be indexed
and searched just like any other data in Riak. Documentation on Data
Types and Search is coming soon.
@@ -284,13 +284,13 @@ or you can write and run your own MapReduce jobs in
### When Not to Use MapReduce
* **When another Riak feature will do** --- Before even considering
- using MapReduce, you should thoroughly investigate [Riak Search][usage search] or [secondary indexes][usage 2i] as possible
+ using MapReduce, you should thoroughly investigate [Riak search][usage search] or [secondary indexes][usage 2i] as possible
solutions to your needs.
In general, you should not think of MapReduce as, for example, Hadoop
within Riak. While it can be useful for certain types of
non-primary-key-based queries, it is neither a "Big Data" processing
-tool nor an indexing mechanism nor a replacement for [Riak Search][usage search]. If you do need a tool like Hadoop or Apache Spark, you should
+tool nor an indexing mechanism nor a replacement for [Riak search][usage search]. If you do need a tool like Hadoop or Apache Spark, you should
consider using Riak in conjunction with a more suitable data processing
tool.
@@ -301,7 +301,7 @@ following problem: how do I know which keys I should look for? Secondary
indexes (2i) provide a solution to this problem, enabling you to tag
objects with either binary or integer metadata and then query Riak for
all of the keys that share specific tags. 2i is especially useful if
-you're storing binary data that is opaque to features like [Riak Search][usage search].
+you're storing binary data that is opaque to features like [Riak search][usage search].
* [Using Secondary Indexes][usage 2i] --- A general guide to using 2i, along
with code samples and information on 2i features like pagination,
@@ -329,7 +329,7 @@ you're storing binary data that is opaque to features like [Riak Search][usage s
One thing to always bear in mind is that Riak enables you to mix and
match a wide variety of approaches in a single cluster. You can use
basic CRUD operations for some of your data, index some of your data to
-be queried by Riak Search, use Riak Data Types for another subset, etc.
+be queried by Riak search, use Riak Data Types for another subset, etc.
You are always free to use a wide array of Riak features---or you can
use none at all and stick to key/value operations.
diff --git a/content/riak/kv/2.1.3/developing/app-guide/advanced-mapreduce.md b/content/riak/kv/2.1.3/developing/app-guide/advanced-mapreduce.md
index b1df8d6317..b522847f8b 100644
--- a/content/riak/kv/2.1.3/developing/app-guide/advanced-mapreduce.md
+++ b/content/riak/kv/2.1.3/developing/app-guide/advanced-mapreduce.md
@@ -74,7 +74,7 @@ MapReduce should generally be treated as a fallback rather than a
standard part of an application. There are often ways to model data
such that dynamic queries become single key retrievals, which are
dramatically faster and more reliable in Riak, and tools such as Riak
-Search and 2i are simpler to use and may place less strain on a
+search and 2i are simpler to use and may place less strain on a
cluster.
### R=1
diff --git a/content/riak/kv/2.1.3/developing/app-guide/strong-consistency.md b/content/riak/kv/2.1.3/developing/app-guide/strong-consistency.md
index 36f71ef1f2..3e1919cfd8 100644
--- a/content/riak/kv/2.1.3/developing/app-guide/strong-consistency.md
+++ b/content/riak/kv/2.1.3/developing/app-guide/strong-consistency.md
@@ -37,7 +37,7 @@ aliases:
> **Please Note:**
>
-> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
In versions 2.0 and later, Riak allows you to create buckets that
provide [strong consistency][use ref strong consistency] guarantees for the data stored within
diff --git a/content/riak/kv/2.1.3/developing/data-modeling.md b/content/riak/kv/2.1.3/developing/data-modeling.md
index 510d882540..705d77d344 100644
--- a/content/riak/kv/2.1.3/developing/data-modeling.md
+++ b/content/riak/kv/2.1.3/developing/data-modeling.md
@@ -141,7 +141,7 @@ For storing log data from different systems, you could use unique
buckets for each system (e.g. `system1_log_data`, `system2_log_data`,
etc.) and write associated logs to the corresponding buckets. To
analyze that data, you could use Riak's MapReduce system for aggregation
-tasks, such as summing the counts of records for a date or Riak Search
+tasks, such as summing the counts of records for a date or Riak search
for more robust, text-based queries.
### Log Data Complex Case
@@ -225,7 +225,7 @@ For simple retrieval of a specific account, a user ID (plus perhaps a
secondary index on a username or email) is enough. If you foresee the
need to make queries on additional user attributes (e.g. creation time,
user type, or region), plan ahead and either set up additional secondary
-indexes or consider using [Riak Search](/riak/kv/2.1.3/developing/usage/search/) to index the JSON
+indexes or consider using [Riak search](/riak/kv/2.1.3/developing/usage/search/) to index the JSON
contents of the user account.
### User Accounts Community Examples
@@ -311,7 +311,7 @@ In Riak, you can store content of any kind, from HTML files to plain
text to JSON or XML or another document type entirely. Keep in mind that
data in Riak is opaque, with the exception of [Riak Data Types](/riak/kv/2.1.3/developing/data-types),
and so Riak won't "know" about the object unless it is indexed
-[using Riak Search](/riak/kv/2.1.3/developing/usage/search/) or [using secondary indexes](/riak/kv/2.1.3/developing/usage/secondary-indexes/).
+[using Riak search](/riak/kv/2.1.3/developing/usage/search/) or [using secondary indexes](/riak/kv/2.1.3/developing/usage/secondary-indexes/).
### Articles et al Complex Case
@@ -330,7 +330,7 @@ with comments would require your application to call from the posts
and comments buckets to assemble the view.
Other possible cases may involve performing operations on content beyond
-key/value pairs. [Riak Search](/riak/kv/2.1.3/developing/usage/search/) is recommended for use cases
+key/value pairs. [Riak search](/riak/kv/2.1.3/developing/usage/search/) is recommended for use cases
involving full-text search. For lighter-weight querying,
[using secondary indexes](/riak/kv/2.1.3/developing/usage/secondary-indexes/) \(2i) enables you to add metadata to objects to
either query for exact matches or to perform range queries. 2i also
diff --git a/content/riak/kv/2.1.3/developing/data-types.md b/content/riak/kv/2.1.3/developing/data-types.md
index bde1822f29..a9efbec1af 100644
--- a/content/riak/kv/2.1.3/developing/data-types.md
+++ b/content/riak/kv/2.1.3/developing/data-types.md
@@ -259,7 +259,7 @@ All the examples use the bucket type names from above (`counters`, `sets`, and `
Riak data types can be searched like any other object, but with the
added benefit that your data type is indexed as a different type by Solr,
-the search platform behind Riak Search.
+the search platform behind Riak search.
In our Search documentation we offer a [full tutorial](../usage/searching-data-types) as well as a variety of [examples](../usage/search/#data-types-and-search-examples), including code
samples from each of our official client libraries.
diff --git a/content/riak/kv/2.1.3/developing/usage.md b/content/riak/kv/2.1.3/developing/usage.md
index c28e033340..39d6df31d3 100644
--- a/content/riak/kv/2.1.3/developing/usage.md
+++ b/content/riak/kv/2.1.3/developing/usage.md
@@ -108,7 +108,7 @@ Tutorial on using Riak KV as a document store.
#### [Custom Extractors](./custom-extractors)
-Details on creating and registering custom extractors with Riak Search.
+Details on creating and registering custom extractors with Riak search.
[Learn More >>](./custom-extractors)
diff --git a/content/riak/kv/2.1.3/developing/usage/custom-extractors.md b/content/riak/kv/2.1.3/developing/usage/custom-extractors.md
index 2db0a44924..411e014f0f 100644
--- a/content/riak/kv/2.1.3/developing/usage/custom-extractors.md
+++ b/content/riak/kv/2.1.3/developing/usage/custom-extractors.md
@@ -15,8 +15,8 @@ aliases:
- /riak/kv/2.1.3/dev/search/custom-extractors
---
-Solr, and by extension Riak Search, has default extractors for a wide
-variety of data types, including JSON, XML, and plaintext. Riak Search
+Solr, and by extension Riak search, has default extractors for a wide
+variety of data types, including JSON, XML, and plaintext. Riak search
ships with the following extractors:
Content Type | Erlang Module
@@ -30,7 +30,7 @@ No specified type | `yz_noop_extractor`
There are also built-in extractors for [Riak Data Types](/riak/kv/2.1.3/developing/usage/searching-data-types).
If you're working with a data format that does not have a default Solr
-extractor, you can create your own and register it with Riak Search.
+extractor, you can create your own and register it with Riak search.
We'll show you how to do so by way of example.
## The Extractor Interface
@@ -195,7 +195,7 @@ extractor has been successfully registered.
## Verifying Our Custom Extractor
-Now that Riak Search knows how to decode and extract HTTP header packet
+Now that Riak search knows how to decode and extract HTTP header packet
data, let's store some in Riak and then query it. We'll put the example
packet data from above in a `google_packet.bin` file. Then, we'll `PUT`
that binary to Riak's `/search/extract` endpoint:
diff --git a/content/riak/kv/2.1.3/developing/usage/document-store.md b/content/riak/kv/2.1.3/developing/usage/document-store.md
index 52c32938ce..93b8c88939 100644
--- a/content/riak/kv/2.1.3/developing/usage/document-store.md
+++ b/content/riak/kv/2.1.3/developing/usage/document-store.md
@@ -16,18 +16,18 @@ aliases:
---
Although Riak wasn't explicitly created as a document store, two
-features recently added to Riak---[Riak Search](/riak/kv/2.1.3/developing/usage/search/) and [Riak Data Types](/riak/kv/2.1.3/developing/data-types/)---make it possible to use Riak as a
+features recently added to Riak---[Riak search](/riak/kv/2.1.3/developing/usage/search/) and [Riak Data Types](/riak/kv/2.1.3/developing/data-types/)---make it possible to use Riak as a
highly scalable document store with rich querying capabilities. In this
tutorial, we'll build a basic implementation of a document store using
[Riak maps](/riak/kv/2.1.3/developing/data-types/maps).
## Basic Approach
-Riak Search enables you to implement a document store in Riak in a
+Riak search enables you to implement a document store in Riak in a
variety of ways. You could, for example, store and query JSON objects or
XML and then retrieve them later via Solr queries. In this tutorial,
however, we will store data in [Riak maps](/riak/kv/2.1.3/developing/data-types/maps),
-index that data using Riak Search, and then run Solr queries against
+index that data using Riak search, and then run Solr queries against
those stored objects.
You can think of these Search indexes as **collections**. Each indexed
@@ -65,7 +65,7 @@ Date posted | Register | Datetime
Whether the post is currently in draft form | Flag | Boolean
Before we start actually creating and storing blog posts, let's set up
-Riak Search with an appropriate index and schema.
+Riak search with an appropriate index and schema.
## Creating a Schema and Index
@@ -209,7 +209,7 @@ curl -XPUT $RIAK_HOST/search/index/blog_posts \
Collections are not a concept that is native to Riak but we can easily
mimic collections by thinking of a bucket type as a collection. When we
-associate a bucket type with a Riak Search index, all of the objects
+associate a bucket type with a Riak search index, all of the objects
stored in any bucket of that bucket type will be queryable on the basis
of that one index. For this tutorial, we'll create a bucket type called
`cms` and think of that as a collection. We could also restrict our
diff --git a/content/riak/kv/2.1.3/developing/usage/search-schemas.md b/content/riak/kv/2.1.3/developing/usage/search-schemas.md
index 7df680fb50..b0662d6e9f 100644
--- a/content/riak/kv/2.1.3/developing/usage/search-schemas.md
+++ b/content/riak/kv/2.1.3/developing/usage/search-schemas.md
@@ -19,21 +19,21 @@ aliases:
> **Note on Search 2.0 vs. Legacy Search**
>
-> This document refers to the new Riak Search 2.0 with
+> This document refers to the new Riak search 2.0 with
[Solr](http://lucene.apache.org/solr/) integration (codenamed
-Yokozuna). For information about the deprecated Riak Search, visit [the old Using Riak Search docs](http://docs.basho.com/riak/1.4.10/dev/using/search/).
+Yokozuna). For information about the deprecated Riak search, visit [the old Using Riak search docs](http://docs.basho.com/riak/1.4.10/dev/using/search/).
-Riak Search is built for ease of use, allowing you to write values into
-Riak and query for values using Solr. Riak Search does a lot of work
+Riak search is built for ease of use, allowing you to write values into
+Riak and query for values using Solr. Riak search does a lot of work
under the hood to convert your values---plain text, JSON, XML, [Riak Data Types](/riak/kv/2.1.3/developing/data-types/), and [more](/riak/kv/2.1.3/developing/usage/custom-extractors)---into something that can be indexed and searched later.
Nonetheless, you must still instruct Riak/Solr how to index a value. Are
you providing an array of strings? An integer? A date? Is your text in
-English or Russian? You can provide such instructions to Riak Search by
+English or Russian? You can provide such instructions to Riak search by
defining a Solr **schema**.
## The Default Schema
-Riak Search comes bundled with a default schema named `_yz_default`. The
+Riak search comes bundled with a default schema named `_yz_default`. The
default schema covers a wide range of possible field types. You can find
the default schema [on
GitHub](https://raw.github.com/basho/yokozuna/develop/priv/default_schema.xml).
@@ -48,7 +48,7 @@ amounts of disk space, so pay special attention to those indexes.
We'll show you how you can create custom schemas by way of example.
Let's say that you have already created a schema named `cartoons` in a
file named `cartoons.xml`. This would register the custom schema in Riak
-Search:
+search:
```java
import org.apache.commons.io.FileUtils;
@@ -124,11 +124,11 @@ curl -XPUT http://localhost:8098/search/schema/cartoons \
The first step in creating a custom schema is to define exactly what
fields you must index. Part of that step is understanding how Riak
-Search extractors function.
+search extractors function.
### Extractors
-In Riak Search, extractors are modules responsible for pulling out a
+In Riak search, extractors are modules responsible for pulling out a
list of fields and values from a Riak object. How this is achieved
depends on the object's content type, but the two common cases are JSON
and XML, which operate similarly. Our examples here will use JSON.
@@ -176,21 +176,21 @@ Solr schemas can be very complex, containing many types and analyzers.
Refer to the [Solr 4.7 reference
guide](http://archive.apache.org/dist/lucene/solr/ref-guide/apache-solr-ref-guide-4.7.pdf)
for a complete list. You should be aware, however, that there are a few
-fields that are required by Riak Search in order to properly distribute
+fields that are required by Riak search in order to properly distribute
an object across a [cluster][concept clusters]. These fields are all prefixed
with `_yz`, which stands for
[Yokozuna](https://github.com/basho/yokozuna), the original code name
-for Riak Search.
+for Riak search.
Below is a bare minimum skeleton Solr Schema. It won't do much for you
-other than allow Riak Search to properly manage your stored objects.
+other than allow Riak search to properly manage your stored objects.
```xml
-
+
@@ -211,14 +211,14 @@ other than allow Riak Search to properly manage your stored objects.
```
-If you're missing any of the above fields, Riak Search will reject your
+If you're missing any of the above fields, Riak search will reject your
custom schema. The value for `` _must_ be `_yz_id`.
In the table below, you'll find a description of the various required
fields. You'll rarely need to use any fields other than `_yz_rt` (bucket
type), `_yz_rb` (bucket) and `_yz_rk` (Riak key). On occasion, `_yz_err`
can be helpful if you suspect that your extractors are failing.
-Malformed JSON or XML will cause Riak Search to index a key and set
+Malformed JSON or XML will cause Riak search to index a key and set
`_yz_err` to 1, allowing you to reindex with proper values later.
Field | Name | Description
@@ -263,7 +263,7 @@ field, you also must set `multiValued` to `true`.
-
+
diff --git a/content/riak/kv/2.1.3/developing/usage/search.md b/content/riak/kv/2.1.3/developing/usage/search.md
index 842c3d70cd..4149f5d90d 100644
--- a/content/riak/kv/2.1.3/developing/usage/search.md
+++ b/content/riak/kv/2.1.3/developing/usage/search.md
@@ -19,9 +19,9 @@ aliases:
## Setup
-Riak Search 2.0 is an integration of Solr (for indexing and querying)
+Riak search 2.0 is an integration of Solr (for indexing and querying)
and Riak (for storage and distribution). There are a few points of
-interest that a user of Riak Search will have to keep in mind in order
+interest that a user of Riak search will have to keep in mind in order
to properly store and later query for values.
1. **Schemas** explain to Solr how to index fields
@@ -29,7 +29,7 @@ to properly store and later query for values.
3. **Bucket-index association** signals to Riak *when* to index values
(this also includes bucket type-index association)
-Riak Search must first be configured with a Solr schema so that Solr
+Riak search must first be configured with a Solr schema so that Solr
knows how to index value fields. If you don't define one, you're
provided with a default schema named `_yz_default`, which can be found
[on
@@ -38,7 +38,7 @@ GitHub](https://raw.githubusercontent.com/basho/yokozuna/develop/priv/default_sc
The examples in this document will presume the default. You can read
more about creating custom schemas in [Search Schema][usage search schema], which you'll likely want to use in a production environment.
-Next, you must create a named Solr index through Riak Search. This index
+Next, you must create a named Solr index through Riak search. This index
represents a collection of similar data that you connect with to perform
queries. When creating an index, you can optionally provide a schema. If
you do not, the default schema will be used. Here we'll `curl` create an
@@ -246,7 +246,7 @@ More information can be found in the [Solr
documentation](http://wiki.apache.org/solr/SolrPerformanceFactors).
With a Solr schema, index, and association in place (and possibly a
-security setup as well), we're ready to start using Riak Search. First,
+security setup as well), we're ready to start using Riak search. First,
populate the `cat` bucket with values, in this case information about
four cats: Liono, Cheetara, Snarf, and Panthro.
@@ -495,12 +495,12 @@ curl -XPUT $RIAK_HOST/types/animals/buckets/cats/keys/panthro \
```
If you've used Riak before, you may have noticed that this is no
-different from storing values without Riak Search. That's because we
-designed Riak Search with the following design goal in mind:
+different from storing values without Riak search. That's because we
+designed Riak search with the following design goal in mind:
#### Write it like Riak, query it like Solr
-But how does Riak Search know how to index values, given that you can
+But how does Riak search know how to index values, given that you can
store opaque values in Riak? For that, we employ extractors.
## Extractors
@@ -510,7 +510,7 @@ content type and convert it into a list of fields that can be indexed by
Solr. This is done transparently and automatically as part of the
indexing process. You can even create your own [custom extractors](/riak/kv/2.1.3/developing/usage/custom-extractors).
-Our current example uses the JSON extractor, but Riak Search also
+Our current example uses the JSON extractor, but Riak search also
extracts indexable fields from the following content types:
* JSON (`application/json`)
@@ -560,7 +560,7 @@ one of the default types. A full tutorial can be found in [Custom Search Extract
### Automatic Fields
-When a Riak object is indexed, Riak Search automatically inserts a few
+When a Riak object is indexed, Riak search automatically inserts a few
extra fields as well. These are necessary for a variety of technical
reasons, and for the most part you don't need to think about them.
However, there are a few fields which you may find useful:
@@ -1248,7 +1248,7 @@ curl "$RIAK_HOST/search/query/famous?wt=json&q=*:*&start=$START&rows=$ROWS_PER_P
### Pagination Warning
-Distributed pagination in Riak Search cannot be used reliably when
+Distributed pagination in Riak search cannot be used reliably when
sorting on fields that can have different values per replica of the same
object, namely `score` and `_yz_id`. In the case of sorting by these
fields, you may receive redundant objects. In the case of `score`, the
@@ -1272,14 +1272,14 @@ fix this shortcoming in a future version of Riak.
### MapReduce
-Riak Search allows for piping search results as inputs for
+Riak search allows for piping search results as inputs for
[MapReduce](/riak/kv/2.1.3/developing/usage/mapreduce/) jobs. This is a useful cross-section for
performing post-calculations of results or aggregations of ad-hoc
-queries. The Riak Search MapReduce integration works similarly to
+queries. The Riak search MapReduce integration works similarly to
regular MapReduce, with the notable exception that your input is not a
bucket, but rather index and query arguments to the `yokozuna` module
and `mapred_search` function (an Erlang `module:function` pair that adds
-the Riak Search hook to MapReduce).
+the Riak search hook to MapReduce).
```json
{
diff --git a/content/riak/kv/2.1.3/developing/usage/secondary-indexes.md b/content/riak/kv/2.1.3/developing/usage/secondary-indexes.md
index fac8d2244f..f71140820a 100644
--- a/content/riak/kv/2.1.3/developing/usage/secondary-indexes.md
+++ b/content/riak/kv/2.1.3/developing/usage/secondary-indexes.md
@@ -19,12 +19,12 @@ aliases:
[plan backend memory]: /riak/kv/2.1.3/setup/planning/backend/memory
[use ref strong consistency]: /riak/2.1.3/using/reference/strong-consistency
-> **Note: Riak Search preferred for querying**
+> **Note: Riak search preferred for querying**
>
> If you're interested in non-primary-key-based querying in Riak, i.e. if
you're looking to go beyond straightforward K/V operations, we now
-recommend [Riak Search](/riak/kv/2.1.3/developing/usage/search/) rather than secondary indexes for
-a variety of reasons. Most importantly, Riak Search has a far more
+recommend [Riak search](/riak/kv/2.1.3/developing/usage/search/) rather than secondary indexes for
+a variety of reasons. Most importantly, Riak search has a far more
capacious querying API and can be used with all of Riak's storage
backends.
@@ -37,7 +37,7 @@ Secondary indexes can be either a binary or string, such as
`sensor_1_data` or `admin_user` or `click_event`, or an integer, such as
`99` or `141121`.
-[Riak Search](/riak/kv/2.1.3/developing/usage/search/) serves analogous purposes but is quite
+[Riak search](/riak/kv/2.1.3/developing/usage/search/) serves analogous purposes but is quite
different because it parses key/value data itself and builds indexes on
the basis of Solr schemas.
@@ -75,7 +75,7 @@ you to discover them later. Indexing enables you to tag those objects
and find all objects with the same tag in a specified bucket later on.
2i is thus recommended when your use case requires an easy-to-use search
-mechanism that does not require a schema (as does [Riak Search](/riak/kv/2.1.3/using/reference/search/#schemas)) and a basic query interface, i.e. an interface that
+mechanism that does not require a schema (as does [Riak search](/riak/kv/2.1.3/using/reference/search/#schemas)) and a basic query interface, i.e. an interface that
enables an application to tell Riak things like "fetch all objects
tagged with the string `Milwaukee_Bucks`" or "fetch all objects tagged
with numbers between 1500 and 1509."
@@ -89,7 +89,7 @@ piggybacks off of read-repair.
* If your ring size exceeds 512 partitions, 2i can cause performance
issues in large clusters.
* When you need more than the exact match and range searches that 2i
- supports. If that's the case, we recommend checking out [Riak Search](/riak/kv/2.1.3/developing/usage/search/).
+ supports. If that's the case, we recommend checking out [Riak search](/riak/kv/2.1.3/developing/usage/search/).
* When you want to use composite queries. A query like
`last_name=zezeski AND state=MD` would have to be split into two
queries and the results merged (or it would need to involve
diff --git a/content/riak/kv/2.1.3/introduction.md b/content/riak/kv/2.1.3/introduction.md
index 8e0823dcc7..0aabf407e7 100644
--- a/content/riak/kv/2.1.3/introduction.md
+++ b/content/riak/kv/2.1.3/introduction.md
@@ -30,10 +30,10 @@ that all of the new features listed below are optional:
* **Riak Data Types** --- Riak's new CRDT-based [Data Types](/riak/kv/2.1.3/developing/data-types) can
simplify modeling data in Riak, but are only used in buckets
explicitly configured to use them.
-* **Strong Consistency, Riak Security, and the New Riak Search** ---
+* **Strong Consistency, Riak Security, and the New Riak search** ---
These are subsystems in Riak that must be explicitly turned on to
work. If not turned on, they will have no impact on performance.
- Furthermore, the older Riak Search will continue to be included with
+ Furthermore, the older Riak search will continue to be included with
Riak.
* **Security** --- [Authentication and authorization](/riak/kv/2.1.3/using/security/basics) can be enabled
or disabled at any time.
@@ -98,21 +98,21 @@ Brown](https://github.com/russelldb).
## Riak Search 2.0 (codename: Yokozuna)
-Riak Search 2.0 is a complete, top-to-bottom replacement for Riak
-Search, integrating Riak with [Apache Solr](https://lucene.apache.org/solr/)'s full-text search capabilities and supporting Solr's client query APIs.
+Riak search 2.0 is a complete, top-to-bottom replacement for Riak
+search, integrating Riak with [Apache Solr](https://lucene.apache.org/solr/)'s full-text search capabilities and supporting Solr's client query APIs.
#### Relevant Docs
* [Using Search](/riak/kv/2.1.3/developing/usage/search) provides an overview of how to use the new
- Riak Search.
+ Riak search.
* [Search Schema](/riak/kv/2.1.3/developing/usage/search-schemas) shows you how to create and manage custom search
schemas.
* [Search Details](/riak/kv/2.1.3/using/reference/search) provides an in-depth look at the design
- considerations that went into the new Riak Search.
+ considerations that went into the new Riak search.
#### Video
-[Riak Search 2.0](https://www.youtube.com/watch?v=-c1eynVLNMo) by Basho
+[Riak search 2.0](https://www.youtube.com/watch?v=-c1eynVLNMo) by Basho
engineer and documentarian [Eric Redmond](https://github.com/coderoshi).
## Strong Consistency
@@ -145,7 +145,7 @@ check out [part 2](https://www.youtube.com/watch?v=gXJxbhca5Xg).
Riak 2.0 enables you to manage:
* **Authorization** to perform specific tasks, from GETs and PUTs to
-running MapReduce jobs to administering Riak Search.
+running MapReduce jobs to administering Riak search.
* **Authentication** of Riak clients seeking access to Riak.
@@ -317,7 +317,7 @@ another. Incompatibilities are marked with a
-**†** The data indexed by Riak Search can be
+**†** The data indexed by Riak search can be
stored in a strongly consistent fashion, but indexes themselves are
eventually consistent.
**‡** If secondary indexes are attached to an
diff --git a/content/riak/kv/2.1.3/learn/concepts/strong-consistency.md b/content/riak/kv/2.1.3/learn/concepts/strong-consistency.md
index 932cd4cab2..f0abf64fec 100644
--- a/content/riak/kv/2.1.3/learn/concepts/strong-consistency.md
+++ b/content/riak/kv/2.1.3/learn/concepts/strong-consistency.md
@@ -20,7 +20,7 @@ aliases:
Please Note:
-Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
Riak was originally designed as an [eventually consistent](/riak/kv/2.1.3/learn/concepts/eventual-consistency) system, fundamentally geared toward providing partition
diff --git a/content/riak/kv/2.1.3/learn/glossary.md b/content/riak/kv/2.1.3/learn/glossary.md
index 79bf452837..9de4d01350 100644
--- a/content/riak/kv/2.1.3/learn/glossary.md
+++ b/content/riak/kv/2.1.3/learn/glossary.md
@@ -275,7 +275,7 @@ best described as "UNIX pipes for Riak."
## Riak Search
-Riak Search is a distributed, scalable, failure-tolerant, realtime,
+Riak search is a distributed, scalable, failure-tolerant, realtime,
full-text search engine integrating [Apache
Solr](https://lucene.apache.org/solr/) with Riak KV.
diff --git a/content/riak/kv/2.1.3/learn/use-cases.md b/content/riak/kv/2.1.3/learn/use-cases.md
index fe323a7a07..e19a068e74 100644
--- a/content/riak/kv/2.1.3/learn/use-cases.md
+++ b/content/riak/kv/2.1.3/learn/use-cases.md
@@ -153,7 +153,7 @@ For storing log data from different systems, you could use unique
buckets for each system (e.g. `system1_log_data`, `system2_log_data`,
etc.) and write associated logs to the corresponding buckets. To
analyze that data, you could use Riak's MapReduce system for aggregation
-tasks, such as summing the counts of records for a date or Riak Search
+tasks, such as summing the counts of records for a date or Riak search
for more robust, text-based queries.
### Log Data Complex Case
@@ -237,7 +237,7 @@ For simple retrieval of a specific account, a user ID (plus perhaps a
secondary index on a username or email) is enough. If you foresee the
need to make queries on additional user attributes (e.g. creation time,
user type, or region), plan ahead and either set up additional secondary
-indexes or consider using [Riak Search][usage search] to index the JSON
+indexes or consider using [Riak search][usage search] to index the JSON
contents of the user account.
### User Accounts Community Examples
@@ -323,7 +323,7 @@ In Riak, you can store content of any kind, from HTML files to plain
text to JSON or XML or another document type entirely. Keep in mind that
data in Riak is opaque, with the exception of [Riak Data Types](/riak/kv/2.1.3/developing/data-types),
and so Riak won't "know" about the object unless it is indexed
-[using Riak Search][usage search] or [using secondary indexes][usage secondary-indexes].
+[using Riak search][usage search] or [using secondary indexes][usage secondary-indexes].
### Articles et al Complex Case
@@ -342,7 +342,7 @@ with comments would require your application to call from the posts
and comments buckets to assemble the view.
Other possible cases may involve performing operations on content beyond
-key/value pairs. [Riak Search][usage search] is recommended for use cases
+key/value pairs. [Riak search][usage search] is recommended for use cases
involving full-text search. For lighter-weight querying,
[using secondary indexes][usage secondary-indexes] \(2i) enables you to add metadata to objects to
either query for exact matches or to perform range queries. 2i also
diff --git a/content/riak/kv/2.1.3/setup/installing/source/jvm.md b/content/riak/kv/2.1.3/setup/installing/source/jvm.md
index bf1f78dbbf..af08769e79 100644
--- a/content/riak/kv/2.1.3/setup/installing/source/jvm.md
+++ b/content/riak/kv/2.1.3/setup/installing/source/jvm.md
@@ -21,10 +21,10 @@ aliases:
[usage search]: /riak/kv/2.1.3/developing/usage/search
-If you are using [Riak Search 2.0][usage search], codename Yokozuna,
+If you are using [Riak search 2.0][usage search], codename Yokozuna,
you will need to install **Java 1.6 or later** to run [Apache
Solr](https://lucene.apache.org/solr/), the search platform that powers
-Riak Search.
+Riak search.
We recommend using Oracle's [JDK
7u25](http://www.oracle.com/technetwork/java/javase/7u25-relnotes-1955741.html).
@@ -35,7 +35,7 @@ page](http://www.oracle.com/technetwork/java/javase/documentation/index.html).
## Installing Solr on OS X
-If you're using Riak Search on Mac OS X, you may see the following
+If you're using Riak search on Mac OS X, you may see the following
error:
```java
diff --git a/content/riak/kv/2.1.3/setup/planning/backend/bitcask.md b/content/riak/kv/2.1.3/setup/planning/backend/bitcask.md
index d0eb9617d8..f3312c2bd5 100644
--- a/content/riak/kv/2.1.3/setup/planning/backend/bitcask.md
+++ b/content/riak/kv/2.1.3/setup/planning/backend/bitcask.md
@@ -750,7 +750,7 @@ bitcask.expiry.grace_time = 1h
#### Automatic expiration and Riak Search
-If you are using [Riak Search][usage search] in conjunction with
+If you are using [Riak search][usage search] in conjunction with
Bitcask, please be aware that automatic expiry does not apply to [Search Indexes](../../../../developing/usage/search). If objects are indexed using Search,
those objects can be expired by Bitcask yet still registered in Search
indexes, which means that Search queries may return keys that no longer
diff --git a/content/riak/kv/2.1.3/setup/upgrading/checklist.md b/content/riak/kv/2.1.3/setup/upgrading/checklist.md
index 35fac0b547..4b39d1ab8a 100644
--- a/content/riak/kv/2.1.3/setup/upgrading/checklist.md
+++ b/content/riak/kv/2.1.3/setup/upgrading/checklist.md
@@ -89,7 +89,7 @@ We have compiled these considerations and questions into separate categories for
place if `allow_mult` is set to `true`?
- Have you carefully weighed the [consistency trade-offs][concept eventual consistency] that must be made if `allow_mult` is set to `false`?
- Are all of your [apps replication properties][apps replication properties] configured correctly and uniformly across the cluster?
- - If you are using [Riak Search][usage search], is it enabled on all
+ - If you are using [Riak search][usage search], is it enabled on all
nodes? If you are not, has it been disabled on all nodes?
- If you are using [strong consistency][concept strong consistency] for some or all of your
data:
diff --git a/content/riak/kv/2.1.3/setup/upgrading/search.md b/content/riak/kv/2.1.3/setup/upgrading/search.md
index 036a431b63..6e49233beb 100644
--- a/content/riak/kv/2.1.3/setup/upgrading/search.md
+++ b/content/riak/kv/2.1.3/setup/upgrading/search.md
@@ -16,7 +16,7 @@ version_history:
If you're using Search in a version of Riak prior to 2.0 (1.3.0 to
1.4.x), you should follow these steps to migrate your search indexes
-from the legacy `merge_index` to the new Solr-backed ([Yokozuna](../../../using/reference/search) indexes. The legacy version of Riak Search is now deprecated
+from the legacy `merge_index` to the new Solr-backed ([Yokozuna](../../../using/reference/search)) indexes. The legacy version of Riak search is now deprecated
and does not support most new 2.0 features, i.e. no [Riak Data Types](../../../developing/data-types), [bucket types](../../../using/reference/bucket-types), [strong consistency](../../../using/reference/strong-consistency), or [security](../../../using/security/), so we highly recommend that you migrate.
And please note that the legacy `merge_index`-based search (aka legacy
@@ -32,7 +32,7 @@ these steps at a time when your cluster is relatively light on traffic,
i.e. _not_ the week before Christmas.
The main goal of a live migration is to stand up indexes in the new Riak
-Search that parallel the existing ones in legacy. New writes add entries
+search that parallel the existing ones in legacy. New writes add entries
to both indexes while AAE adds entries in the new indexes for existing
data.
@@ -72,7 +72,7 @@ algorithm is bad at getting rid of large index files.
## Steps to Upgrading
1. First, you'll perform a normal [rolling upgrade](../cluster).
- As you upgrade, enable `yokozuna` (the new Riak Search library) on
+ As you upgrade, enable `yokozuna` (the new Riak search library) on
each node. If you're still using `app.config`, it's called `yokozuna`.
If you've chosen to upgrade to the new `riak.conf` config option, it's
called `search`.
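As a rough sketch (the exact stanza may differ in your shipped config files), the setting looks like this in each format:

```appconfig
%% app.config: enable the new search subsystem (Yokozuna)
{yokozuna, [
    {enabled, true}
]}
```

```riakconf
# riak.conf: the same setting under its new name
search = on
```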
@@ -220,7 +220,7 @@ have occurred on every node.
node you know that AAE has brought all new indexes up to date.
7. Next, call the following command that will give HTTP and PB query
-control to the new Riak Search.
+control to the new Riak search.
```bash
riak-admin search switch-to-new-search
@@ -244,7 +244,7 @@ buckets. This deactivates legacy Search.
-d '{"props":{"search": false}}'
```
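For context, a complete form of that call might be sketched as follows (the host, port, and bucket name `mybucket` are placeholders for your own values):

```curl
curl -XPUT http://localhost:8098/buckets/mybucket/props \
  -H 'Content-Type: application/json' \
  -d '{"props":{"search": false}}'
```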
-9. Disable the Riak Search process on each node by setting `riak_search`
+9. Disable the Riak search process on each node by setting `riak_search`
`enabled` to `false`.
```appconfig
diff --git a/content/riak/kv/2.1.3/setup/upgrading/version.md b/content/riak/kv/2.1.3/setup/upgrading/version.md
index e55b9af0b9..5f70d4e95e 100644
--- a/content/riak/kv/2.1.3/setup/upgrading/version.md
+++ b/content/riak/kv/2.1.3/setup/upgrading/version.md
@@ -37,7 +37,7 @@ was built with those features in mind. There are official
While we strongly recommend using the newest versions of these clients,
older versions will still work with Riak 2.0, with the drawback that
-those older clients will not able to take advantage of [new features](/riak/kv/2.1.3/introduction) like [data types](/riak/kv/2.1.3/developing/data-types) or the new [Riak Search](/riak/kv/2.1.3/using/reference/search).
+those older clients will not be able to take advantage of [new features](/riak/kv/2.1.3/introduction) like [data types](/riak/kv/2.1.3/developing/data-types) or the new [Riak search](/riak/kv/2.1.3/using/reference/search).
## Bucket Types
@@ -141,7 +141,7 @@ If you decide to upgrade to version 2.0, you can still downgrade your
cluster to an earlier version of Riak if you wish, _unless_ you perform
one of the following actions in your cluster:
-* Index data to be used in conjunction with the new [Riak Search](/riak/kv/2.1.3/using/reference/search).
+* Index data to be used in conjunction with the new [Riak search](/riak/kv/2.1.3/using/reference/search).
* Create _and_ activate one or more [bucket types](/riak/kv/2.1.3/using/reference/bucket-types/). By extension, you will not be able to downgrade your cluster if you have used the following features, both of which rely on bucket types:
- [Strong consistency](/riak/2.1.3/using/reference/strong-consistency)
- [Riak Data Types](/riak/kv/2.1.3/developing/data-types)
@@ -209,7 +209,7 @@ default to a value of `15`, which can cause problems in some clusters.
## Upgrading Search
-Information on upgrading Riak Search to 2.0 can be found in our
+Information on upgrading Riak search to 2.0 can be found in our
[Search upgrade guide](/riak/kv/2.1.3/setup/upgrading/search).
## Migrating from Short Names
diff --git a/content/riak/kv/2.1.3/using/admin/riak-admin.md b/content/riak/kv/2.1.3/using/admin/riak-admin.md
index 6bd1c26a0d..299503e0cb 100644
--- a/content/riak/kv/2.1.3/using/admin/riak-admin.md
+++ b/content/riak/kv/2.1.3/using/admin/riak-admin.md
@@ -590,7 +590,7 @@ riak-admin repair-2i kill
## search
The search command provides sub-commands for various administrative
-work related to the new Riak Search.
+work related to the new Riak search.
```bash
riak-admin search
@@ -639,7 +639,7 @@ riak-admin search switch-to-new-search
```
Switch handling of the HTTP `/solr/<index>/select` resource and
-protocol buffer query messages from legacy Riak Search to new Search
+protocol buffer query messages from legacy Riak search to new Search
(Yokozuna).
## services
diff --git a/content/riak/kv/2.1.3/using/cluster-operations/active-anti-entropy.md b/content/riak/kv/2.1.3/using/cluster-operations/active-anti-entropy.md
index c262cc7164..62c80485fb 100644
--- a/content/riak/kv/2.1.3/using/cluster-operations/active-anti-entropy.md
+++ b/content/riak/kv/2.1.3/using/cluster-operations/active-anti-entropy.md
@@ -262,9 +262,9 @@ AAE-related background tasks, analogous to [open files limit](../../performance/
## AAE and Riak Search
Riak's AAE subsystem works to repair object inconsistencies both
-for normal key/value objects as well as data related to [Riak Search](../../../developing/usage/search). In particular, AAE acts on indexes stored in
+for normal key/value objects and for data related to [Riak search](../../../developing/usage/search). In particular, AAE acts on indexes stored in
[Solr](http://lucene.apache.org/solr/), the search platform that drives
-Riak Search. Implementation details for AAE and Search can be found in
+Riak search. Implementation details for AAE and Search can be found in
the [Search Details](../../reference/search/#active-anti-entropy-aae)
documentation.
diff --git a/content/riak/kv/2.1.3/using/cluster-operations/inspecting-node.md b/content/riak/kv/2.1.3/using/cluster-operations/inspecting-node.md
index ab99a21e04..b13b788056 100644
--- a/content/riak/kv/2.1.3/using/cluster-operations/inspecting-node.md
+++ b/content/riak/kv/2.1.3/using/cluster-operations/inspecting-node.md
@@ -297,7 +297,7 @@ Stat | Description
`erlydtl_version` | [ErlyDTL](http://github.com/erlydtl/erlydtl)
`riak_control_version` | [Riak Control](http://github.com/basho/riak_control)
`cluster_info_version` | [Cluster Information](http://github.com/basho/cluster_info)
-`riak_search_version` | [Riak Search](http://github.com/basho/riak_search)
+`riak_search_version` | [Riak search](http://github.com/basho/riak_search)
`merge_index_version` | [Merge Index](http://github.com/basho/merge_index)
`riak_kv_version` | [Riak KV](http://github.com/basho/riak_kv)
`sidejob_version` | [Sidejob](http://github.com/basho/sidejob)
@@ -326,17 +326,17 @@ Stat | Description
### Riak Search Statistics
-The following statistics related to Riak Search message queues are
+The following statistics related to Riak search message queues are
available.
Stat | Description
-----------------------------|---------------------------------------------------
-`riak_search_vnodeq_max` | Maximum number of unprocessed messages all virtual node (vnode) message queues in the Riak Search subsystem have received on this node in the last minute
-`riak_search_vnodeq_mean` | Mean number of unprocessed messages all vnode message queues in the Riak Search subsystem have received on this node in the last minute
-`riak_search_vnodeq_median` | Median number of unprocessed messages all vnode message queues in the Riak Search subsystem have received on this node in the last minute
-`riak_search_vnodeq_min` | Minimum number of unprocessed messages all vnode message queues in the Riak Search subsystem have received on this node in the last minute
-`riak_search_vnodeq_total` | Total number of unprocessed messages all vnode message queues in the Riak Search subsystem have received on this node since it was started
-`riak_search_vnodes_running` | Total number of vnodes currently running in the Riak Search subsystem
+`riak_search_vnodeq_max` | Maximum number of unprocessed messages all virtual node (vnode) message queues in the Riak search subsystem have received on this node in the last minute
+`riak_search_vnodeq_mean` | Mean number of unprocessed messages all vnode message queues in the Riak search subsystem have received on this node in the last minute
+`riak_search_vnodeq_median` | Median number of unprocessed messages all vnode message queues in the Riak search subsystem have received on this node in the last minute
+`riak_search_vnodeq_min` | Minimum number of unprocessed messages all vnode message queues in the Riak search subsystem have received on this node in the last minute
+`riak_search_vnodeq_total` | Total number of unprocessed messages all vnode message queues in the Riak search subsystem have received on this node since it was started
+`riak_search_vnodes_running` | Total number of vnodes currently running in the Riak search subsystem
Note that under ideal operation and with the exception of
`riak_search_vnodes_running` these statistics should contain low values
@@ -449,7 +449,7 @@ Check | Description
`ring_membership` | Cluster membership validity
`ring_preflists` | Check if the ring satisfies `n_val`
`ring_size` | Check if the ring size is valid
-`search` | Check whether Riak Search is enabled on all nodes
+`search` | Check whether Riak search is enabled on all nodes
The `--level` flag enables you to specify the log level and thus to
filter messages based on type. You can pass in any of the message types
diff --git a/content/riak/kv/2.1.3/using/cluster-operations/strong-consistency.md b/content/riak/kv/2.1.3/using/cluster-operations/strong-consistency.md
index f3533afbff..1542d5c9f7 100644
--- a/content/riak/kv/2.1.3/using/cluster-operations/strong-consistency.md
+++ b/content/riak/kv/2.1.3/using/cluster-operations/strong-consistency.md
@@ -14,7 +14,7 @@ toc: true
Please Note:
-Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
## Monitoring Strong Consistency
diff --git a/content/riak/kv/2.1.3/using/reference/handoff.md b/content/riak/kv/2.1.3/using/reference/handoff.md
index 5d2aa0f04d..afb6608eeb 100644
--- a/content/riak/kv/2.1.3/using/reference/handoff.md
+++ b/content/riak/kv/2.1.3/using/reference/handoff.md
@@ -121,7 +121,7 @@ handoff.use_background_manager = on
### Maximum Rejects
-If you're using Riak features such as [Riak Search](/riak/kv/2.1.3/developing/usage/search/),
+If you're using Riak features such as [Riak search](/riak/kv/2.1.3/developing/usage/search/),
those subsystems can block handoff of primary key/value data, i.e. data
that you interact with via normal reads and writes.
diff --git a/content/riak/kv/2.1.3/using/reference/search.md b/content/riak/kv/2.1.3/using/reference/search.md
index 9c67db4b59..5ef21b03f0 100644
--- a/content/riak/kv/2.1.3/using/reference/search.md
+++ b/content/riak/kv/2.1.3/using/reference/search.md
@@ -19,14 +19,14 @@ aliases:
> **Note on Search 2.0 vs. Legacy Search**
>
-> This document refers to the new Riak Search 2.0 with
+> This document refers to the new Riak search 2.0 with
[Solr](http://lucene.apache.org/solr/) integration (codenamed
-Yokozuna). For information about the deprecated Riak Search, visit [the old Using Riak Search docs](http://docs.basho.com/riak/1.4.10/dev/using/search/).
+Yokozuna). For information about the deprecated Riak search, visit [the old Using Riak search docs](http://docs.basho.com/riak/1.4.10/dev/using/search/).
-The project that implements Riak Search is codenamed Yokozuna. This is a
+The project that implements Riak search is codenamed Yokozuna. This is a
more detailed overview of the concepts and reasons behind the design of
Yokozuna, for those interested. If you're simply looking to use Riak
-Search, you should check out the [Using Search](/riak/kv/2.1.3/developing/usage/search) document.
+search, you should check out the [Using Search](/riak/kv/2.1.3/developing/usage/search) document.
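For a taste of that usage, a minimal session against the HTTP interface might be sketched as follows (the index name `famous` and the local endpoint are illustrative):

```curl
# Create a Solr-backed search index
curl -XPUT http://localhost:8098/search/index/famous

# Query it through the Solr-compatible endpoint
curl "http://localhost:8098/search/query/famous?wt=json&q=*:*"
```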

@@ -35,30 +35,30 @@ Search, you should check out the [Using Search](/riak/kv/2.1.3/developing/usage/
In Erlang OTP, an "application" is a group of modules and Erlang
processes which together perform a specific task. The word application
is confusing because most people think of an application as an entire
-program such as Emacs or Photoshop. But Riak Search is just a sub-system
+program such as Emacs or Photoshop. But Riak search is just a subsystem
in Riak itself. Erlang applications are often stand-alone, but Riak
-Search is more like an appendage of Riak. It requires other subsystems
+search is more like an appendage of Riak. It requires other subsystems
like Riak Core and KV, but also extends their functionality by providing
search capabilities for KV data.
-The purpose of Riak Search is to bring more sophisticated and robust
+The purpose of Riak search is to bring more sophisticated and robust
query and search support to Riak. Many people consider Lucene and
programs built on top of it, such as Solr, as the standard for
open-source search. There are many successful applications built on
Lucene/Solr, and it sets the standard for the feature set that
developers and users expect. Meanwhile, Riak has a great story as a
-highly-available, distributed key/value store. Riak Search takes
+highly-available, distributed key/value store. Riak search takes
advantage of the fact that Riak already knows how to do the distributed
bits, combining its feature set with that of Solr, taking advantage of
the strengths of each.
-Riak Search is a mediator between Riak and Solr. There is nothing
+Riak search is a mediator between Riak and Solr. There is nothing
stopping a user from deploying these two programs separately, but this
would leave the user responsible for the glue between them. That glue
can be tricky to write. It requires dealing with monitoring, querying,
indexing, and dissemination of information.
-Unlike Solr by itself, Riak Search knows how to do all of the following:
+Unlike Solr by itself, Riak search knows how to do all of the following:
* Listen for changes in key/value (KV) data and to make the appropriate
changes to indexes that live in Solr. It also knows how to take a user
@@ -75,13 +75,13 @@ system (OS) process running a JVM which hosts Solr on the Jetty
application server. This OS process is a child of the Erlang OS process
running Riak.
-Riak Search has a `gen_server` process which monitors the JVM OS
+Riak search has a `gen_server` process which monitors the JVM OS
process. The code for this server is in `yz_solr_proc`. When the JVM
process crashes, this server crashes, causing its supervisor to restart
it.
If there is more than 1 restart in 45 seconds, the entire Riak node will
-be shut down. If Riak Search is enabled and Solr cannot function for
+be shut down. If Riak search is enabled and Solr cannot function for
some reason, the Riak node needs to go down so that the user will notice
and take corrective action.
@@ -91,7 +91,7 @@ This double monitoring along with the crash semantics means that neither
process may exist without the other. They are either both up or both
down.
-All other communication between Riak Search and Solr is performed via
+All other communication between Riak search and Solr is performed via
HTTP, including querying, indexing, and administration commands. The
ibrowse Erlang HTTP client is used to manage these communications as
both it and the Jetty container hosting Solr pool HTTP connections,
@@ -105,11 +105,11 @@ contains index entries for objects. Each such index maintains its own
set of files on disk---a critical difference from Riak KV, in which a
bucket is a purely logical entity and not physically disjoint at all. A
Solr index requires significantly less disk space than the corresponding
-legacy Riak Search index, depending on the Solr schema used.
+legacy Riak search index, depending on the Solr schema used.
Indexes may be associated with zero or more buckets. At creation time,
however, each index has no associated buckets---unlike the legacy Riak
-Search, indexes in the new Riak Search do not implicitly create bucket
+search, indexes in the new Riak search do not implicitly create bucket
associations, meaning that this must be done as a separate configuration
step.
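That separate step is done by setting the bucket's `search_index` property. Sketched over HTTP (bucket name `mybucket` and index name `famous` are placeholders):

```curl
curl -XPUT http://localhost:8098/buckets/mybucket/props \
  -H 'Content-Type: application/json' \
  -d '{"props":{"search_index":"famous"}}'
```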
@@ -141,7 +141,7 @@ flat collection of field-value pairs. "Flat" here means that a field's
value cannot be a nested structure of field-value pairs; the values are
treated as-is (non-composite is another way to say it).
-Because of this mismatch between KV and Solr, Riak Search must act as a
+Because of this mismatch between KV and Solr, Riak search must act as a
mediator between the two, meaning it must have a way to inspect a KV
object and create a structure which Solr can ingest for indexing. In
Solr this structure is called a **document**. This task of creating a
diff --git a/content/riak/kv/2.1.3/using/reference/secondary-indexes.md b/content/riak/kv/2.1.3/using/reference/secondary-indexes.md
index 1bbd238051..c0658e64e7 100644
--- a/content/riak/kv/2.1.3/using/reference/secondary-indexes.md
+++ b/content/riak/kv/2.1.3/using/reference/secondary-indexes.md
@@ -18,11 +18,11 @@ aliases:
[usage bucket types]: /riak/kv/2.1.3/developing/usage/bucket-types
[use ref strong consistency]: /riak/2.1.3/using/reference/strong-consistency
-> **Note: Riak Search preferred for querying**
+> **Note: Riak search preferred for querying**
>
> If you're interested in non-primary-key-based querying in Riak, i.e. if
you're looking to go beyond straightforward K/V operations, we now
-recommend [Riak Search](/riak/kv/2.1.3/developing/usage/search/) rather than secondary indexes for a variety of reasons. Riak Search has a far more capacious querying API and can be used with all of Riak's storage backends.
+recommend [Riak search](/riak/kv/2.1.3/developing/usage/search/) rather than secondary indexes for a variety of reasons. Riak search has a far more capacious querying API and can be used with all of Riak's storage backends.
This document provides implementation and other details for Riak's
[secondary indexes](/riak/kv/2.1.3/developing/usage/secondary-indexes/) \(2i) feature.
diff --git a/content/riak/kv/2.1.3/using/reference/statistics-monitoring.md b/content/riak/kv/2.1.3/using/reference/statistics-monitoring.md
index d7f52bf607..73adcb7fe7 100644
--- a/content/riak/kv/2.1.3/using/reference/statistics-monitoring.md
+++ b/content/riak/kv/2.1.3/using/reference/statistics-monitoring.md
@@ -134,7 +134,7 @@ Metric | Also | Notes
:------|:-----|:------------------
```node_get_fsm_siblings_mean``` | ```_median```, ```_95```, ```_99```, ```_100``` | Number of siblings encountered during all GET operations by this node within the last minute. Watch for abnormally high sibling counts, especially max ones.
```node_get_fsm_objsize_mean``` | ```_median```, ```_95```, ```_99```, ```_100``` | Object size encountered by this node within the last minute. Abnormally large objects (especially paired with high sibling counts) can indicate sibling explosion.
-```riak_search_vnodeq_mean``` | ```_median```, ```_95```, ```_99```, ```_100``` | Number of unprocessed messages in the vnode message queues of the Riak Search subsystem on this node in the last minute. The queues give you an idea of how backed up Solr is getting.
+```riak_search_vnodeq_mean``` | ```_median```, ```_95```, ```_99```, ```_100``` | Number of unprocessed messages in the vnode message queues of the Riak search subsystem on this node in the last minute. The queues give you an idea of how backed up Solr is getting.
```search_index_fail_one``` | | Number of "Failed to index document" errors Search encountered for the last minute
```pbc_active``` | | Number of currently active protocol buffer connections
```pbc_connects``` | | Number of new protocol buffer connections established during the last minute
diff --git a/content/riak/kv/2.1.3/using/reference/strong-consistency.md b/content/riak/kv/2.1.3/using/reference/strong-consistency.md
index 069dc335a6..6ac1ec5f79 100644
--- a/content/riak/kv/2.1.3/using/reference/strong-consistency.md
+++ b/content/riak/kv/2.1.3/using/reference/strong-consistency.md
@@ -17,7 +17,7 @@ toc: true
Please Note:
-Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
Riak was originally designed as an [eventually consistent](/riak/kv/2.1.3/learn/concepts/eventual-consistency) system, fundamentally geared toward providing partition
diff --git a/content/riak/kv/2.1.3/using/repair-recovery/errors.md b/content/riak/kv/2.1.3/using/repair-recovery/errors.md
index 865181ab2f..baa85978ad 100644
--- a/content/riak/kv/2.1.3/using/repair-recovery/errors.md
+++ b/content/riak/kv/2.1.3/using/repair-recovery/errors.md
@@ -328,7 +328,7 @@ exit with reason bad return value: {error,eaddrinuse} in context start_error | A
exited with reason: eaddrnotavail in gen_server:init_it/6 line 320 | An error like this example can result when Riak cannot bind to the addresses specified in the configuration. In this case, you should verify HTTP and Protocol Buffers addresses in `app.config` and ensure that the ports being used are not in the privileged (1-1024) range as the `riak` user will not have access to such ports.
gen_server riak_core_capability terminated with reason: no function clause matching orddict:fetch('riak@192.168.2.2', []) line 72 | Error output like this example can indicate that a previously running Riak node with an original `-name` value in `vm.args` has been modified by simply changing the value in `vm.args` and not properly through `riak-admin cluster replace`.
** Configuration error: [FRAMEWORK-MIB]: missing context.conf file => generating a default file | This error is commonly encountered when starting Riak Enterprise without prior [SNMP](/riak/kv/2.1.3/using/reference/snmp) configuration.
-RPC to 'node@example.com' failed: {'EXIT', {badarg, [{ets,lookup, [schema_table,<<"search-example">>], []} {riak_search_config,get_schema,1, [{file,"src/riak_search_config.erl"}, {line,69}]} ...| This error can be caused when attempting to use Riak Search without first enabling it in each node's `app.config`. See the [configuration files][config reference] documentation for more information on enabling Riak Search.
+RPC to 'node@example.com' failed: {'EXIT', {badarg, [{ets,lookup, [schema_table,<<"search-example">>], []} {riak_search_config,get_schema,1, [{file,"src/riak_search_config.erl"}, {line,69}]} ...| This error can be caused when attempting to use Riak search without first enabling it in each node's `app.config`. See the [configuration files][config reference] documentation for more information on enabling Riak search.
### More
diff --git a/content/riak/kv/2.1.3/using/repair-recovery/repairs.md b/content/riak/kv/2.1.3/using/repair-recovery/repairs.md
index 762031baa7..aea63265a6 100644
--- a/content/riak/kv/2.1.3/using/repair-recovery/repairs.md
+++ b/content/riak/kv/2.1.3/using/repair-recovery/repairs.md
@@ -57,7 +57,7 @@ riak-admin repair-2i kill
## Repairing Search Indexes
-Riak Search indexes currently have no form of anti-entropy (such as read-repair). Furthermore, for performance and load balancing reasons, Search reads from one random node. This means that when a replica loss has occurred, inconsistent results may be returned.
+Riak search indexes currently have no form of anti-entropy (such as read-repair). Furthermore, for performance and load balancing reasons, Search reads from one random node. This means that when a replica loss has occurred, inconsistent results may be returned.
### Running a Repair
diff --git a/content/riak/kv/2.1.3/using/repair-recovery/secondary-indexes.md b/content/riak/kv/2.1.3/using/repair-recovery/secondary-indexes.md
index 6f650199ae..16830b70d0 100644
--- a/content/riak/kv/2.1.3/using/repair-recovery/secondary-indexes.md
+++ b/content/riak/kv/2.1.3/using/repair-recovery/secondary-indexes.md
@@ -51,7 +51,7 @@ riak-admin repair-2i kill
## Repairing Search Indexes
-Riak Search indexes currently have no form of anti-entropy (such as read-repair). Furthermore, for performance and load balancing reasons, Search reads from one random node. This means that when a replica loss has occurred, inconsistent results may be returned.
+Riak search indexes currently have no form of anti-entropy (such as read-repair). Furthermore, for performance and load balancing reasons, Search reads from one random node. This means that when a replica loss has occurred, inconsistent results may be returned.
### Running a Repair
diff --git a/content/riak/kv/2.1.3/using/security/basics.md b/content/riak/kv/2.1.3/using/security/basics.md
index 3b596811bd..42abf489f5 100644
--- a/content/riak/kv/2.1.3/using/security/basics.md
+++ b/content/riak/kv/2.1.3/using/security/basics.md
@@ -44,7 +44,7 @@ when turning on Riak security. Missing one of these steps will almost
certainly break your application, so make sure that you have done each
of the following **before** enabling security:
-1. Make certain that the original Riak Search (version 1) and link
+1. Make certain that the original Riak search (version 1) and link
walking are not required. Enabling security will break this
functionality. If you wish to use security and Search together, you
will need to use the [new Search feature](/riak/kv/2.1.3/developing/usage/search/).
@@ -487,11 +487,11 @@ Permission | Operation
### Search Query Permission (Riak Search version 1)
Security is incompatible with the original (and now deprecated) Riak
-Search. Riak Search version 1 will stop working if security is enabled.
+search. Riak search version 1 will stop working if security is enabled.
### Search Query Permissions (Riak Search version 2, aka Yokozuna)
-If you are using the new Riak Search, i.e. the Solr-compatible search
+If you are using the new Riak search, i.e. the Solr-compatible search
capabilities included with Riak versions 2.0 and greater, the following
search-related permissions can be granted/revoked:
@@ -508,8 +508,8 @@ disabled, you will get the following error:
>
> `{error,{unknown_permission,"search.query"}}`
>
-> More information on Riak Search and how to enable it can be found in the
-[Riak Search Settings](/riak/kv/2.1.3/configuring/search/) document.
+> More information on Riak search and how to enable it can be found in the
+[Riak search Settings](/riak/kv/2.1.3/configuring/search/) document.
#### Usage Examples
diff --git a/content/riak/kv/2.1.4/configuring/reference.md b/content/riak/kv/2.1.4/configuring/reference.md
index e1a7e52836..172216d141 100644
--- a/content/riak/kv/2.1.4/configuring/reference.md
+++ b/content/riak/kv/2.1.4/configuring/reference.md
@@ -273,7 +273,7 @@ stored
search.root_dir |
-The root directory for Riak Search, under which index data and
+The root directory for Riak search, under which index data and
configuration is stored. |
./data/yz |
@@ -1386,7 +1386,7 @@ Configurable parameters for intra-cluster, i.e. inter-node, [handoff][cluster op
handoff.max_rejects |
The maximum number of times that a secondary system within Riak,
-such as Riak Search, can block handoff
+such as Riak search, can block handoff
of primary key/value data. The approximate maximum duration that a vnode
can be blocked can be determined by multiplying this setting by
vnode_management_timer . If you want to prevent handoff from
@@ -1630,7 +1630,7 @@ if the JMX server crashes. |
> **Please Note:**
>
-> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
Riak's strong consistency feature has a variety of tunable parameters
that allow you to enable and disable strong consistency, modify the
@@ -2053,7 +2053,7 @@ only in Riak Enterprise 2.0 and later.
#### Upgrading Riak Search with `advanced.config`
-If you are upgrading to Riak 2.x and wish to upgrade to the new [Riak Search][use ref search]\(codename Yokozuna), you will need to enable
+If you are upgrading to Riak 2.x and wish to upgrade to the new [Riak search][use ref search]\(codename Yokozuna), you will need to enable
legacy Search while the upgrade is underway. You can add the following
snippet to your `advanced.config` configuration to do so:
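For reference, the legacy-Search snippet referred to above typically takes this general form (a sketch only — the `riak_search` application key is assumed here and should be verified against the configuration reference for your Riak version):

```erlang
%% advanced.config fragment (sketch): enable the legacy (version 1)
%% Search subsystem while the upgrade to the new Search is underway.
{riak_search, [
    {enabled, true}
]}
```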
diff --git a/content/riak/kv/2.1.4/configuring/search.md b/content/riak/kv/2.1.4/configuring/search.md
index 56e9501113..733e0dfdd6 100644
--- a/content/riak/kv/2.1.4/configuring/search.md
+++ b/content/riak/kv/2.1.4/configuring/search.md
@@ -26,9 +26,9 @@ aliases:
> **Note on Search 2.0 vs. Legacy Search**
>
-> This document refers to the new Riak Search 2.0 with
+> This document refers to the new Riak search 2.0 with
[Solr](http://lucene.apache.org/solr/) integration (codenamed
-Yokozuna). For information about the deprecated Riak Search, visit [the old Riak Search Settings](http://docs.basho.com/riak/1.4.8/ops/advanced/configs/search/).
+Yokozuna). For information about the deprecated Riak search, visit [the old Riak search Settings](http://docs.basho.com/riak/1.4.8/ops/advanced/configs/search/).
This document covers Riak's Search subsystem from an
operational perspective. If you are looking for more developer-focused
@@ -41,7 +41,7 @@ docs, we recommend the following:
## Enabling Riak Search
-Although Riak Search is integrated into Riak and requires no special
+Although Riak search is integrated into Riak and requires no special
installation, it is not enabled by default. You must enable it in every
node's [configuration files][config reference] as follows:
@@ -68,7 +68,7 @@ optional. A list of these parameters can also be found in our
Field | Default | Valid values | Description
:-----|:--------|:-------------|:-----------
`search` | `off` | `on` or `off` | Enable or disable Search
-`search.anti_entropy.data_dir` | `./data/yz_anti_entropy` | Directory | The directory in which Riak Search stores files related to [active anti-entropy][glossary aae]
+`search.anti_entropy.data_dir` | `./data/yz_anti_entropy` | Directory | The directory in which Riak search stores files related to [active anti-entropy][glossary aae]
`search.root_dir` | `./data/yz` | Directory | The root directory in which index data and configuration is stored
`search.solr.start_timeout` | `30s` | Integer with time units (e.g. 2m) | How long Riak will wait for Solr to start (attempts twice before shutdown). Values lower than 1s will be rounded up to 1s.
`search.solr.port` | `8093` | Integer | The port number to which Solr binds (note: binds on every interface)
@@ -81,7 +81,7 @@ cause Solr to require more time to start.
## Solr JVM and Ports
-Riak Search runs one Solr process per node to manage its indexing and
+Riak search runs one Solr process per node to manage its indexing and
search functionality. While the underlying project, Yokozuna, manages
index distribution, node coverage for queries, active anti-entropy
(AAE), and JVM process management, you should provide plenty of RAM and disk space for running both Riak and the JVM running Solr. We recommend a minimum of 6GB of RAM per node.
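Taken together, the settings from the table earlier in this page would appear in `riak.conf` as follows (the values shown are the documented defaults):

```riakconf
search = on
search.anti_entropy.data_dir = ./data/yz_anti_entropy
search.root_dir = ./data/yz
search.solr.start_timeout = 30s
search.solr.port = 8093
```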
diff --git a/content/riak/kv/2.1.4/configuring/strong-consistency.md b/content/riak/kv/2.1.4/configuring/strong-consistency.md
index aee7d66d70..eb5f507e08 100644
--- a/content/riak/kv/2.1.4/configuring/strong-consistency.md
+++ b/content/riak/kv/2.1.4/configuring/strong-consistency.md
@@ -41,7 +41,7 @@ aliases:
> **Please Note:**
>
-> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
This document provides information on configuring and monitoring a Riak
cluster's optional strong consistency subsystem. Documentation for
diff --git a/content/riak/kv/2.1.4/developing/api/http/delete-search-index.md b/content/riak/kv/2.1.4/developing/api/http/delete-search-index.md
index 3816e01b01..fe902fa7ef 100644
--- a/content/riak/kv/2.1.4/developing/api/http/delete-search-index.md
+++ b/content/riak/kv/2.1.4/developing/api/http/delete-search-index.md
@@ -15,7 +15,7 @@ aliases:
- /riak/kv/2.1.4/dev/references/http/delete-search-index
---
-Deletes a Riak Search index.
+Deletes a Riak search index.
## Request
diff --git a/content/riak/kv/2.1.4/developing/api/http/fetch-search-index.md b/content/riak/kv/2.1.4/developing/api/http/fetch-search-index.md
index 3535292070..02ede5da7a 100644
--- a/content/riak/kv/2.1.4/developing/api/http/fetch-search-index.md
+++ b/content/riak/kv/2.1.4/developing/api/http/fetch-search-index.md
@@ -15,7 +15,7 @@ aliases:
- /riak/kv/2.1.4/dev/references/http/fetch-search-index
---
-Retrieves information about a Riak Search [index](/riak/kv/2.1.4/developing/usage/search/#simple-setup).
+Retrieves information about a Riak search [index](/riak/kv/2.1.4/developing/usage/search/#simple-setup).
## Request
diff --git a/content/riak/kv/2.1.4/developing/api/http/fetch-search-schema.md b/content/riak/kv/2.1.4/developing/api/http/fetch-search-schema.md
index e73c81d86c..c499149818 100644
--- a/content/riak/kv/2.1.4/developing/api/http/fetch-search-schema.md
+++ b/content/riak/kv/2.1.4/developing/api/http/fetch-search-schema.md
@@ -35,4 +35,4 @@ GET /search/schema/
## Response
If the schema is found, Riak will return the contents of the schema as
-XML (all Riak Search schemas are XML).
+XML (all Riak search schemas are XML).
diff --git a/content/riak/kv/2.1.4/developing/api/http/search-index-info.md b/content/riak/kv/2.1.4/developing/api/http/search-index-info.md
index 68899d541a..71580f4361 100644
--- a/content/riak/kv/2.1.4/developing/api/http/search-index-info.md
+++ b/content/riak/kv/2.1.4/developing/api/http/search-index-info.md
@@ -47,6 +47,6 @@ Below is the example output if there is one Search index, called
#### Typical Error Codes
-* `404 Object Not Found` --- Typically returned if Riak Search is not
+* `404 Object Not Found` --- Typically returned if Riak search is not
currently enabled on the node
* `503 Service Unavailable` --- The request timed out internally
diff --git a/content/riak/kv/2.1.4/developing/api/http/store-search-index.md b/content/riak/kv/2.1.4/developing/api/http/store-search-index.md
index 9ee1933341..82dbcb724f 100644
--- a/content/riak/kv/2.1.4/developing/api/http/store-search-index.md
+++ b/content/riak/kv/2.1.4/developing/api/http/store-search-index.md
@@ -15,7 +15,7 @@ aliases:
- /riak/kv/2.1.4/dev/references/http/store-search-index
---
-Creates a new Riak Search [index](/riak/kv/2.1.4/developing/usage/search/#simple-setup).
+Creates a new Riak search [index](/riak/kv/2.1.4/developing/usage/search/#simple-setup).
## Request
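As with the other index endpoints shown in these pages, the create-index request targets the index name in the URL path; a sketch of its general shape (the body and header are assumptions to be checked against the full API page):

```
PUT /search/index/<index_name>
Content-Type: application/json

{"schema": "<schema_name>"}
```

The JSON body is optional; when omitted, the default `_yz_default` schema is used.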
diff --git a/content/riak/kv/2.1.4/developing/api/http/store-search-schema.md b/content/riak/kv/2.1.4/developing/api/http/store-search-schema.md
index 3adbae063b..88caba661f 100644
--- a/content/riak/kv/2.1.4/developing/api/http/store-search-schema.md
+++ b/content/riak/kv/2.1.4/developing/api/http/store-search-schema.md
@@ -44,7 +44,7 @@ curl -XPUT http://localhost:8098/search/schema/my_custom_schema \
* `400 Bad Request` --- The schema cannot be created because there is
something wrong with the schema itself, e.g. an XML formatting error
- that makes Riak Search unable to parse the schema
+ that makes Riak search unable to parse the schema
* `409 Conflict` --- The schema cannot be created because there is
already a schema with that name
* `503 Service Unavailable` --- The request timed out internally
diff --git a/content/riak/kv/2.1.4/developing/api/protocol-buffers/yz-index-get.md b/content/riak/kv/2.1.4/developing/api/protocol-buffers/yz-index-get.md
index 3b8b3a92c1..41cfdf70ee 100644
--- a/content/riak/kv/2.1.4/developing/api/protocol-buffers/yz-index-get.md
+++ b/content/riak/kv/2.1.4/developing/api/protocol-buffers/yz-index-get.md
@@ -15,7 +15,7 @@ aliases:
- /riak/kv/2.1.4/dev/references/protocol-buffers/yz-index-get
---
-Retrieve a search index from Riak Search.
+Retrieve a search index from Riak search.
## Request
diff --git a/content/riak/kv/2.1.4/developing/api/protocol-buffers/yz-schema-get.md b/content/riak/kv/2.1.4/developing/api/protocol-buffers/yz-schema-get.md
index 3dd8bf545d..70940ea5c6 100644
--- a/content/riak/kv/2.1.4/developing/api/protocol-buffers/yz-schema-get.md
+++ b/content/riak/kv/2.1.4/developing/api/protocol-buffers/yz-schema-get.md
@@ -15,7 +15,7 @@ aliases:
- /riak/kv/2.1.4/dev/references/protocol-buffers/yz-schema-get
---
-Fetch a [search schema](/riak/kv/2.1.4/developing/usage/search-schemas) from Riak Search.
+Fetch a [search schema](/riak/kv/2.1.4/developing/usage/search-schemas) from Riak search.
## Request
diff --git a/content/riak/kv/2.1.4/developing/app-guide.md b/content/riak/kv/2.1.4/developing/app-guide.md
index 20b684cc34..766ab42b57 100644
--- a/content/riak/kv/2.1.4/developing/app-guide.md
+++ b/content/riak/kv/2.1.4/developing/app-guide.md
@@ -149,22 +149,22 @@ well as relevant links to Basho documentation.
## Search
-Riak Search provides you with [Apache
+Riak search provides you with [Apache
Solr](http://lucene.apache.org/solr/)-powered full-text indexing and
querying on top of the scalability, fault tolerance, and operational
-simplicity of Riak. Our motto for Riak Search: **Write it like Riak.
+simplicity of Riak. Our motto for Riak search: **Write it like Riak.
Query it like Solr**. That is, you can store objects in Riak [like normal][usage create objects] and run full-text queries on those objects later on
using the Solr API.
-* [Using Search][usage search] --- Getting started with Riak Search
+* [Using Search][usage search] --- Getting started with Riak search
* [Search Details][use ref search] --- A detailed overview of the concepts and design
- consideration behind Riak Search
+ consideration behind Riak search
* [Search Schema][usage search schema] --- How to create custom schemas for extracting data
- from Riak Search
+ from Riak search
### When to Use Search
-* **When you need a rich querying API** --- Riak Search gives you access
+* **When you need a rich querying API** --- Riak search gives you access
to the entirety of [Solr](http://lucene.apache.org/solr/)'s extremely
broad API, which enables you to query on the basis of wildcards,
strings, booleans, geolocation, ranges, language-specific fulltext,
@@ -216,7 +216,7 @@ own.
> **Note**:
>
-> Riak Data Types can be used in conjunction with Riak Search,
+> Riak Data Types can be used in conjunction with Riak search,
meaning that the data stored in counters, sets, and maps can be indexed
and searched just like any other data in Riak. Documentation on Data
Types and Search is coming soon.
@@ -284,13 +284,13 @@ or you can write and run your own MapReduce jobs in
### When Not to Use MapReduce
* **When another Riak feature will do** --- Before even considering
- using MapReduce, you should thoroughly investigate [Riak Search][usage search] or [secondary indexes][usage 2i] as possible
+ using MapReduce, you should thoroughly investigate [Riak search][usage search] or [secondary indexes][usage 2i] as possible
solutions to your needs.
In general, you should not think of MapReduce as, for example, Hadoop
within Riak. While it can be useful for certain types of
non-primary-key-based queries, it is neither a "Big Data" processing
-tool nor an indexing mechanism nor a replacement for [Riak Search][usage search]. If you do need a tool like Hadoop or Apache Spark, you should
+tool nor an indexing mechanism nor a replacement for [Riak search][usage search]. If you do need a tool like Hadoop or Apache Spark, you should
consider using Riak in conjunction with a more suitable data processing
tool.
@@ -301,7 +301,7 @@ following problem: how do I know which keys I should look for? Secondary
indexes (2i) provide a solution to this problem, enabling you to tag
objects with either binary or integer metadata and then query Riak for
all of the keys that share specific tags. 2i is especially useful if
-you're storing binary data that is opaque to features like [Riak Search][usage search].
+you're storing binary data that is opaque to features like [Riak search][usage search].
* [Using Secondary Indexes][usage 2i] --- A general guide to using 2i, along
with code samples and information on 2i features like pagination,
@@ -329,7 +329,7 @@ you're storing binary data that is opaque to features like [Riak Search][usage s
One thing to always bear in mind is that Riak enables you to mix and
match a wide variety of approaches in a single cluster. You can use
basic CRUD operations for some of your data, index some of your data to
-be queried by Riak Search, use Riak Data Types for another subset, etc.
+be queried by Riak search, use Riak Data Types for another subset, etc.
You are always free to use a wide array of Riak features---or you can
use none at all and stick to key/value operations.
diff --git a/content/riak/kv/2.1.4/developing/app-guide/advanced-mapreduce.md b/content/riak/kv/2.1.4/developing/app-guide/advanced-mapreduce.md
index bbd4fe9694..26a573a421 100644
--- a/content/riak/kv/2.1.4/developing/app-guide/advanced-mapreduce.md
+++ b/content/riak/kv/2.1.4/developing/app-guide/advanced-mapreduce.md
@@ -74,7 +74,7 @@ MapReduce should generally be treated as a fallback rather than a
standard part of an application. There are often ways to model data
such that dynamic queries become single key retrievals, which are
dramatically faster and more reliable in Riak, and tools such as Riak
-Search and 2i are simpler to use and may place less strain on a
+search and 2i are simpler to use and may place less strain on a
cluster.
### R=1
diff --git a/content/riak/kv/2.1.4/developing/app-guide/strong-consistency.md b/content/riak/kv/2.1.4/developing/app-guide/strong-consistency.md
index c75871a46d..d0b1392d5e 100644
--- a/content/riak/kv/2.1.4/developing/app-guide/strong-consistency.md
+++ b/content/riak/kv/2.1.4/developing/app-guide/strong-consistency.md
@@ -37,7 +37,7 @@ aliases:
> **Please Note:**
>
-> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
In versions 2.0 and later, Riak allows you to create buckets that
provide [strong consistency][use ref strong consistency] guarantees for the data stored within
diff --git a/content/riak/kv/2.1.4/developing/data-modeling.md b/content/riak/kv/2.1.4/developing/data-modeling.md
index 75de20b928..67278de26c 100644
--- a/content/riak/kv/2.1.4/developing/data-modeling.md
+++ b/content/riak/kv/2.1.4/developing/data-modeling.md
@@ -141,7 +141,7 @@ For storing log data from different systems, you could use unique
buckets for each system (e.g. `system1_log_data`, `system2_log_data`,
etc.) and write associated logs to the corresponding buckets. To
analyze that data, you could use Riak's MapReduce system for aggregation
-tasks, such as summing the counts of records for a date or Riak Search
+tasks, such as summing the counts of records for a date or Riak search
for more robust, text-based queries.
### Log Data Complex Case
@@ -225,7 +225,7 @@ For simple retrieval of a specific account, a user ID (plus perhaps a
secondary index on a username or email) is enough. If you foresee the
need to make queries on additional user attributes (e.g. creation time,
user type, or region), plan ahead and either set up additional secondary
-indexes or consider using [Riak Search](/riak/kv/2.1.4/developing/usage/search/) to index the JSON
+indexes or consider using [Riak search](/riak/kv/2.1.4/developing/usage/search/) to index the JSON
contents of the user account.
### User Accounts Community Examples
@@ -311,7 +311,7 @@ In Riak, you can store content of any kind, from HTML files to plain
text to JSON or XML or another document type entirely. Keep in mind that
data in Riak is opaque, with the exception of [Riak Data Types](/riak/kv/2.1.4/developing/data-types),
and so Riak won't "know" about the object unless it is indexed
-[using Riak Search](/riak/kv/2.1.4/developing/usage/search/) or [using secondary indexes](/riak/kv/2.1.4/developing/usage/secondary-indexes/).
+[using Riak search](/riak/kv/2.1.4/developing/usage/search/) or [using secondary indexes](/riak/kv/2.1.4/developing/usage/secondary-indexes/).
### Articles et al Complex Case
@@ -330,7 +330,7 @@ with comments would require your application to call from the posts
and comments buckets to assemble the view.
Other possible cases may involve performing operations on content beyond
-key/value pairs. [Riak Search](/riak/kv/2.1.4/developing/usage/search/) is recommended for use cases
+key/value pairs. [Riak search](/riak/kv/2.1.4/developing/usage/search/) is recommended for use cases
involving full-text search. For lighter-weight querying,
[using secondary indexes](/riak/kv/2.1.4/developing/usage/secondary-indexes/) \(2i) enables you to add metadata to objects to
either query for exact matches or to perform range queries. 2i also
diff --git a/content/riak/kv/2.1.4/developing/data-types.md b/content/riak/kv/2.1.4/developing/data-types.md
index f700a7ef9f..0e932af963 100644
--- a/content/riak/kv/2.1.4/developing/data-types.md
+++ b/content/riak/kv/2.1.4/developing/data-types.md
@@ -259,7 +259,7 @@ All the examples use the bucket type names from above (`counters`, `sets`, and `
Riak data types can be searched like any other object, but with the
added benefit that your data type is indexed as a different type by Solr,
-the search platform behind Riak Search.
+the search platform behind Riak search.
In our Search documentation we offer a [full tutorial](../usage/searching-data-types) as well as a variety of [examples](../usage/search/#data-types-and-search-examples), including code
samples from each of our official client libraries.
diff --git a/content/riak/kv/2.1.4/developing/usage.md b/content/riak/kv/2.1.4/developing/usage.md
index 2a0a210814..4876bacfca 100644
--- a/content/riak/kv/2.1.4/developing/usage.md
+++ b/content/riak/kv/2.1.4/developing/usage.md
@@ -111,7 +111,7 @@ Tutorial on using Riak KV as a document store.
#### [Custom Extractors](./custom-extractors)
-Details on creating and registering custom extractors with Riak Search.
+Details on creating and registering custom extractors with Riak search.
[Learn More >>](./custom-extractors)
diff --git a/content/riak/kv/2.1.4/developing/usage/custom-extractors.md b/content/riak/kv/2.1.4/developing/usage/custom-extractors.md
index b94127dbb5..39e1b01f12 100644
--- a/content/riak/kv/2.1.4/developing/usage/custom-extractors.md
+++ b/content/riak/kv/2.1.4/developing/usage/custom-extractors.md
@@ -15,8 +15,8 @@ aliases:
- /riak/kv/2.1.4/dev/search/custom-extractors
---
-Solr, and by extension Riak Search, has default extractors for a wide
-variety of data types, including JSON, XML, and plaintext. Riak Search
+Solr, and by extension Riak search, has default extractors for a wide
+variety of data types, including JSON, XML, and plaintext. Riak search
ships with the following extractors:
Content Type | Erlang Module
@@ -30,7 +30,7 @@ No specified type | `yz_noop_extractor`
There are also built-in extractors for [Riak Data Types](/riak/kv/2.1.4/developing/usage/searching-data-types).
If you're working with a data format that does not have a default Solr
-extractor, you can create your own and register it with Riak Search.
+extractor, you can create your own and register it with Riak search.
We'll show you how to do so by way of example.
## The Extractor Interface
@@ -195,7 +195,7 @@ extractor has been successfully registered.
## Verifying Our Custom Extractor
-Now that Riak Search knows how to decode and extract HTTP header packet
+Now that Riak search knows how to decode and extract HTTP header packet
data, let's store some in Riak and then query it. We'll put the example
packet data from above in a `google_packet.bin` file. Then, we'll `PUT`
that binary to Riak's `/search/extract` endpoint:
diff --git a/content/riak/kv/2.1.4/developing/usage/document-store.md b/content/riak/kv/2.1.4/developing/usage/document-store.md
index 8f97c8b798..663f0cafeb 100644
--- a/content/riak/kv/2.1.4/developing/usage/document-store.md
+++ b/content/riak/kv/2.1.4/developing/usage/document-store.md
@@ -16,18 +16,18 @@ aliases:
---
Although Riak wasn't explicitly created as a document store, two
-features recently added to Riak---[Riak Search](/riak/kv/2.1.4/developing/usage/search/) and [Riak Data Types](/riak/kv/2.1.4/developing/data-types/)---make it possible to use Riak as a
+features recently added to Riak---[Riak search](/riak/kv/2.1.4/developing/usage/search/) and [Riak Data Types](/riak/kv/2.1.4/developing/data-types/)---make it possible to use Riak as a
highly scalable document store with rich querying capabilities. In this
tutorial, we'll build a basic implementation of a document store using
[Riak maps](/riak/kv/2.1.4/developing/data-types/maps).
## Basic Approach
-Riak Search enables you to implement a document store in Riak in a
+Riak search enables you to implement a document store in Riak in a
variety of ways. You could, for example, store and query JSON objects or
XML and then retrieve them later via Solr queries. In this tutorial,
however, we will store data in [Riak maps](/riak/kv/2.1.4/developing/data-types/maps),
-index that data using Riak Search, and then run Solr queries against
+index that data using Riak search, and then run Solr queries against
those stored objects.
You can think of these Search indexes as **collections**. Each indexed
@@ -65,7 +65,7 @@ Date posted | Register | Datetime
Whether the post is currently in draft form | Flag | Boolean
Before we start actually creating and storing blog posts, let's set up
-Riak Search with an appropriate index and schema.
+Riak search with an appropriate index and schema.
## Creating a Schema and Index
@@ -209,7 +209,7 @@ curl -XPUT $RIAK_HOST/search/index/blog_posts \
Collections are not a concept that is native to Riak but we can easily
mimic collections by thinking of a bucket type as a collection. When we
-associate a bucket type with a Riak Search index, all of the objects
+associate a bucket type with a Riak search index, all of the objects
stored in any bucket of that bucket type will be queryable on the basis
of that one index. For this tutorial, we'll create a bucket type called
`cms` and think of that as a collection. We could also restrict our
diff --git a/content/riak/kv/2.1.4/developing/usage/search-schemas.md b/content/riak/kv/2.1.4/developing/usage/search-schemas.md
index f8f949b9a3..7391660523 100644
--- a/content/riak/kv/2.1.4/developing/usage/search-schemas.md
+++ b/content/riak/kv/2.1.4/developing/usage/search-schemas.md
@@ -19,21 +19,21 @@ aliases:
> **Note on Search 2.0 vs. Legacy Search**
>
-> This document refers to the new Riak Search 2.0 with
+> This document refers to the new Riak search 2.0 with
[Solr](http://lucene.apache.org/solr/) integration (codenamed
-Yokozuna). For information about the deprecated Riak Search, visit [the old Using Riak Search docs](http://docs.basho.com/riak/1.4.10/dev/using/search/).
+Yokozuna). For information about the deprecated Riak search, visit [the old Using Riak search docs](http://docs.basho.com/riak/1.4.10/dev/using/search/).
-Riak Search is built for ease of use, allowing you to write values into
-Riak and query for values using Solr. Riak Search does a lot of work
+Riak search is built for ease of use, allowing you to write values into
+Riak and query for values using Solr. Riak search does a lot of work
under the hood to convert your values---plain text, JSON, XML, [Riak Data Types](/riak/kv/2.1.4/developing/data-types/), and [more](/riak/kv/2.1.4/developing/usage/custom-extractors)---into something that can be indexed and searched later.
Nonetheless, you must still instruct Riak/Solr how to index a value. Are
you providing an array of strings? An integer? A date? Is your text in
-English or Russian? You can provide such instructions to Riak Search by
+English or Russian? You can provide such instructions to Riak search by
defining a Solr **schema**.
## The Default Schema
-Riak Search comes bundled with a default schema named `_yz_default`. The
+Riak search comes bundled with a default schema named `_yz_default`. The
default schema covers a wide range of possible field types. You can find
the default schema [on
GitHub](https://raw.github.com/basho/yokozuna/develop/priv/default_schema.xml).
@@ -48,7 +48,7 @@ amounts of disk space, so pay special attention to those indexes.
We'll show you how you can create custom schemas by way of example.
Let's say that you have already created a schema named `cartoons` in a
file named `cartoons.xml`. This would register the custom schema in Riak
-Search:
+search:
```java
import org.apache.commons.io.FileUtils;
@@ -124,11 +124,11 @@ curl -XPUT http://localhost:8098/search/schema/cartoons \
The first step in creating a custom schema is to define exactly what
fields you must index. Part of that step is understanding how Riak
-Search extractors function.
+search extractors function.
### Extractors
-In Riak Search, extractors are modules responsible for pulling out a
+In Riak search, extractors are modules responsible for pulling out a
list of fields and values from a Riak object. How this is achieved
depends on the object's content type, but the two common cases are JSON
and XML, which operate similarly. Our examples here will use JSON.
@@ -176,21 +176,21 @@ Solr schemas can be very complex, containing many types and analyzers.
Refer to the [Solr 4.7 reference
guide](http://archive.apache.org/dist/lucene/solr/ref-guide/apache-solr-ref-guide-4.7.pdf)
for a complete list. You should be aware, however, that there are a few
-fields that are required by Riak Search in order to properly distribute
+fields that are required by Riak search in order to properly distribute
an object across a [cluster][concept clusters]. These fields are all prefixed
with `_yz`, which stands for
[Yokozuna](https://github.com/basho/yokozuna), the original code name
-for Riak Search.
+for Riak search.
Below is a bare minimum skeleton Solr Schema. It won't do much for you
-other than allow Riak Search to properly manage your stored objects.
+other than allow Riak search to properly manage your stored objects.
```xml
-
+
@@ -211,14 +211,14 @@ other than allow Riak Search to properly manage your stored objects.
```
-If you're missing any of the above fields, Riak Search will reject your
+If you're missing any of the above fields, Riak search will reject your
custom schema. The value for `<uniqueKey>` _must_ be `_yz_id`.
In the table below, you'll find a description of the various required
fields. You'll rarely need to use any fields other than `_yz_rt` (bucket
type), `_yz_rb` (bucket) and `_yz_rk` (Riak key). On occasion, `_yz_err`
can be helpful if you suspect that your extractors are failing.
-Malformed JSON or XML will cause Riak Search to index a key and set
+Malformed JSON or XML will cause Riak search to index a key and set
`_yz_err` to 1, allowing you to reindex with proper values later.
Field | Name | Description
@@ -263,7 +263,7 @@ field, you also must set `multiValued` to `true`.
-
+
diff --git a/content/riak/kv/2.1.4/developing/usage/search.md b/content/riak/kv/2.1.4/developing/usage/search.md
index 1453b3a16d..05835fc76d 100644
--- a/content/riak/kv/2.1.4/developing/usage/search.md
+++ b/content/riak/kv/2.1.4/developing/usage/search.md
@@ -19,9 +19,9 @@ aliases:
## Setup
-Riak Search 2.0 is an integration of Solr (for indexing and querying)
+Riak search 2.0 is an integration of Solr (for indexing and querying)
and Riak (for storage and distribution). There are a few points of
-interest that a user of Riak Search will have to keep in mind in order
+interest that a user of Riak search will have to keep in mind in order
to properly store and later query for values.
1. **Schemas** explain to Solr how to index fields
@@ -29,7 +29,7 @@ to properly store and later query for values.
3. **Bucket-index association** signals to Riak *when* to index values
(this also includes bucket type-index association)
-Riak Search must first be configured with a Solr schema so that Solr
+Riak search must first be configured with a Solr schema so that Solr
knows how to index value fields. If you don't define one, you're
provided with a default schema named `_yz_default`, which can be found
[on
@@ -38,7 +38,7 @@ GitHub](https://raw.githubusercontent.com/basho/yokozuna/develop/priv/default_sc
The examples in this document will presume the default. You can read
more about creating custom schemas in [Search Schema][usage search schema], which you'll likely want to use in a production environment.
-Next, you must create a named Solr index through Riak Search. This index
+Next, you must create a named Solr index through Riak search. This index
represents a collection of similar data that you connect with to perform
queries. When creating an index, you can optionally provide a schema. If
you do not, the default schema will be used. Here we'll `curl` create an
@@ -246,7 +246,7 @@ More information can be found in the [Solr
documentation](http://wiki.apache.org/solr/SolrPerformanceFactors).
With a Solr schema, index, and association in place (and possibly a
-security setup as well), we're ready to start using Riak Search. First,
+security setup as well), we're ready to start using Riak search. First,
populate the `cat` bucket with values, in this case information about
four cats: Liono, Cheetara, Snarf, and Panthro.
@@ -495,12 +495,12 @@ curl -XPUT $RIAK_HOST/types/animals/buckets/cats/keys/panthro \
```
If you've used Riak before, you may have noticed that this is no
-different from storing values without Riak Search. That's because we
-designed Riak Search with the following design goal in mind:
+different from storing values without Riak search. That's because we
+designed Riak search with the following goal in mind:
#### Write it like Riak, query it like Solr
-But how does Riak Search know how to index values, given that you can
+But how does Riak search know how to index values, given that you can
store opaque values in Riak? For that, we employ extractors.
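The "write it like Riak, query it like Solr" goal can be made concrete with a small sketch: the same cluster exposes a plain KV write path and a Solr-style read path. The bucket type, bucket, key, index, and field names below are illustrative, not prescribed:

```python
def kv_put_url(host, bucket_type, bucket, key):
    # Plain Riak KV write path -- nothing search-specific here.
    return "%s/types/%s/buckets/%s/keys/%s" % (host, bucket_type, bucket, key)

def solr_query_url(host, index, query):
    # Solr-style read path exposed by the search subsystem.
    return "%s/search/query/%s?wt=json&q=%s" % (host, index, query)

put = kv_put_url("http://localhost:8098", "animals", "cats", "liono")
query = solr_query_url("http://localhost:8098", "famous", "name_s:Lion*")
```

Writes never mention the index; the association between bucket and index is what routes values into Solr behind the scenes.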
## Extractors
@@ -510,7 +510,7 @@ content type and convert it into a list of fields that can be indexed by
Solr. This is done transparently and automatically as part of the
indexing process. You can even create your own [custom extractors](/riak/kv/2.1.4/developing/usage/custom-extractors).
-Our current example uses the JSON extractor, but Riak Search also
+Our current example uses the JSON extractor, but Riak search also
extracts indexable fields from the following content types:
* JSON (`application/json`)
@@ -560,7 +560,7 @@ one of the default types. A full tutorial can be found in [Custom Search Extract
### Automatic Fields
-When a Riak object is indexed, Riak Search automatically inserts a few
+When a Riak object is indexed, Riak search automatically inserts a few
extra fields as well. These are necessary for a variety of technical
reasons, and for the most part you don't need to think about them.
However, there are a few fields which you may find useful:
@@ -1248,7 +1248,7 @@ curl "$RIAK_HOST/search/query/famous?wt=json&q=*:*&start=$START&rows=$ROWS_PER_P
### Pagination Warning
-Distributed pagination in Riak Search cannot be used reliably when
+Distributed pagination in Riak search cannot be used reliably when
sorting on fields that can have different values per replica of the same
object, namely `score` and `_yz_id`. In the case of sorting by these
fields, you may receive redundant objects. In the case of `score`, the
@@ -1272,14 +1272,14 @@ fix this shortcoming in a future version of Riak.
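The `start`/`rows` arithmetic used in the query above is simple enough to sketch (page numbers here are zero-based and `rows_per_page` is whatever your UI shows; both names are illustrative):

```python
def pagination_params(page, rows_per_page):
    """Translate a zero-based page number into Solr's start/rows
    offset parameters, as passed in the query URL above."""
    return {"start": page * rows_per_page, "rows": rows_per_page}

# Page 3 of a 10-per-page listing skips the first 30 results.
params = pagination_params(3, 10)
```

Keep the warning above in mind: this arithmetic is only reliable when the sort field has identical values across replicas.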
### MapReduce
-Riak Search allows for piping search results as inputs for
+Riak search allows for piping search results as inputs for
[MapReduce](/riak/kv/2.1.4/developing/usage/mapreduce/) jobs. This is a useful cross-section for
performing post-calculations of results or aggregations of ad-hoc
-queries. The Riak Search MapReduce integration works similarly to
+queries. The Riak search MapReduce integration works similarly to
regular MapReduce, with the notable exception that your input is not a
bucket, but rather index and query arguments to the `yokozuna` module
and `mapred_search` function (an Erlang `module:function` pair that adds
-the Riak Search hook to MapReduce).
+the Riak search hook to MapReduce).
```json
{
diff --git a/content/riak/kv/2.1.4/developing/usage/secondary-indexes.md b/content/riak/kv/2.1.4/developing/usage/secondary-indexes.md
index 474bcdef7c..fbf6c607b5 100644
--- a/content/riak/kv/2.1.4/developing/usage/secondary-indexes.md
+++ b/content/riak/kv/2.1.4/developing/usage/secondary-indexes.md
@@ -19,12 +19,12 @@ aliases:
[plan backend memory]: /riak/kv/2.1.4/setup/planning/backend/memory
[use ref strong consistency]: /riak/kv/2.1.4/using/reference/strong-consistency
-> **Note: Riak Search preferred for querying**
+> **Note: Riak search preferred for querying**
>
> If you're interested in non-primary-key-based querying in Riak, i.e. if
you're looking to go beyond straightforward K/V operations, we now
-recommend [Riak Search](/riak/kv/2.1.4/developing/usage/search/) rather than secondary indexes for
-a variety of reasons. Most importantly, Riak Search has a far more
+recommend [Riak search](/riak/kv/2.1.4/developing/usage/search/) rather than secondary indexes for
+a variety of reasons. Most importantly, Riak search has a far more
capacious querying API and can be used with all of Riak's storage
backends.
@@ -37,7 +37,7 @@ Secondary indexes can be either a binary or string, such as
`sensor_1_data` or `admin_user` or `click_event`, or an integer, such as
`99` or `141121`.
-[Riak Search](/riak/kv/2.1.4/developing/usage/search/) serves analogous purposes but is quite
+[Riak search](/riak/kv/2.1.4/developing/usage/search/) serves analogous purposes but is quite
different because it parses key/value data itself and builds indexes on
the basis of Solr schemas.
@@ -75,7 +75,7 @@ you to discover them later. Indexing enables you to tag those objects
and find all objects with the same tag in a specified bucket later on.
2i is thus recommended when your use case requires an easy-to-use search
-mechanism that does not require a schema (as does [Riak Search](/riak/kv/2.1.4/using/reference/search/#schemas)) and a basic query interface, i.e. an interface that
+mechanism that does not require a schema (as does [Riak search](/riak/kv/2.1.4/using/reference/search/#schemas)) and a basic query interface, i.e. an interface that
enables an application to tell Riak things like "fetch all objects
tagged with the string `Milwaukee_Bucks`" or "fetch all objects tagged
with numbers between 1500 and 1509."
@@ -89,7 +89,7 @@ piggybacks off of read-repair.
* If your ring size exceeds 512 partitions, 2i can cause performance
issues in large clusters.
* When you need more than the exact match and range searches that 2i
- supports. If that's the case, we recommend checking out [Riak Search](/riak/kv/2.1.4/developing/usage/search/).
+ supports. If that's the case, we recommend checking out [Riak search](/riak/kv/2.1.4/developing/usage/search/).
* When you want to use composite queries. A query like
`last_name=zezeski AND state=MD` would have to be split into two
queries and the results merged (or it would need to involve
diff --git a/content/riak/kv/2.1.4/introduction.md b/content/riak/kv/2.1.4/introduction.md
index 22a4d6175d..20d5d1b19d 100644
--- a/content/riak/kv/2.1.4/introduction.md
+++ b/content/riak/kv/2.1.4/introduction.md
@@ -30,10 +30,10 @@ that all of the new features listed below are optional:
* **Riak Data Types** --- Riak's new CRDT-based [Data Types](/riak/kv/2.1.4/developing/data-types) can
simplify modeling data in Riak, but are only used in buckets
explicitly configured to use them.
-* **Strong Consistency, Riak Security, and the New Riak Search** ---
+* **Strong Consistency, Riak Security, and the New Riak search** ---
These are subsystems in Riak that must be explicitly turned on to
work. If not turned on, they will have no impact on performance.
- Furthermore, the older Riak Search will continue to be included with
+ Furthermore, the older Riak search will continue to be included with
Riak.
* **Security** --- [Authentication and authorization](/riak/kv/2.1.4/using/security/basics) can be enabled
or disabled at any time.
@@ -98,21 +98,21 @@ Brown](https://github.com/russelldb).
## Riak Search 2.0 (codename: Yokozuna)
-Riak Search 2.0 is a complete, top-to-bottom replacement for Riak
-Search, integrating Riak with [Apache Solr](https://lucene.apache.org/solr/)'s full-text search capabilities and supporting Solr's client query APIs.
+Riak search 2.0 is a complete, top-to-bottom replacement for Riak
+search, integrating Riak with [Apache Solr](https://lucene.apache.org/solr/)'s full-text search capabilities and supporting Solr's client query APIs.
#### Relevant Docs
* [Using Search](/riak/kv/2.1.4/developing/usage/search) provides an overview of how to use the new
- Riak Search.
+ Riak search.
* [Search Schema](/riak/kv/2.1.4/developing/usage/search-schemas) shows you how to create and manage custom search
schemas.
* [Search Details](/riak/kv/2.1.4/using/reference/search) provides an in-depth look at the design
- considerations that went into the new Riak Search.
+ considerations that went into the new Riak search.
#### Video
-[Riak Search 2.0](https://www.youtube.com/watch?v=-c1eynVLNMo) by Basho
+[Riak search 2.0](https://www.youtube.com/watch?v=-c1eynVLNMo) by Basho
engineer and documentarian [Eric Redmond](https://github.com/coderoshi).
## Strong Consistency
@@ -145,7 +145,7 @@ check out [part 2](https://www.youtube.com/watch?v=gXJxbhca5Xg).
Riak 2.0 enables you to manage:
* **Authorization** to perform specific tasks, from GETs and PUTs to
-running MapReduce jobs to administering Riak Search.
+running MapReduce jobs to administering Riak search.
* **Authentication** of Riak clients seeking access to Riak.
@@ -317,7 +317,7 @@ another. Incompatibilities are marked with a
-**†** The data indexed by Riak Search can be
+**†** The data indexed by Riak search can be
stored in a strongly consistent fashion, but indexes themselves are
eventually consistent
**‡** If secondary indexes are attached to an
diff --git a/content/riak/kv/2.1.4/learn/concepts/strong-consistency.md b/content/riak/kv/2.1.4/learn/concepts/strong-consistency.md
index 323e9959ec..bf7942d0a3 100644
--- a/content/riak/kv/2.1.4/learn/concepts/strong-consistency.md
+++ b/content/riak/kv/2.1.4/learn/concepts/strong-consistency.md
@@ -20,7 +20,7 @@ aliases:
Please Note:
-Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
Riak was originally designed as an [eventually consistent](/riak/kv/2.1.4/learn/concepts/eventual-consistency) system, fundamentally geared toward providing partition
diff --git a/content/riak/kv/2.1.4/learn/glossary.md b/content/riak/kv/2.1.4/learn/glossary.md
index 2cf117b736..7f6e233e05 100644
--- a/content/riak/kv/2.1.4/learn/glossary.md
+++ b/content/riak/kv/2.1.4/learn/glossary.md
@@ -278,7 +278,7 @@ best described as "UNIX pipes for Riak."
## Riak Search
-Riak Search is a distributed, scalable, failure-tolerant, realtime,
+Riak search is a distributed, scalable, failure-tolerant, realtime,
full-text search engine integrating [Apache
Solr](https://lucene.apache.org/solr/) with Riak KV.
diff --git a/content/riak/kv/2.1.4/learn/use-cases.md b/content/riak/kv/2.1.4/learn/use-cases.md
index fc6353b8f6..47cf8f2698 100644
--- a/content/riak/kv/2.1.4/learn/use-cases.md
+++ b/content/riak/kv/2.1.4/learn/use-cases.md
@@ -153,7 +153,7 @@ For storing log data from different systems, you could use unique
buckets for each system (e.g. `system1_log_data`, `system2_log_data`,
etc.) and write associated logs to the corresponding buckets. To
analyze that data, you could use Riak's MapReduce system for aggregation
-tasks, such as summing the counts of records for a date or Riak Search
+tasks, such as summing the counts of records for a date or Riak search
for more robust, text-based queries.
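The aggregation described here, summing the counts of records for a date, reduces to a small fold, which is essentially what a MapReduce job would compute across the log buckets. The records below are invented for illustration:

```python
from collections import Counter

logs = [
    {"date": "2016-01-01", "system": "system1"},
    {"date": "2016-01-01", "system": "system2"},
    {"date": "2016-01-02", "system": "system1"},
]

def counts_by_date(records):
    # Reduce phase: sum the number of records seen per date.
    return Counter(r["date"] for r in records)

totals = counts_by_date(logs)
```

In a real deployment the map phase would emit one entry per stored log object and the reduce phase would perform this summation cluster-wide.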
### Log Data Complex Case
@@ -237,7 +237,7 @@ For simple retrieval of a specific account, a user ID (plus perhaps a
secondary index on a username or email) is enough. If you foresee the
need to make queries on additional user attributes (e.g. creation time,
user type, or region), plan ahead and either set up additional secondary
-indexes or consider using [Riak Search][usage search] to index the JSON
+indexes or consider using [Riak search][usage search] to index the JSON
contents of the user account.
### User Accounts Community Examples
@@ -323,7 +323,7 @@ In Riak, you can store content of any kind, from HTML files to plain
text to JSON or XML or another document type entirely. Keep in mind that
data in Riak is opaque, with the exception of [Riak Data Types](/riak/kv/2.1.4/developing/data-types),
and so Riak won't "know" about the object unless it is indexed
-[using Riak Search][usage search] or [using secondary indexes][usage secondary-indexes].
+[using Riak search][usage search] or [using secondary indexes][usage secondary-indexes].
### Articles et al Complex Case
@@ -342,7 +342,7 @@ with comments would require your application to call from the posts
and comments buckets to assemble the view.
Other possible cases may involve performing operations on content beyond
-key/value pairs. [Riak Search][usage search] is recommended for use cases
+key/value pairs. [Riak search][usage search] is recommended for use cases
involving full-text search. For lighter-weight querying,
[using secondary indexes][usage secondary-indexes] \(2i) enables you to add metadata to objects to
either query for exact matches or to perform range queries. 2i also
diff --git a/content/riak/kv/2.1.4/setup/installing/source/jvm.md b/content/riak/kv/2.1.4/setup/installing/source/jvm.md
index 9554c697f9..db8d33ced9 100644
--- a/content/riak/kv/2.1.4/setup/installing/source/jvm.md
+++ b/content/riak/kv/2.1.4/setup/installing/source/jvm.md
@@ -21,10 +21,10 @@ aliases:
[usage search]: /riak/kv/2.1.4/developing/usage/search
-If you are using [Riak Search 2.0][usage search], codename Yokozuna,
+If you are using [Riak search 2.0][usage search], codename Yokozuna,
you will need to install **Java 1.6 or later** to run [Apache
Solr](https://lucene.apache.org/solr/), the search platform that powers
-Riak Search.
+Riak search.
We recommend using Oracle's [JDK
7u25](http://www.oracle.com/technetwork/java/javase/7u25-relnotes-1955741.html).
@@ -35,7 +35,7 @@ page](http://www.oracle.com/technetwork/java/javase/documentation/index.html).
## Installing Solr on OS X
-If you're using Riak Search on Mac OS X, you may see the following
+If you're using Riak search on Mac OS X, you may see the following
error:
```java
diff --git a/content/riak/kv/2.1.4/setup/planning/backend/bitcask.md b/content/riak/kv/2.1.4/setup/planning/backend/bitcask.md
index 9e036fc31a..1dd1b79fb1 100644
--- a/content/riak/kv/2.1.4/setup/planning/backend/bitcask.md
+++ b/content/riak/kv/2.1.4/setup/planning/backend/bitcask.md
@@ -750,7 +750,7 @@ bitcask.expiry.grace_time = 1h
#### Automatic expiration and Riak Search
-If you are using [Riak Search][usage search] in conjunction with
+If you are using [Riak search][usage search] in conjunction with
Bitcask, please be aware that automatic expiry does not apply to [Search Indexes](../../../../developing/usage/search). If objects are indexed using Search,
those objects can be expired by Bitcask yet still registered in Search
indexes, which means that Search queries may return keys that no longer
diff --git a/content/riak/kv/2.1.4/setup/upgrading/checklist.md b/content/riak/kv/2.1.4/setup/upgrading/checklist.md
index ea0bd4a4ab..4160d7ddbe 100644
--- a/content/riak/kv/2.1.4/setup/upgrading/checklist.md
+++ b/content/riak/kv/2.1.4/setup/upgrading/checklist.md
@@ -89,7 +89,7 @@ We have compiled these considerations and questions into separate categories for
place if `allow_mult` is set to `true`?
- Have you carefully weighed the [consistency trade-offs][concept eventual consistency] that must be made if `allow_mult` is set to `false`?
- Are all of your [apps replication properties][apps replication properties] configured correctly and uniformly across the cluster?
- - If you are using [Riak Search][usage search], is it enabled on all
+ - If you are using [Riak search][usage search], is it enabled on all
nodes? If you are not, has it been disabled on all nodes?
- If you are using [strong consistency][concept strong consistency] for some or all of your
data:
diff --git a/content/riak/kv/2.1.4/setup/upgrading/search.md b/content/riak/kv/2.1.4/setup/upgrading/search.md
index 1ad8219f26..82e2321ac2 100644
--- a/content/riak/kv/2.1.4/setup/upgrading/search.md
+++ b/content/riak/kv/2.1.4/setup/upgrading/search.md
@@ -19,7 +19,7 @@ aliases:
If you're using Search in a version of Riak prior to 2.0 (1.3.0 to
1.4.x), you should follow these steps to migrate your search indexes
-from the legacy `merge_index` to the new Solr-backed ([Yokozuna](../../../using/reference/search) indexes. The legacy version of Riak Search is now deprecated
+from the legacy `merge_index` to the new Solr-backed [Yokozuna](../../../using/reference/search) indexes. The legacy version of Riak search is now deprecated
and does not support most new 2.0 features (i.e. no [Riak Data Types](../../../developing/data-types), [bucket types](../../../using/reference/bucket-types), [strong consistency](../../../using/reference/strong-consistency), or [security](../../../using/security/)), so we highly recommend that you migrate.
And please note that the legacy `merge_index`-based search (aka legacy
@@ -35,7 +35,7 @@ these steps at a time when your cluster is relatively light on traffic,
i.e. _not_ the week before Christmas.
The main goal of a live migration is to stand up indexes in the new Riak
-Search that parallel the existing ones in legacy. New writes add entries
+search that parallel the existing ones in legacy. New writes add entries
to both indexes while AAE adds entries in the new indexes for existing
data.
@@ -75,7 +75,7 @@ algorithm is bad at getting rid of large index files.
## Steps to Upgrading
1. First, you'll perform a normal [rolling upgrade](../cluster).
- As you upgrade, enable `yokozuna` (the new Riak Search library) on
+ As you upgrade, enable `yokozuna` (the new Riak search library) on
each node. If you're still using `app.config` it's called `yokozuna`.
If you've chosen to upgrade to the new `riak.conf` config option, it's
called `search`.
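In `riak.conf` terms, step 1 amounts to one line per node (a sketch; the equivalent `app.config` entry lives under the `yokozuna` application, and file locations vary by platform):

```riakconf
search = on
```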
@@ -223,7 +223,7 @@ have occurred on every node.
node you know that AAE has brought all new indexes up to date.
7. Next, call the following command that will give HTTP and PB query
-control to the new Riak Search.
+control to the new Riak search.
```curl
riak-admin search switch-to-new-search
@@ -247,7 +247,7 @@ buckets. This deactivates legacy Search.
-d '{"props":{"search": false}}'
```
-9. Disable the Riak Search process on each node by setting `riak_search`
+9. Disable the Riak search process on each node by setting `riak_search`
`enabled` to `false`.
```appconfig
diff --git a/content/riak/kv/2.1.4/setup/upgrading/version.md b/content/riak/kv/2.1.4/setup/upgrading/version.md
index 98fd745619..b34cfc7853 100644
--- a/content/riak/kv/2.1.4/setup/upgrading/version.md
+++ b/content/riak/kv/2.1.4/setup/upgrading/version.md
@@ -37,7 +37,7 @@ was built with those features in mind. There are official
While we strongly recommend using the newest versions of these clients,
older versions will still work with Riak 2.0, with the drawback that
-those older clients will not able to take advantage of [new features](/riak/kv/2.1.4/introduction) like [data types](/riak/kv/2.1.4/developing/data-types) or the new [Riak Search](/riak/kv/2.1.4/using/reference/search).
+those older clients will not be able to take advantage of [new features](/riak/kv/2.1.4/introduction) like [data types](/riak/kv/2.1.4/developing/data-types) or the new [Riak search](/riak/kv/2.1.4/using/reference/search).
## Bucket Types
@@ -141,7 +141,7 @@ If you decide to upgrade to version 2.0, you can still downgrade your
cluster to an earlier version of Riak if you wish, _unless_ you perform
one of the following actions in your cluster:
-* Index data to be used in conjunction with the new [Riak Search](/riak/kv/2.1.4/using/reference/search).
+* Index data to be used in conjunction with the new [Riak search](/riak/kv/2.1.4/using/reference/search).
* Create _and_ activate one or more [bucket types](/riak/kv/2.1.4/using/reference/bucket-types/). By extension, you will not be able to downgrade your cluster if you have used the following features, both of which rely on bucket types:
- [Strong consistency](/riak/kv/2.1.4/using/reference/strong-consistency)
- [Riak Data Types](/riak/kv/2.1.4/developing/data-types)
@@ -209,7 +209,7 @@ default to a value of `15`, which can cause problems in some clusters.
## Upgrading Search
-Information on upgrading Riak Search to 2.0 can be found in our
+Information on upgrading Riak search to 2.0 can be found in our
[Search upgrade guide](/riak/kv/2.1.4/setup/upgrading/search).
## Migrating from Short Names
diff --git a/content/riak/kv/2.1.4/using/admin/riak-admin.md b/content/riak/kv/2.1.4/using/admin/riak-admin.md
index e6288396ba..6ce74ae421 100644
--- a/content/riak/kv/2.1.4/using/admin/riak-admin.md
+++ b/content/riak/kv/2.1.4/using/admin/riak-admin.md
@@ -590,7 +590,7 @@ riak-admin repair-2i kill
## search
The search command provides sub-commands for various administrative
-work related to the new Riak Search.
+work related to the new Riak search.
```bash
riak-admin search
@@ -639,7 +639,7 @@ riak-admin search switch-to-new-search
```
Switch handling of the HTTP `/solr/<index>/select` resource and
-protocol buffer query messages from legacy Riak Search to new Search
+protocol buffer query messages from legacy Riak search to new Search
(Yokozuna).
## services
diff --git a/content/riak/kv/2.1.4/using/cluster-operations/active-anti-entropy.md b/content/riak/kv/2.1.4/using/cluster-operations/active-anti-entropy.md
index f85eac31e1..96edcc22b8 100644
--- a/content/riak/kv/2.1.4/using/cluster-operations/active-anti-entropy.md
+++ b/content/riak/kv/2.1.4/using/cluster-operations/active-anti-entropy.md
@@ -262,9 +262,9 @@ AAE-related background tasks, analogous to [open files limit](../../performance/
## AAE and Riak Search
Riak's AAE subsystem works to repair object inconsistencies both
-for normal key/value objects as well as data related to [Riak Search](../../../developing/usage/search). In particular, AAE acts on indexes stored in
+for normal key/value objects as well as data related to [Riak search](../../../developing/usage/search). In particular, AAE acts on indexes stored in
[Solr](http://lucene.apache.org/solr/), the search platform that drives
-Riak Search. Implementation details for AAE and Search can be found in
+Riak search. Implementation details for AAE and Search can be found in
the [Search Details](../../reference/search/#active-anti-entropy-aae)
documentation.
diff --git a/content/riak/kv/2.1.4/using/cluster-operations/inspecting-node.md b/content/riak/kv/2.1.4/using/cluster-operations/inspecting-node.md
index 251b5c1328..52933316ca 100644
--- a/content/riak/kv/2.1.4/using/cluster-operations/inspecting-node.md
+++ b/content/riak/kv/2.1.4/using/cluster-operations/inspecting-node.md
@@ -297,7 +297,7 @@ Stat | Description
`erlydtl_version` | [ErlyDTL](http://github.com/erlydtl/erlydtl)
`riak_control_version` | [Riak Control](http://github.com/basho/riak_control)
`cluster_info_version` | [Cluster Information](http://github.com/basho/cluster_info)
-`riak_search_version` | [Riak Search](http://github.com/basho/riak_search)
+`riak_search_version` | [Riak search](http://github.com/basho/riak_search)
`merge_index_version` | [Merge Index](http://github.com/basho/merge_index)
`riak_kv_version` | [Riak KV](http://github.com/basho/riak_kv)
`sidejob_version` | [Sidejob](http://github.com/basho/sidejob)
@@ -326,17 +326,17 @@ Stat | Description
### Riak Search Statistics
-The following statistics related to Riak Search message queues are
+The following statistics related to Riak search message queues are
available.
Stat | Description
-----------------------------|---------------------------------------------------
-`riak_search_vnodeq_max` | Maximum number of unprocessed messages all virtual node (vnode) message queues in the Riak Search subsystem have received on this node in the last minute
-`riak_search_vnodeq_mean` | Mean number of unprocessed messages all vnode message queues in the Riak Search subsystem have received on this node in the last minute
-`riak_search_vnodeq_median` | Median number of unprocessed messages all vnode message queues in the Riak Search subsystem have received on this node in the last minute
-`riak_search_vnodeq_min` | Minimum number of unprocessed messages all vnode message queues in the Riak Search subsystem have received on this node in the last minute
-`riak_search_vnodeq_total` | Total number of unprocessed messages all vnode message queues in the Riak Search subsystem have received on this node since it was started
-`riak_search_vnodes_running` | Total number of vnodes currently running in the Riak Search subsystem
+`riak_search_vnodeq_max` | Maximum number of unprocessed messages all virtual node (vnode) message queues in the Riak search subsystem have received on this node in the last minute
+`riak_search_vnodeq_mean` | Mean number of unprocessed messages all vnode message queues in the Riak search subsystem have received on this node in the last minute
+`riak_search_vnodeq_median` | Median number of unprocessed messages all vnode message queues in the Riak search subsystem have received on this node in the last minute
+`riak_search_vnodeq_min` | Minimum number of unprocessed messages all vnode message queues in the Riak search subsystem have received on this node in the last minute
+`riak_search_vnodeq_total` | Total number of unprocessed messages all vnode message queues in the Riak search subsystem have received on this node since it was started
+`riak_search_vnodes_running` | Total number of vnodes currently running in the Riak search subsystem
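The first five statistics in the table are ordinary summary statistics over per-vnode message-queue lengths. Given a list of queue-length samples from the last minute, they could be derived like this (the sample values are invented):

```python
import statistics

def vnodeq_summary(samples):
    """Summary statistics analogous to the riak_search_vnodeq_*
    stats: max, mean, median, and min over one minute of
    per-vnode message-queue length samples."""
    return {
        "max": max(samples),
        "mean": statistics.mean(samples),
        "median": statistics.median(samples),
        "min": min(samples),
    }

summary = vnodeq_summary([0, 1, 0, 2, 0])
```

A healthy node keeps all of these near zero; a climbing mean or median suggests Solr is indexing slower than vnodes are feeding it.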
Note that under ideal operation and with the exception of
`riak_search_vnodes_running` these statistics should contain low values
@@ -449,7 +449,7 @@ Check | Description
`ring_membership` | Cluster membership validity
`ring_preflists` | Check if the ring satisfies `n_val`
`ring_size` | Check if the ring size is valid
-`search` | Check whether Riak Search is enabled on all nodes
+`search` | Check whether Riak search is enabled on all nodes
The `--level` flag enables you to specify the log level and thus to
filter messages based on type. You can pass in any of the message types
diff --git a/content/riak/kv/2.1.4/using/cluster-operations/strong-consistency.md b/content/riak/kv/2.1.4/using/cluster-operations/strong-consistency.md
index 622b214fe9..be444683f1 100644
--- a/content/riak/kv/2.1.4/using/cluster-operations/strong-consistency.md
+++ b/content/riak/kv/2.1.4/using/cluster-operations/strong-consistency.md
@@ -14,7 +14,7 @@ toc: true
Please Note:
-Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
## Monitoring Strong Consistency
diff --git a/content/riak/kv/2.1.4/using/reference/handoff.md b/content/riak/kv/2.1.4/using/reference/handoff.md
index 33a013c360..723fc41325 100644
--- a/content/riak/kv/2.1.4/using/reference/handoff.md
+++ b/content/riak/kv/2.1.4/using/reference/handoff.md
@@ -121,7 +121,7 @@ handoff.use_background_manager = on
### Maximum Rejects
-If you're using Riak features such as [Riak Search](/riak/kv/2.1.4/developing/usage/search/),
+If you're using Riak features such as [Riak search](/riak/kv/2.1.4/developing/usage/search/),
those subsystems can block handoff of primary key/value data, i.e. data
that you interact with via normal reads and writes.
diff --git a/content/riak/kv/2.1.4/using/reference/search.md b/content/riak/kv/2.1.4/using/reference/search.md
index 2c833220ca..2651c12b77 100644
--- a/content/riak/kv/2.1.4/using/reference/search.md
+++ b/content/riak/kv/2.1.4/using/reference/search.md
@@ -19,14 +19,14 @@ aliases:
> **Note on Search 2.0 vs. Legacy Search**
>
-> This document refers to the new Riak Search 2.0 with
+> This document refers to the new Riak search 2.0 with
[Solr](http://lucene.apache.org/solr/) integration (codenamed
-Yokozuna). For information about the deprecated Riak Search, visit [the old Using Riak Search docs](http://docs.basho.com/riak/1.4.10/dev/using/search/).
+Yokozuna). For information about the deprecated Riak search, visit [the old Using Riak search docs](http://docs.basho.com/riak/1.4.10/dev/using/search/).
-The project that implements Riak Search is codenamed Yokozuna. This is a
+The project that implements Riak search is codenamed Yokozuna. This is a
more detailed overview of the concepts and reasons behind the design of
Yokozuna, for those interested. If you're simply looking to use Riak
-Search, you should check out the [Using Search](/riak/kv/2.1.4/developing/usage/search) document.
+search, you should check out the [Using Search](/riak/kv/2.1.4/developing/usage/search) document.
@@ -35,30 +35,30 @@ Search, you should check out the [Using Search](/riak/kv/2.1.4/developing/usage/
In Erlang OTP, an "application" is a group of modules and Erlang
processes which together perform a specific task. The word application
is confusing because most people think of an application as an entire
-program such as Emacs or Photoshop. But Riak Search is just a sub-system
+program such as Emacs or Photoshop. But Riak search is just a sub-system
in Riak itself. Erlang applications are often stand-alone, but Riak
-Search is more like an appendage of Riak. It requires other subsystems
+search is more like an appendage of Riak. It requires other subsystems
like Riak Core and KV, but also extends their functionality by providing
search capabilities for KV data.
-The purpose of Riak Search is to bring more sophisticated and robust
+The purpose of Riak search is to bring more sophisticated and robust
query and search support to Riak. Many people consider Lucene and
programs built on top of it, such as Solr, as the standard for
open-source search. There are many successful applications built on
Lucene/Solr, and it sets the standard for the feature set that
developers and users expect. Meanwhile, Riak has a great story as a
-highly-available, distributed key/value store. Riak Search takes
+highly-available, distributed key/value store. Riak search takes
advantage of the fact that Riak already knows how to do the distributed
bits, combining its feature set with that of Solr to leverage the
strengths of each.
-Riak Search is a mediator between Riak and Solr. There is nothing
+Riak search is a mediator between Riak and Solr. There is nothing
stopping a user from deploying these two programs separately, but this
would leave the user responsible for the glue between them. That glue
can be tricky to write. It requires dealing with monitoring, querying,
indexing, and dissemination of information.
-Unlike Solr by itself, Riak Search knows how to do all of the following:
+Unlike Solr by itself, Riak search knows how to do all of the following:
* Listen for changes in key/value (KV) data and to make the appropriate
changes to indexes that live in Solr. It also knows how to take a user
@@ -75,13 +75,13 @@ system (OS) process running a JVM which hosts Solr on the Jetty
application server. This OS process is a child of the Erlang OS process
running Riak.
-Riak Search has a `gen_server` process which monitors the JVM OS
+Riak search has a `gen_server` process which monitors the JVM OS
process. The code for this server is in `yz_solr_proc`. When the JVM
process crashes, this server crashes, causing its supervisor to restart
it.
If there is more than 1 restart in 45 seconds, the entire Riak node will
-be shut down. If Riak Search is enabled and Solr cannot function for
+be shut down. If Riak search is enabled and Solr cannot function for
some reason, the Riak node needs to go down so that the user will notice
and take corrective action.
@@ -91,7 +91,7 @@ This double monitoring along with the crash semantics means that neither
process may exist without the other. They are either both up or both
down.
-All other communication between Riak Search and Solr is performed via
+All other communication between Riak search and Solr is performed via
HTTP, including querying, indexing, and administration commands. The
ibrowse Erlang HTTP client is used to manage these communications as
both it and the Jetty container hosting Solr pool HTTP connections,
@@ -105,11 +105,11 @@ contains index entries for objects. Each such index maintains its own
set of files on disk---a critical difference from Riak KV, in which a
bucket is a purely logical entity and not physically disjoint at all. A
Solr index requires significantly less disk space than the corresponding
-legacy Riak Search index, depending on the Solr schema used.
+legacy Riak search index, depending on the Solr schema used.
Indexes may be associated with zero or more buckets. At creation time,
however, each index has no associated buckets---unlike the legacy Riak
-Search, indexes in the new Riak Search do not implicitly create bucket
+search, indexes in the new Riak search do not implicitly create bucket
associations, meaning that this must be done as a separate configuration
step.
@@ -141,7 +141,7 @@ flat collection of field-value pairs. "Flat" here means that a field's
value cannot be a nested structure of field-value pairs; the values are
treated as-is (non-composite is another way to say it).
-Because of this mismatch between KV and Solr, Riak Search must act as a
+Because of this mismatch between KV and Solr, Riak search must act as a
mediator between the two, meaning it must have a way to inspect a KV
object and create a structure which Solr can ingest for indexing. In
Solr this structure is called a **document**. This task of creating a
diff --git a/content/riak/kv/2.1.4/using/reference/secondary-indexes.md b/content/riak/kv/2.1.4/using/reference/secondary-indexes.md
index 92e47757bd..b79d979cdd 100644
--- a/content/riak/kv/2.1.4/using/reference/secondary-indexes.md
+++ b/content/riak/kv/2.1.4/using/reference/secondary-indexes.md
@@ -18,11 +18,11 @@ aliases:
[usage bucket types]: /riak/kv/2.1.4/developing/usage/bucket-types
[use ref strong consistency]: /riak/kv/2.1.4/using/reference/strong-consistency
-> **Note: Riak Search preferred for querying**
+> **Note: Riak search preferred for querying**
>
> If you're interested in non-primary-key-based querying in Riak, i.e. if
you're looking to go beyond straightforward K/V operations, we now
-recommend [Riak Search](/riak/kv/2.1.4/developing/usage/search/) rather than secondary indexes for a variety of reasons. Riak Search has a far more capacious querying API and can be used with all of Riak's storage backends.
+recommend [Riak search](/riak/kv/2.1.4/developing/usage/search/) rather than secondary indexes for a variety of reasons. Riak search has a far more capacious querying API and can be used with all of Riak's storage backends.
This document provides implementation and other details for Riak's
[secondary indexes](/riak/kv/2.1.4/developing/usage/secondary-indexes/) \(2i) feature.
diff --git a/content/riak/kv/2.1.4/using/reference/statistics-monitoring.md b/content/riak/kv/2.1.4/using/reference/statistics-monitoring.md
index 0d4505870e..9fb1cb5d17 100644
--- a/content/riak/kv/2.1.4/using/reference/statistics-monitoring.md
+++ b/content/riak/kv/2.1.4/using/reference/statistics-monitoring.md
@@ -134,7 +134,7 @@ Metric | Also | Notes
:------|:-----|:------------------
```node_get_fsm_siblings_mean``` | ```_median```, ```_95```, ```_99```, ```_100``` | Number of siblings encountered during all GET operations by this node within the last minute. Watch for abnormally high sibling counts, especially max ones.
```node_get_fsm_objsize_mean``` | ```_median```, ```_95```, ```_99```, ```_100``` | Object size encountered by this node within the last minute. Abnormally large objects (especially paired with high sibling counts) can indicate sibling explosion.
-```riak_search_vnodeq_mean``` | ```_median```, ```_95```, ```_99```, ```_100``` | Number of unprocessed messages in the vnode message queues of the Riak Search subsystem on this node in the last minute. The queues give you an idea of how backed up Solr is getting.
+```riak_search_vnodeq_mean``` | ```_median```, ```_95```, ```_99```, ```_100``` | Number of unprocessed messages in the vnode message queues of the Riak search subsystem on this node in the last minute. The queues give you an idea of how backed up Solr is getting.
```search_index_fail_one``` | | Number of "Failed to index document" errors Search encountered for the last minute
```pbc_active``` | | Number of currently active protocol buffer connections
```pbc_connects``` | | Number of new protocol buffer connections established during the last minute
diff --git a/content/riak/kv/2.1.4/using/reference/strong-consistency.md b/content/riak/kv/2.1.4/using/reference/strong-consistency.md
index 29e1bf3083..f5eb8e73e6 100644
--- a/content/riak/kv/2.1.4/using/reference/strong-consistency.md
+++ b/content/riak/kv/2.1.4/using/reference/strong-consistency.md
@@ -17,7 +17,7 @@ toc: true
Please Note:
-Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
Riak was originally designed as an [eventually consistent](/riak/kv/2.1.4/learn/concepts/eventual-consistency) system, fundamentally geared toward providing partition
diff --git a/content/riak/kv/2.1.4/using/repair-recovery/errors.md b/content/riak/kv/2.1.4/using/repair-recovery/errors.md
index 9523753414..5642ee6e8b 100644
--- a/content/riak/kv/2.1.4/using/repair-recovery/errors.md
+++ b/content/riak/kv/2.1.4/using/repair-recovery/errors.md
@@ -328,7 +328,7 @@ exit with reason bad return value: {error,eaddrinuse} in context start_error | A
exited with reason: eaddrnotavail in gen_server:init_it/6 line 320 | An error like this example can result when Riak cannot bind to the addresses specified in the configuration. In this case, you should verify HTTP and Protocol Buffers addresses in `app.config` and ensure that the ports being used are not in the privileged (1-1024) range as the `riak` user will not have access to such ports.
gen_server riak_core_capability terminated with reason: no function clause matching orddict:fetch('riak@192.168.2.2', []) line 72 | Error output like this example can indicate that a previously running Riak node with an original `-name` value in `vm.args` has been modified by simply changing the value in `vm.args` and not properly through `riak-admin cluster replace`.
** Configuration error: [FRAMEWORK-MIB]: missing context.conf file => generating a default file | This error is commonly encountered when starting Riak Enterprise without prior [SNMP](/riak/kv/2.1.4/using/reference/snmp) configuration.
-RPC to 'node@example.com' failed: {'EXIT', {badarg, [{ets,lookup, [schema_table,<<"search-example">>], []} {riak_search_config,get_schema,1, [{file,"src/riak_search_config.erl"}, {line,69}]} ...| This error can be caused when attempting to use Riak Search without first enabling it in each node's `app.config`. See the [configuration files][config reference] documentation for more information on enabling Riak Search.
+RPC to 'node@example.com' failed: {'EXIT', {badarg, [{ets,lookup, [schema_table,<<"search-example">>], []} {riak_search_config,get_schema,1, [{file,"src/riak_search_config.erl"}, {line,69}]} ...| This error can be caused when attempting to use Riak search without first enabling it in each node's `app.config`. See the [configuration files][config reference] documentation for more information on enabling Riak search.
### More
diff --git a/content/riak/kv/2.1.4/using/repair-recovery/repairs.md b/content/riak/kv/2.1.4/using/repair-recovery/repairs.md
index 74609b55a2..619924e4c1 100644
--- a/content/riak/kv/2.1.4/using/repair-recovery/repairs.md
+++ b/content/riak/kv/2.1.4/using/repair-recovery/repairs.md
@@ -57,7 +57,7 @@ riak-admin repair-2i kill
## Repairing Search Indexes
-Riak Search indexes currently have no form of anti-entropy (such as read-repair). Furthermore, for performance and load balancing reasons, Search reads from one random node. This means that when a replica loss has occurred, inconsistent results may be returned.
+Riak search indexes currently have no form of anti-entropy (such as read-repair). Furthermore, for performance and load balancing reasons, Search reads from one random node. This means that when a replica loss has occurred, inconsistent results may be returned.
### Running a Repair
diff --git a/content/riak/kv/2.1.4/using/repair-recovery/secondary-indexes.md b/content/riak/kv/2.1.4/using/repair-recovery/secondary-indexes.md
index 9184a47c7b..30f98c1cae 100644
--- a/content/riak/kv/2.1.4/using/repair-recovery/secondary-indexes.md
+++ b/content/riak/kv/2.1.4/using/repair-recovery/secondary-indexes.md
@@ -51,7 +51,7 @@ riak-admin repair-2i kill
## Repairing Search Indexes
-Riak Search indexes currently have no form of anti-entropy (such as read-repair). Furthermore, for performance and load balancing reasons, Search reads from one random node. This means that when a replica loss has occurred, inconsistent results may be returned.
+Riak search indexes currently have no form of anti-entropy (such as read-repair). Furthermore, for performance and load balancing reasons, Search reads from one random node. This means that when a replica loss has occurred, inconsistent results may be returned.
### Running a Repair
diff --git a/content/riak/kv/2.1.4/using/security/basics.md b/content/riak/kv/2.1.4/using/security/basics.md
index 07a6e52e9a..7b71ddab58 100644
--- a/content/riak/kv/2.1.4/using/security/basics.md
+++ b/content/riak/kv/2.1.4/using/security/basics.md
@@ -44,7 +44,7 @@ when turning on Riak security. Missing one of these steps will almost
certainly break your application, so make sure that you have done each
of the following **before** enabling security:
-1. Make certain that the original Riak Search (version 1) and link
+1. Make certain that the original Riak search (version 1) and link
walking are not required. Enabling security will break this
functionality. If you wish to use security and Search together, you
will need to use the [new Search feature](/riak/kv/2.1.4/developing/usage/search/).
@@ -487,11 +487,11 @@ Permission | Operation
### Search Query Permission (Riak Search version 1)
Security is incompatible with the original (and now deprecated) Riak
-Search. Riak Search version 1 will stop working if security is enabled.
+search. Riak search version 1 will stop working if security is enabled.
### Search Query Permissions (Riak Search version 2, aka Yokozuna)
-If you are using the new Riak Search, i.e. the Solr-compatible search
+If you are using the new Riak search, i.e. the Solr-compatible search
capabilities included with Riak versions 2.0 and greater, the following
search-related permissions can be granted/revoked:
@@ -508,8 +508,8 @@ disabled, you will get the following error:
>
> `{error,{unknown_permission,"search.query"}}`
>
-> More information on Riak Search and how to enable it can be found in the
-[Riak Search Settings](/riak/kv/2.1.4/configuring/search/) document.
+> More information on Riak search and how to enable it can be found in the
+[Riak search settings](/riak/kv/2.1.4/configuring/search/) document.
#### Usage Examples
diff --git a/content/riak/kv/2.2.0/configuring/reference.md b/content/riak/kv/2.2.0/configuring/reference.md
index 61024540f8..aefc4c13b7 100644
--- a/content/riak/kv/2.2.0/configuring/reference.md
+++ b/content/riak/kv/2.2.0/configuring/reference.md
@@ -1343,7 +1343,7 @@ Configurable parameters for intra-cluster, i.e. inter-node, [handoff][cluster op
handoff.max_rejects |
The maximum number of times that a secondary system within Riak,
-such as Riak Search, can block handoff
+such as Riak search, can block handoff
of primary key/value data. The approximate maximum duration that a vnode
can be blocked can be determined by multiplying this setting by
vnode_management_timer . If you want to prevent handoff from
@@ -1587,7 +1587,7 @@ if the JMX server crashes. |
> **Please Note:**
>
-> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
Riak's strong consistency feature has a variety of tunable parameters
that allow you to enable and disable strong consistency, modify the
@@ -2010,7 +2010,7 @@ only in Riak Enterprise 2.0 and later.
#### Upgrading Riak Search with `advanced.config`
-If you are upgrading to Riak 2.x and wish to upgrade to the new [Riak Search][use ref search]\(codename Yokozuna), you will need to enable
+If you are upgrading to Riak 2.x and wish to upgrade to the new [Riak search][use ref search] \(codenamed Yokozuna), you will need to enable
legacy Search while the upgrade is underway. You can add the following
snippet to your `advanced.config` configuration to do so:
diff --git a/content/riak/kv/2.2.0/configuring/search.md b/content/riak/kv/2.2.0/configuring/search.md
index fe77bf94d2..41ce2216c2 100644
--- a/content/riak/kv/2.2.0/configuring/search.md
+++ b/content/riak/kv/2.2.0/configuring/search.md
@@ -26,12 +26,12 @@ aliases:
[security index]: /riak/kv/2.2.0/using/security/
-This page covers how to use Riak Search (with
+This page covers how to use Riak search (with
[Solr](http://lucene.apache.org/solr/) integration).
For a simple reference of the available configs and their defaults, see the [configuration reference][config reference#search].
-If you are looking to develop on or with Riak Search, take a look at:
+If you are looking to develop on or with Riak search, take a look at:
* [Using Search][usage search]
* [Search Schema][usage search schema]
@@ -43,7 +43,7 @@ If you are looking to develop on or with Riak Search, take a look at:
We'll be walking through:
1. [Prerequisites](#prerequisites)
-2. [Enable Riak Search](#enabling-riak-search)
+2. [Enable Riak search](#enabling-riak-search)
3. [Search Configuration Settings](#search-config-settings)
4. [Additional Solr Information](#more-on-solr)
@@ -60,7 +60,7 @@ page](http://www.oracle.com/technetwork/java/javase/documentation/index.html).
## Enabling Riak Search
-Riak Search is not enabled by default, so you must enable it in every
+Riak search is not enabled by default, so you must enable it in every
node's [configuration file][config reference] as follows:
```riak.conf
@@ -70,7 +70,7 @@ search = on
## Search Config Settings
-You will find all the Riak Search configuration settings in riak.conf. Setting `search` to `on` is required, but other search settings are optional. A handy reference list of these parameters can be found in our [configuration files][config reference#search] documentation.
+You will find all the Riak search configuration settings in `riak.conf`. Setting `search` to `on` is required, but other search settings are optional. A handy reference list of these parameters can be found in our [configuration files][config reference#search] documentation.
### `search`
@@ -80,7 +80,7 @@ Valid values: `on` or `off`
### `search.anti_entropy.data_dir`
-The directory in which Riak Search stores files related to [active anti-entropy][glossary aae]; defaults to `./data/yz_anti_entropy`.
+The directory in which Riak search stores files related to [active anti-entropy][glossary aae]; defaults to `./data/yz_anti_entropy`.
Valid values: a directory
@@ -198,7 +198,7 @@ Valid values: Integer
The queue high water mark; defaults to `1000`.
-If the total number of queued messages in a Solrq worker instance exceed this limit, then the calling vnode will be blocked until the total number falls below this limit. This parameter exercises flow control between Riak KV and the Riak Search batching subsystem, if writes into Solr start to fall behind.
+If the total number of queued messages in a Solrq worker instance exceeds this limit, then the calling vnode will be blocked until the total number falls below this limit. This parameter exercises flow control between Riak KV and the Riak search batching subsystem, if writes into Solr start to fall behind.
Valid values: Integer
@@ -250,7 +250,7 @@ Valid values: Integer with time units (e.g. 2m)
## More on Solr
### Solr JVM and Ports
-Riak Search runs one Solr process per node to manage its indexing and
+Riak search runs one Solr process per node to manage its indexing and
search functionality. While the underlying project manages
index distribution, node coverage for queries, active anti-entropy
(AAE), and JVM process management, you should provide plenty of RAM and disk space for running both Riak and the JVM running Solr. We recommend a minimum of 6GB of RAM per node.
diff --git a/content/riak/kv/2.2.0/configuring/strong-consistency.md b/content/riak/kv/2.2.0/configuring/strong-consistency.md
index 3c58a254e2..79754743be 100644
--- a/content/riak/kv/2.2.0/configuring/strong-consistency.md
+++ b/content/riak/kv/2.2.0/configuring/strong-consistency.md
@@ -38,7 +38,7 @@ toc: true
> **Please Note:**
>
-> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
This document provides information on configuring and monitoring a Riak
cluster's optional strong consistency subsystem. Documentation for
diff --git a/content/riak/kv/2.2.0/developing/api/http/delete-search-index.md b/content/riak/kv/2.2.0/developing/api/http/delete-search-index.md
index f9832889b8..0cb63d0b43 100644
--- a/content/riak/kv/2.2.0/developing/api/http/delete-search-index.md
+++ b/content/riak/kv/2.2.0/developing/api/http/delete-search-index.md
@@ -15,7 +15,7 @@ aliases:
- /riak/kv/2.2.0/dev/references/http/delete-search-index
---
-Deletes a Riak Search index.
+Deletes a Riak search index.
## Request
diff --git a/content/riak/kv/2.2.0/developing/api/http/fetch-search-index.md b/content/riak/kv/2.2.0/developing/api/http/fetch-search-index.md
index c78df42599..95d23b4209 100644
--- a/content/riak/kv/2.2.0/developing/api/http/fetch-search-index.md
+++ b/content/riak/kv/2.2.0/developing/api/http/fetch-search-index.md
@@ -15,7 +15,7 @@ aliases:
- /riak/kv/2.2.0/dev/references/http/fetch-search-index
---
-Retrieves information about a Riak Search [index](/riak/kv/2.2.0/developing/usage/search/#simple-setup).
+Retrieves information about a Riak search [index](/riak/kv/2.2.0/developing/usage/search/#simple-setup).
## Request
diff --git a/content/riak/kv/2.2.0/developing/api/http/fetch-search-schema.md b/content/riak/kv/2.2.0/developing/api/http/fetch-search-schema.md
index ecd2fc74f6..06e19c89f0 100644
--- a/content/riak/kv/2.2.0/developing/api/http/fetch-search-schema.md
+++ b/content/riak/kv/2.2.0/developing/api/http/fetch-search-schema.md
@@ -35,4 +35,4 @@ GET /search/schema/
## Response
If the schema is found, Riak will return the contents of the schema as
-XML (all Riak Search schemas are XML).
+XML (all Riak search schemas are XML).
diff --git a/content/riak/kv/2.2.0/developing/api/http/search-index-info.md b/content/riak/kv/2.2.0/developing/api/http/search-index-info.md
index f0f7daf45b..74480d7c4d 100644
--- a/content/riak/kv/2.2.0/developing/api/http/search-index-info.md
+++ b/content/riak/kv/2.2.0/developing/api/http/search-index-info.md
@@ -47,6 +47,6 @@ Below is the example output if there is one Search index, called
#### Typical Error Codes
-* `404 Object Not Found` --- Typically returned if Riak Search is not
+* `404 Object Not Found` --- Typically returned if Riak search is not
currently enabled on the node
* `503 Service Unavailable` --- The request timed out internally
diff --git a/content/riak/kv/2.2.0/developing/api/http/store-search-index.md b/content/riak/kv/2.2.0/developing/api/http/store-search-index.md
index 0355647835..46783e5f12 100644
--- a/content/riak/kv/2.2.0/developing/api/http/store-search-index.md
+++ b/content/riak/kv/2.2.0/developing/api/http/store-search-index.md
@@ -15,7 +15,7 @@ aliases:
- /riak/kv/2.2.0/dev/references/http/store-search-index
---
-Creates a new Riak Search [index](/riak/kv/2.2.0/developing/usage/search/#simple-setup).
+Creates a new Riak search [index](/riak/kv/2.2.0/developing/usage/search/#simple-setup).
## Request
diff --git a/content/riak/kv/2.2.0/developing/api/http/store-search-schema.md b/content/riak/kv/2.2.0/developing/api/http/store-search-schema.md
index ef7db5d322..f9251920d2 100644
--- a/content/riak/kv/2.2.0/developing/api/http/store-search-schema.md
+++ b/content/riak/kv/2.2.0/developing/api/http/store-search-schema.md
@@ -44,7 +44,7 @@ curl -XPUT http://localhost:8098/search/schema/my_custom_schema \
* `400 Bad Request` --- The schema cannot be created because there is
something wrong with the schema itself, e.g. an XML formatting error
- that makes Riak Search unable to parse the schema
+ that makes Riak search unable to parse the schema
* `409 Conflict` --- The schema cannot be created because there is
already a schema with that name
* `503 Service Unavailable` --- The request timed out internally
diff --git a/content/riak/kv/2.2.0/developing/api/protocol-buffers/yz-index-get.md b/content/riak/kv/2.2.0/developing/api/protocol-buffers/yz-index-get.md
index 835504409d..e36be501da 100644
--- a/content/riak/kv/2.2.0/developing/api/protocol-buffers/yz-index-get.md
+++ b/content/riak/kv/2.2.0/developing/api/protocol-buffers/yz-index-get.md
@@ -15,7 +15,7 @@ aliases:
- /riak/kv/2.2.0/dev/references/protocol-buffers/yz-index-get
---
-Retrieve a search index from Riak Search.
+Retrieve a search index from Riak search.
## Request
diff --git a/content/riak/kv/2.2.0/developing/api/protocol-buffers/yz-schema-get.md b/content/riak/kv/2.2.0/developing/api/protocol-buffers/yz-schema-get.md
index 032322a718..532846820b 100644
--- a/content/riak/kv/2.2.0/developing/api/protocol-buffers/yz-schema-get.md
+++ b/content/riak/kv/2.2.0/developing/api/protocol-buffers/yz-schema-get.md
@@ -15,7 +15,7 @@ aliases:
- /riak/kv/2.2.0/dev/references/protocol-buffers/yz-schema-get
---
-Fetch a [search schema](/riak/kv/2.2.0/developing/usage/search-schemas) from Riak Search.
+Fetch a [search schema](/riak/kv/2.2.0/developing/usage/search-schemas) from Riak search.
## Request
diff --git a/content/riak/kv/2.2.0/developing/app-guide.md b/content/riak/kv/2.2.0/developing/app-guide.md
index 36d0eba99c..49540769a6 100644
--- a/content/riak/kv/2.2.0/developing/app-guide.md
+++ b/content/riak/kv/2.2.0/developing/app-guide.md
@@ -147,22 +147,22 @@ well as relevant links to Basho documentation.
## Search
-Riak Search provides you with [Apache
+Riak search provides you with [Apache
Solr](http://lucene.apache.org/solr/)-powered full-text indexing and
querying on top of the scalability, fault tolerance, and operational
-simplicity of Riak. Our motto for Riak Search: **Write it like Riak.
+simplicity of Riak. Our motto for Riak search: **Write it like Riak.
Query it like Solr**. That is, you can store objects in Riak [like normal][usage create objects] and run full-text queries on those objects later on
using the Solr API.
-* [Using Search][usage search] --- Getting started with Riak Search
+* [Using Search][usage search] --- Getting started with Riak search
* [Search Details][use ref search] --- A detailed overview of the concepts and design
- consideration behind Riak Search
+  considerations behind Riak search
* [Search Schema][usage search schema] --- How to create custom schemas for extracting data
- from Riak Search
+ from Riak search
### When to Use Search
-* **When you need a rich querying API** --- Riak Search gives you access
+* **When you need a rich querying API** --- Riak search gives you access
to the entirety of [Solr](http://lucene.apache.org/solr/)'s extremely
broad API, which enables you to query on the basis of wildcards,
strings, booleans, geolocation, ranges, language-specific fulltext,
@@ -214,7 +214,7 @@ own.
> **Note**:
>
-> Riak Data Types can be used in conjunction with Riak Search,
+> Riak Data Types can be used in conjunction with Riak search,
meaning that the data stored in counters, sets, and maps can be indexed
and searched just like any other data in Riak. Documentation on Data
Types and Search is coming soon.
@@ -278,13 +278,13 @@ or you can write and run your own MapReduce jobs in
### When Not to Use MapReduce
* **When another Riak feature will do** --- Before even considering
- using MapReduce, you should thoroughly investigate [Riak Search][usage search] or [secondary indexes][usage 2i] as possible
+ using MapReduce, you should thoroughly investigate [Riak search][usage search] or [secondary indexes][usage 2i] as possible
solutions to your needs.
In general, you should not think of MapReduce as, for example, Hadoop
within Riak. While it can be useful for certain types of
non-primary-key-based queries, it is neither a "Big Data" processing
-tool nor an indexing mechanism nor a replacement for [Riak Search][usage search]. If you do need a tool like Hadoop or Apache Spark, you should
+tool nor an indexing mechanism nor a replacement for [Riak search][usage search]. If you do need a tool like Hadoop or Apache Spark, you should
consider using Riak in conjunction with a more suitable data processing
tool.
@@ -295,7 +295,7 @@ following problem: how do I know which keys I should look for? Secondary
indexes (2i) provide a solution to this problem, enabling you to tag
objects with either binary or integer metadata and then query Riak for
all of the keys that share specific tags. 2i is especially useful if
-you're storing binary data that is opaque to features like [Riak Search][usage search].
+you're storing binary data that is opaque to features like [Riak search][usage search].
* [Using Secondary Indexes][usage 2i] --- A general guide to using 2i, along
with code samples and information on 2i features like pagination,
@@ -323,7 +323,7 @@ you're storing binary data that is opaque to features like [Riak Search][usage s
One thing to always bear in mind is that Riak enables you to mix and
match a wide variety of approaches in a single cluster. You can use
basic CRUD operations for some of your data, index some of your data to
-be queried by Riak Search, use Riak Data Types for another subset, etc.
+be queried by Riak search, use Riak Data Types for another subset, etc.
You are always free to use a wide array of Riak features---or you can
use none at all and stick to key/value operations.
diff --git a/content/riak/kv/2.2.0/developing/app-guide/advanced-mapreduce.md b/content/riak/kv/2.2.0/developing/app-guide/advanced-mapreduce.md
index 1f5ccd089c..f8e3666b0e 100644
--- a/content/riak/kv/2.2.0/developing/app-guide/advanced-mapreduce.md
+++ b/content/riak/kv/2.2.0/developing/app-guide/advanced-mapreduce.md
@@ -77,7 +77,7 @@ MapReduce should generally be treated as a fallback rather than a
standard part of an application. There are often ways to model data
such that dynamic queries become single key retrievals, which are
dramatically faster and more reliable in Riak, and tools such as Riak
-Search and 2i are simpler to use and may place less strain on a
+search and 2i are simpler to use and may place less strain on a
cluster.
### R=1
diff --git a/content/riak/kv/2.2.0/developing/app-guide/strong-consistency.md b/content/riak/kv/2.2.0/developing/app-guide/strong-consistency.md
index efa696144c..99fc4943ed 100644
--- a/content/riak/kv/2.2.0/developing/app-guide/strong-consistency.md
+++ b/content/riak/kv/2.2.0/developing/app-guide/strong-consistency.md
@@ -37,7 +37,7 @@ aliases:
> **Please Note:**
>
-> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+> Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
In versions 2.0 and later, Riak allows you to create buckets that
provide [strong consistency][use ref strong consistency] guarantees for the data stored within
diff --git a/content/riak/kv/2.2.0/developing/data-modeling.md b/content/riak/kv/2.2.0/developing/data-modeling.md
index 80d92a02e3..e9d25c25f1 100644
--- a/content/riak/kv/2.2.0/developing/data-modeling.md
+++ b/content/riak/kv/2.2.0/developing/data-modeling.md
@@ -140,7 +140,7 @@ For storing log data from different systems, you could use unique
buckets for each system (e.g. `system1_log_data`, `system2_log_data`,
etc.) and write associated logs to the corresponding buckets. To
analyze that data, you could use Riak's MapReduce system for aggregation
-tasks, such as summing the counts of records for a date or Riak Search
+tasks, such as summing the counts of records for a date or Riak search
for more robust, text-based queries.
### Log Data Complex Case
@@ -224,7 +224,7 @@ For simple retrieval of a specific account, a user ID (plus perhaps a
secondary index on a username or email) is enough. If you foresee the
need to make queries on additional user attributes (e.g. creation time,
user type, or region), plan ahead and either set up additional secondary
-indexes or consider using [Riak Search](/riak/kv/2.2.0/developing/usage/search/) to index the JSON
+indexes or consider using [Riak search](/riak/kv/2.2.0/developing/usage/search/) to index the JSON
contents of the user account.
### User Accounts Community Examples
@@ -310,7 +310,7 @@ In Riak, you can store content of any kind, from HTML files to plain
text to JSON or XML or another document type entirely. Keep in mind that
data in Riak is opaque, with the exception of [Riak Data Types](/riak/kv/2.2.0/developing/data-types),
and so Riak won't "know" about the object unless it is indexed
-[using Riak Search](/riak/kv/2.2.0/developing/usage/search/) or [using secondary indexes](/riak/kv/2.2.0/developing/usage/secondary-indexes/).
+[using Riak search](/riak/kv/2.2.0/developing/usage/search/) or [using secondary indexes](/riak/kv/2.2.0/developing/usage/secondary-indexes/).
### Articles et al Complex Case
@@ -329,7 +329,7 @@ with comments would require your application to call from the posts
and comments buckets to assemble the view.
Other possible cases may involve performing operations on content beyond
-key/value pairs. [Riak Search](/riak/kv/2.2.0/developing/usage/search/) is recommended for use cases
+key/value pairs. [Riak search](/riak/kv/2.2.0/developing/usage/search/) is recommended for use cases
involving full-text search. For lighter-weight querying,
[using secondary indexes](/riak/kv/2.2.0/developing/usage/secondary-indexes/) \(2i) enables you to add metadata to objects to
either query for exact matches or to perform range queries. 2i also
diff --git a/content/riak/kv/2.2.0/developing/data-types.md b/content/riak/kv/2.2.0/developing/data-types.md
index 287e5f7335..630e2be219 100644
--- a/content/riak/kv/2.2.0/developing/data-types.md
+++ b/content/riak/kv/2.2.0/developing/data-types.md
@@ -266,7 +266,7 @@ All the examples use the bucket type names from above (`counters`, `sets`, and `
Riak data types can be searched like any other object, but with the
added benefit that your data type is indexed as a different type by Solr,
-the search platform behind Riak Search.
+the search platform behind Riak search.
In our Search documentation we offer a [full tutorial](../usage/searching-data-types) as well as a variety of [examples](../usage/search/#data-types-and-search-examples), including code
samples from each of our official client libraries.
diff --git a/content/riak/kv/2.2.0/developing/usage.md b/content/riak/kv/2.2.0/developing/usage.md
index fd9b3f187e..5850027431 100644
--- a/content/riak/kv/2.2.0/developing/usage.md
+++ b/content/riak/kv/2.2.0/developing/usage.md
@@ -107,7 +107,7 @@ Tutorial on using Riak KV as a document store.
#### [Custom Extractors](./custom-extractors)
-Details on creating and registering custom extractors with Riak Search.
+Details on creating and registering custom extractors with Riak search.
[Learn More >>](./custom-extractors)
diff --git a/content/riak/kv/2.2.0/developing/usage/custom-extractors.md b/content/riak/kv/2.2.0/developing/usage/custom-extractors.md
index ec279957ed..fdc38e0594 100644
--- a/content/riak/kv/2.2.0/developing/usage/custom-extractors.md
+++ b/content/riak/kv/2.2.0/developing/usage/custom-extractors.md
@@ -15,8 +15,8 @@ aliases:
- /riak/kv/2.2.0/dev/search/custom-extractors
---
-Solr, and by extension Riak Search, has default extractors for a wide
-variety of data types, including JSON, XML, and plaintext. Riak Search
+Solr, and by extension Riak search, has default extractors for a wide
+variety of data types, including JSON, XML, and plaintext. Riak search
ships with the following extractors:
Content Type | Erlang Module
@@ -30,7 +30,7 @@ No specified type | `yz_noop_extractor`
There are also built-in extractors for [Riak Data Types](/riak/kv/2.2.0/developing/usage/searching-data-types).
If you're working with a data format that does not have a default Solr
-extractor, you can create your own and register it with Riak Search.
+extractor, you can create your own and register it with Riak search.
We'll show you how to do so by way of example.
## The Extractor Interface
@@ -195,7 +195,7 @@ extractor has been successfully registered.
## Verifying Our Custom Extractor
-Now that Riak Search knows how to decode and extract HTTP header packet
+Now that Riak search knows how to decode and extract HTTP header packet
data, let's store some in Riak and then query it. We'll put the example
packet data from above in a `google_packet.bin` file. Then, we'll `PUT`
that binary to Riak's `/search/extract` endpoint:
diff --git a/content/riak/kv/2.2.0/developing/usage/document-store.md b/content/riak/kv/2.2.0/developing/usage/document-store.md
index c14a0858e0..578c9a6e4f 100644
--- a/content/riak/kv/2.2.0/developing/usage/document-store.md
+++ b/content/riak/kv/2.2.0/developing/usage/document-store.md
@@ -16,18 +16,18 @@ aliases:
---
Although Riak wasn't explicitly created as a document store, two
-features recently added to Riak---[Riak Search](/riak/kv/2.2.0/developing/usage/search/) and [Riak Data Types](/riak/kv/2.2.0/developing/data-types/)---make it possible to use Riak as a
+features recently added to Riak---[Riak search](/riak/kv/2.2.0/developing/usage/search/) and [Riak Data Types](/riak/kv/2.2.0/developing/data-types/)---make it possible to use Riak as a
highly scalable document store with rich querying capabilities. In this
tutorial, we'll build a basic implementation of a document store using
[Riak maps](/riak/kv/2.2.0/developing/data-types/#maps).
## Basic Approach
-Riak Search enables you to implement a document store in Riak in a
+Riak search enables you to implement a document store in Riak in a
variety of ways. You could, for example, store and query JSON objects or
XML and then retrieve them later via Solr queries. In this tutorial,
however, we will store data in [Riak maps](/riak/kv/2.2.0/developing/data-types/#maps),
-index that data using Riak Search, and then run Solr queries against
+index that data using Riak search, and then run Solr queries against
those stored objects.
You can think of these Search indexes as **collections**. Each indexed
@@ -65,7 +65,7 @@ Date posted | Register | Datetime
Whether the post is currently in draft form | Flag | Boolean
Before we start actually creating and storing blog posts, let's set up
-Riak Search with an appropriate index and schema.
+Riak search with an appropriate index and schema.
## Creating a Schema and Index
@@ -209,7 +209,7 @@ curl -XPUT $RIAK_HOST/search/index/blog_posts \
Collections are not a concept that is native to Riak but we can easily
mimic collections by thinking of a bucket type as a collection. When we
-associate a bucket type with a Riak Search index, all of the objects
+associate a bucket type with a Riak search index, all of the objects
stored in any bucket of that bucket type will be queryable on the basis
of that one index. For this tutorial, we'll create a bucket type called
`cms` and think of that as a collection. We could also restrict our
diff --git a/content/riak/kv/2.2.0/developing/usage/search-schemas.md b/content/riak/kv/2.2.0/developing/usage/search-schemas.md
index 48344c673e..278587c811 100644
--- a/content/riak/kv/2.2.0/developing/usage/search-schemas.md
+++ b/content/riak/kv/2.2.0/developing/usage/search-schemas.md
@@ -19,21 +19,21 @@ aliases:
> **Note on Search 2.0 vs. Legacy Search**
>
-> This document refers to the new Riak Search 2.0 with
+> This document refers to the new Riak search 2.0 with
[Solr](http://lucene.apache.org/solr/) integration (codenamed
-Yokozuna). For information about the deprecated Riak Search, visit [the old Using Riak Search docs](http://docs.basho.com/riak/1.4.10/dev/using/search/).
+Yokozuna). For information about the deprecated Riak search, visit [the old Using Riak search docs](http://docs.basho.com/riak/1.4.10/dev/using/search/).
-Riak Search is built for ease of use, allowing you to write values into
-Riak and query for values using Solr. Riak Search does a lot of work
+Riak search is built for ease of use, allowing you to write values into
+Riak and query for values using Solr. Riak search does a lot of work
under the hood to convert your values---plain text, JSON, XML, [Riak Data Types](/riak/kv/2.2.0/developing/data-types/), and [more](/riak/kv/2.2.0/developing/usage/custom-extractors)---into something that can be indexed and searched later.
Nonetheless, you must still instruct Riak/Solr how to index a value. Are
you providing an array of strings? An integer? A date? Is your text in
-English or Russian? You can provide such instructions to Riak Search by
+English or Russian? You can provide such instructions to Riak search by
defining a Solr **schema**.
## The Default Schema
-Riak Search comes bundled with a default schema named `_yz_default`. The
+Riak search comes bundled with a default schema named `_yz_default`. The
default schema covers a wide range of possible field types. You can find
the default schema [on GitHub](https://raw.github.com/basho/yokozuna/develop/priv/default_schema.xml).
While using the default schema provides an easy path to starting
@@ -47,7 +47,7 @@ amounts of disk space, so pay special attention to those indexes.
We'll show you how you can create custom schemas by way of example.
Let's say that you have already created a schema named `cartoons` in a
file named `cartoons.xml`. This would register the custom schema in Riak
-Search:
+search:
```java
import org.apache.commons.io.FileUtils;
@@ -123,11 +123,11 @@ curl -XPUT http://localhost:8098/search/schema/cartoons \
The first step in creating a custom schema is to define exactly what
fields you must index. Part of that step is understanding how Riak
-Search extractors function.
+search extractors function.
### Extractors
-In Riak Search, extractors are modules responsible for pulling out a
+In Riak search, extractors are modules responsible for pulling out a
list of fields and values from a Riak object. How this is achieved
depends on the object's content type, but the two common cases are JSON
and XML, which operate similarly. Our examples here will use JSON.
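As an illustrative sketch only (the real extractor is an Erlang module inside Yokozuna, and its exact field-naming rules should be checked against the Yokozuna source), a JSON extractor conceptually flattens nested objects into field/value pairs like this — the separator and helper name are assumptions, not the actual implementation:

```python
import json

def extract_fields(value, prefix="", sep="."):
    """Recursively flatten a JSON document into (field, value) pairs,
    the way a search extractor conceptually prepares data for Solr.
    The "." separator and naming scheme are assumed for illustration."""
    fields = []
    if isinstance(value, dict):
        for key, val in value.items():
            name = f"{prefix}{sep}{key}" if prefix else key
            fields.extend(extract_fields(val, name, sep))
    elif isinstance(value, list):
        # Each list element contributes values under the same field name
        for item in value:
            fields.extend(extract_fields(item, prefix, sep))
    else:
        fields.append((prefix, value))
    return fields

doc = json.loads('{"name": "ryan", "info": {"city": "baltimore", "visits": 127}}')
print(extract_fields(doc))
```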
@@ -175,21 +175,21 @@ Solr schemas can be very complex, containing many types and analyzers.
Refer to the [Solr 4.7 reference
guide](http://archive.apache.org/dist/lucene/solr/ref-guide/apache-solr-ref-guide-4.7.pdf)
for a complete list. You should be aware, however, that there are a few
-fields that are required by Riak Search in order to properly distribute
+fields that are required by Riak search in order to properly distribute
an object across a [cluster][concept clusters]. These fields are all prefixed
with `_yz`, which stands for
[Yokozuna](https://github.com/basho/yokozuna), the original code name
-for Riak Search.
+for Riak search.
Below is a bare minimum skeleton Solr Schema. It won't do much for you
-other than allow Riak Search to properly manage your stored objects.
+other than allow Riak search to properly manage your stored objects.
```xml
-
+
@@ -210,14 +210,14 @@ other than allow Riak Search to properly manage your stored objects.
```
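Since the XML skeleton above did not survive extraction here, the following is a reconstruction of the shape Yokozuna expects, built from the required `_yz_*` fields described in the table that follows — verify it against the bundled `_yz_default` schema before relying on any attribute:

```xml
<?xml version="1.0" encoding="UTF-8" ?>
<schema name="skeleton" version="1.5">
 <fields>
   <!-- All _yz_* fields are required by Riak search for object management -->
   <field name="_yz_id"   type="_yz_str" indexed="true" stored="true"  multiValued="false" required="true"/>
   <field name="_yz_ed"   type="_yz_str" indexed="true" stored="false" multiValued="false"/>
   <field name="_yz_pn"   type="_yz_str" indexed="true" stored="false" multiValued="false"/>
   <field name="_yz_fpn"  type="_yz_str" indexed="true" stored="false" multiValued="false"/>
   <field name="_yz_vtag" type="_yz_str" indexed="true" stored="false" multiValued="false"/>
   <field name="_yz_rk"   type="_yz_str" indexed="true" stored="true"  multiValued="false"/>
   <field name="_yz_rt"   type="_yz_str" indexed="true" stored="true"  multiValued="false"/>
   <field name="_yz_rb"   type="_yz_str" indexed="true" stored="true"  multiValued="false"/>
   <field name="_yz_err"  type="_yz_str" indexed="true" stored="false" multiValued="false"/>
 </fields>
 <uniqueKey>_yz_id</uniqueKey>
 <types>
   <fieldType name="_yz_str" class="solr.StrField" sortMissingLast="true"/>
 </types>
</schema>
```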
-If you're missing any of the above fields, Riak Search will reject your
+If you're missing any of the above fields, Riak search will reject your
custom schema. The value for `<uniqueKey>` _must_ be `_yz_id`.
In the table below, you'll find a description of the various required
fields. You'll rarely need to use any fields other than `_yz_rt` (bucket
type), `_yz_rb` (bucket) and `_yz_rk` (Riak key). On occasion, `_yz_err`
can be helpful if you suspect that your extractors are failing.
-Malformed JSON or XML will cause Riak Search to index a key and set
+Malformed JSON or XML will cause Riak search to index a key and set
`_yz_err` to 1, allowing you to reindex with proper values later.
Field | Name | Description
@@ -262,7 +262,7 @@ field, you also must set `multiValued` to `true`.
diff --git a/content/riak/kv/2.2.0/developing/usage/search.md b/content/riak/kv/2.2.0/developing/usage/search.md
index 56b1c0f0d2..276f39c01f 100644
--- a/content/riak/kv/2.2.0/developing/usage/search.md
+++ b/content/riak/kv/2.2.0/developing/usage/search.md
@@ -29,7 +29,7 @@ to properly store and later query for values.
3. **Bucket-index association** signals to Riak *when* to index values
(this also includes bucket type-index association)
-Riak Search must first be configured with a Solr schema so that Solr
+Riak search must first be configured with a Solr schema so that Solr
knows how to index value fields. If you don't define one, you're
provided with a default schema named `_yz_default`, which can be found
[on
@@ -38,7 +38,7 @@ GitHub](https://raw.githubusercontent.com/basho/yokozuna/develop/priv/default_sc
The examples in this document will presume the default. You can read
more about creating custom schemas in [Search Schema][usage search schema], which you'll likely want to use in a production environment.
-Next, you must create a named Solr index through Riak Search. This index
+Next, you must create a named Solr index through Riak search. This index
represents a collection of similar data that you connect with to perform
queries. When creating an index, you can optionally provide a schema. If
you do not, the default schema will be used. Here we'll `curl` create an
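As a sketch of the index-creation call described above (assuming the standard `/search/index/<name>` HTTP endpoint and a node listening locally on port 8098), the request can be assembled like this — `build_create_index_request` and the host value are illustrative, not part of any client library:

```python
import json
import urllib.request

# Hypothetical host; point this at your own cluster.
RIAK_HOST = "http://localhost:8098"

def build_create_index_request(index_name, schema="_yz_default"):
    """Build (but do not send) the PUT request that creates a search
    index; omitting "schema" from the body falls back to the default."""
    body = json.dumps({"schema": schema}).encode("utf-8")
    return urllib.request.Request(
        url=f"{RIAK_HOST}/search/index/{index_name}",
        data=body,
        method="PUT",
        headers={"Content-Type": "application/json"},
    )

req = build_create_index_request("famous")
print(req.get_method(), req.full_url)
```

Sending the request (`urllib.request.urlopen(req)`) requires a running node, so it is left out here.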
@@ -246,7 +246,7 @@ More information can be found in the [Solr
documentation](http://wiki.apache.org/solr/SolrPerformanceFactors).
With a Solr schema, index, and association in place (and possibly a
-security setup as well), we're ready to start using Riak Search. First,
+security setup as well), we're ready to start using Riak search. First,
populate the `cat` bucket with values, in this case information about
four cats: Liono, Cheetara, Snarf, and Panthro.
@@ -495,12 +495,12 @@ curl -XPUT $RIAK_HOST/types/animals/buckets/cats/keys/panthro \
```
If you've used Riak before, you may have noticed that this is no
-different from storing values without Riak Search. That's because we
-designed Riak Search with the following design goal in mind:
+different from storing values without Riak search. That's because we
+designed Riak search with the following design goal in mind:
#### Write it like Riak, query it like Solr
-But how does Riak Search know how to index values, given that you can
+But how does Riak search know how to index values, given that you can
store opaque values in Riak? For that, we employ extractors.
## Extractors
@@ -510,7 +510,7 @@ content type and convert it into a list of fields that can be indexed by
Solr. This is done transparently and automatically as part of the
indexing process. You can even create your own [custom extractors](/riak/kv/2.2.0/developing/usage/custom-extractors).
-Our current example uses the JSON extractor, but Riak Search also
+Our current example uses the JSON extractor, but Riak search also
extracts indexable fields from the following content types:
* JSON (`application/json`)
@@ -560,7 +560,7 @@ one of the default types. A full tutorial can be found in [Custom Search Extract
### Automatic Fields
-When a Riak object is indexed, Riak Search automatically inserts a few
+When a Riak object is indexed, Riak search automatically inserts a few
extra fields as well. These are necessary for a variety of technical
reasons, and for the most part you don't need to think about them.
However, there are a few fields which you may find useful:
@@ -1248,7 +1248,7 @@ curl "$RIAK_HOST/search/query/famous?wt=json&q=*:*&start=$START&rows=$ROWS_PER_P
### Pagination Warning
-Distributed pagination in Riak Search cannot be used reliably when
+Distributed pagination in Riak search cannot be used reliably when
sorting on fields that can have different values per replica of the same
object, namely `score` and `_yz_id`. In the case of sorting by these
fields, you may receive redundant objects. In the case of `score`, the
@@ -1272,14 +1272,14 @@ fix this shortcoming in a future version of Riak.
### MapReduce
-Riak Search allows for piping search results as inputs for
+Riak search allows for piping search results as inputs for
[MapReduce](/riak/kv/2.2.0/developing/usage/mapreduce/) jobs. This is a useful cross-section for
performing post-calculations of results or aggregations of ad-hoc
-queries. The Riak Search MapReduce integration works similarly to
+queries. The Riak search MapReduce integration works similarly to
regular MapReduce, with the notable exception that your input is not a
bucket, but rather index and query arguments to the `yokozuna` module
and `mapred_search` function (an Erlang `module:function` pair that adds
-the Riak Search hook to MapReduce).
+the Riak search hook to MapReduce).
```json
{
diff --git a/content/riak/kv/2.2.0/developing/usage/secondary-indexes.md b/content/riak/kv/2.2.0/developing/usage/secondary-indexes.md
index e2c0320056..e61d0576d3 100644
--- a/content/riak/kv/2.2.0/developing/usage/secondary-indexes.md
+++ b/content/riak/kv/2.2.0/developing/usage/secondary-indexes.md
@@ -19,12 +19,12 @@ aliases:
[plan backend memory]: /riak/kv/2.2.0/setup/planning/backend/memory
[use ref strong consistency]: /riak/kv/2.2.0/using/reference/strong-consistency
-> **Note: Riak Search preferred for querying**
+> **Note: Riak search preferred for querying**
>
> If you're interested in non-primary-key-based querying in Riak, i.e. if
you're looking to go beyond straightforward K/V operations, we now
-recommend [Riak Search](/riak/kv/2.2.0/developing/usage/search/) rather than secondary indexes for
-a variety of reasons. Most importantly, Riak Search has a far more
+recommend [Riak search](/riak/kv/2.2.0/developing/usage/search/) rather than secondary indexes for
+a variety of reasons. Most importantly, Riak search has a far more
capacious querying API and can be used with all of Riak's storage
backends.
@@ -37,7 +37,7 @@ Secondary indexes can be either a binary or string, such as
`sensor_1_data` or `admin_user` or `click_event`, or an integer, such as
`99` or `141121`.
-[Riak Search](/riak/kv/2.2.0/developing/usage/search/) serves analogous purposes but is quite
+[Riak search](/riak/kv/2.2.0/developing/usage/search/) serves analogous purposes but is quite
different because it parses key/value data itself and builds indexes on
the basis of Solr schemas.
@@ -75,7 +75,7 @@ you to discover them later. Indexing enables you to tag those objects
and find all objects with the same tag in a specified bucket later on.
2i is thus recommended when your use case requires an easy-to-use search
-mechanism that does not require a schema (as does [Riak Search](/riak/kv/2.2.0/using/reference/search/#schemas)) and a basic query interface, i.e. an interface that
+mechanism that does not require a schema (as does [Riak search](/riak/kv/2.2.0/using/reference/search/#schemas)) and a basic query interface, i.e. an interface that
enables an application to tell Riak things like "fetch all objects
tagged with the string `Milwaukee_Bucks`" or "fetch all objects tagged
with numbers between 1500 and 1509."
@@ -89,7 +89,7 @@ piggybacks off of read-repair.
* If your ring size exceeds 512 partitions, 2i can cause performance
issues in large clusters.
* When you need more than the exact match and range searches that 2i
- supports. If that's the case, we recommend checking out [Riak Search](/riak/kv/2.2.0/developing/usage/search/).
+ supports. If that's the case, we recommend checking out [Riak search](/riak/kv/2.2.0/developing/usage/search/).
* When you want to use composite queries. A query like
`last_name=zezeski AND state=MD` would have to be split into two
queries and the results merged (or it would need to involve
diff --git a/content/riak/kv/2.2.0/learn/concepts/strong-consistency.md b/content/riak/kv/2.2.0/learn/concepts/strong-consistency.md
index 704df76ba4..c85140ebc6 100644
--- a/content/riak/kv/2.2.0/learn/concepts/strong-consistency.md
+++ b/content/riak/kv/2.2.0/learn/concepts/strong-consistency.md
@@ -20,7 +20,7 @@ aliases:
Please Note:
-Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
Riak was originally designed as an [eventually consistent](/riak/kv/2.2.0/learn/concepts/eventual-consistency) system, fundamentally geared toward providing partition
diff --git a/content/riak/kv/2.2.0/learn/glossary.md b/content/riak/kv/2.2.0/learn/glossary.md
index 8589eced53..070695dc58 100644
--- a/content/riak/kv/2.2.0/learn/glossary.md
+++ b/content/riak/kv/2.2.0/learn/glossary.md
@@ -275,7 +275,7 @@ best described as "UNIX pipes for Riak."
## Riak Search
-Riak Search is a distributed, scalable, failure-tolerant, realtime,
+Riak search is a distributed, scalable, failure-tolerant, realtime,
full-text search engine integrating [Apache
Solr](https://lucene.apache.org/solr/) with Riak KV.
diff --git a/content/riak/kv/2.2.0/learn/use-cases.md b/content/riak/kv/2.2.0/learn/use-cases.md
index 04b47889b4..861d94c845 100644
--- a/content/riak/kv/2.2.0/learn/use-cases.md
+++ b/content/riak/kv/2.2.0/learn/use-cases.md
@@ -153,7 +153,7 @@ For storing log data from different systems, you could use unique
buckets for each system (e.g. `system1_log_data`, `system2_log_data`,
etc.) and write associated logs to the corresponding buckets. To
analyze that data, you could use Riak's MapReduce system for aggregation
-tasks, such as summing the counts of records for a date or Riak Search
+tasks, such as summing the counts of records for a date or Riak search
for more robust, text-based queries.
### Log Data Complex Case
@@ -237,7 +237,7 @@ For simple retrieval of a specific account, a user ID (plus perhaps a
secondary index on a username or email) is enough. If you foresee the
need to make queries on additional user attributes (e.g. creation time,
user type, or region), plan ahead and either set up additional secondary
-indexes or consider using [Riak Search][usage search] to index the JSON
+indexes or consider using [Riak search][usage search] to index the JSON
contents of the user account.
### User Accounts Community Examples
@@ -323,7 +323,7 @@ In Riak, you can store content of any kind, from HTML files to plain
text to JSON or XML or another document type entirely. Keep in mind that
data in Riak is opaque, with the exception of [Riak Data Types](/riak/kv/2.2.0/developing/data-types),
and so Riak won't "know" about the object unless it is indexed
-[using Riak Search][usage search] or [using secondary indexes][usage secondary-indexes].
+[using Riak search][usage search] or [using secondary indexes][usage secondary-indexes].
### Articles et al Complex Case
@@ -342,7 +342,7 @@ with comments would require your application to call from the posts
and comments buckets to assemble the view.
Other possible cases may involve performing operations on content beyond
-key/value pairs. [Riak Search][usage search] is recommended for use cases
+key/value pairs. [Riak search][usage search] is recommended for use cases
involving full-text search. For lighter-weight querying,
[using secondary indexes][usage secondary-indexes] \(2i) enables you to add metadata to objects to
either query for exact matches or to perform range queries. 2i also
diff --git a/content/riak/kv/2.2.0/release-notes.md b/content/riak/kv/2.2.0/release-notes.md
index 31ce1fadf4..2810d39d25 100644
--- a/content/riak/kv/2.2.0/release-notes.md
+++ b/content/riak/kv/2.2.0/release-notes.md
@@ -44,7 +44,7 @@ AAE trees are versioned, so if you choose to enable the 2.2.0 AAE improvements,
## Downgrading
-### Riak search users
+### Riak Search users
The upgrade to Solr 4.10.4 causes new data written to the cluster to be written in a format that is incompatible with earlier versions of Solr (and, therefore, earlier versions of Riak KV). The [Upgrade](/riak/kv/2.2.0/setup/upgrading/version/) and [Downgrade](/riak/kv/2.2.0/setup/downgrade/) documentation describes the steps you will need to take to reindex your data in a rolling fashion. Be aware this can make downgrades take a very long time, but will minimize exposure of the downgrading nodes to applications that utilize the Riak search feature.
diff --git a/content/riak/kv/2.2.0/setup/downgrade.md b/content/riak/kv/2.2.0/setup/downgrade.md
index 2aaf6c0679..75ea33492d 100644
--- a/content/riak/kv/2.2.0/setup/downgrade.md
+++ b/content/riak/kv/2.2.0/setup/downgrade.md
@@ -31,7 +31,7 @@ For every node in the cluster:
1. Stop Riak KV.
2. Back up Riak's `etc` and `data` directories.
3. Downgrade the Riak KV.
-4. Remove Riak Search index and temporary data.
+4. Remove Riak search index and temporary data.
5. Reconfigure Solr cores.
6. Start Riak KV and disable Riak search.
7. Monitor the reindex of the data.
@@ -47,7 +47,7 @@ For every node in the cluster:
| Feature | automatic | required | Notes |
|:---|:---:|:---:|:---|
-|Migration to Solr 4.10.4 |✔ | ✔| Applies to all clusters using Riak Search.
+|Migration to Solr 4.10.4 |✔ | ✔| Applies to all clusters using Riak search.
| Active Anti-Entropy file format changes | ✔ | | Can be opted out using a [capability](#aae_tree_capability)
@@ -65,7 +65,7 @@ While the cluster contains mixed version members, if you have not set the cluste
This is benign and similar to the `not_built` and `already_locked` errors which can be seen during normal AAE operation. These events will stop once the downgrade is complete.
{{% /note %}}
-### Stop Riak KV and remove Riak search index & temporary data
+### Stop Riak KV and remove Riak Search index & temporary data
1\. Stop Riak KV:
@@ -133,7 +133,7 @@ riak-admin wait-for-service yokozuna
8\. Run `riak attach`.
- 1. Run the following snippet to prevent this node from participating in distributed Riak Search queries:
+ 1. Run the following snippet to prevent this node from participating in distributed Riak search queries:
```
riak_core_node_watcher:service_down(yokozuna).
diff --git a/content/riak/kv/2.2.0/setup/installing/source/jvm.md b/content/riak/kv/2.2.0/setup/installing/source/jvm.md
index b38756a893..afb2156780 100644
--- a/content/riak/kv/2.2.0/setup/installing/source/jvm.md
+++ b/content/riak/kv/2.2.0/setup/installing/source/jvm.md
@@ -21,10 +21,10 @@ aliases:
[usage search]: /riak/kv/2.2.0/developing/usage/search
-If you are using [Riak Search 2.0][usage search], codename Yokozuna,
+If you are using [Riak search 2.0][usage search], codename Yokozuna,
you will need to install **Java 1.6 or later** to run [Apache
Solr](https://lucene.apache.org/solr/), the search platform that powers
-Riak Search.
+Riak search.
We recommend using Oracle's [JDK
7u25](http://www.oracle.com/technetwork/java/javase/7u25-relnotes-1955741.html).
@@ -35,7 +35,7 @@ page](http://www.oracle.com/technetwork/java/javase/documentation/index.html).
## Installing Solr on OS X
-If you're using Riak Search on Mac OS X, you may see the following
+If you're using Riak search on Mac OS X, you may see the following
error:
```java
diff --git a/content/riak/kv/2.2.0/setup/planning/backend/bitcask.md b/content/riak/kv/2.2.0/setup/planning/backend/bitcask.md
index b4467781d4..6da678c645 100644
--- a/content/riak/kv/2.2.0/setup/planning/backend/bitcask.md
+++ b/content/riak/kv/2.2.0/setup/planning/backend/bitcask.md
@@ -751,7 +751,7 @@ bitcask.expiry.grace_time = 1h
#### Automatic expiration and Riak Search
-If you are using [Riak Search][usage search] in conjunction with
+If you are using [Riak search][usage search] in conjunction with
Bitcask, please be aware that automatic expiry does not apply to [Search Indexes](../../../../developing/usage/search). If objects are indexed using Search,
those objects can be expired by Bitcask yet still registered in Search
indexes, which means that Search queries may return keys that no longer
diff --git a/content/riak/kv/2.2.0/setup/upgrading/checklist.md b/content/riak/kv/2.2.0/setup/upgrading/checklist.md
index 1638650a31..79937f43a0 100644
--- a/content/riak/kv/2.2.0/setup/upgrading/checklist.md
+++ b/content/riak/kv/2.2.0/setup/upgrading/checklist.md
@@ -88,7 +88,7 @@ We've compiled these considerations and questions into separate categories for y
place if `allow_mult` is set to `true`?
- Have you carefully weighed the [consistency trade-offs][concept eventual consistency] that must be made if `allow_mult` is set to `false`?
- Are all of your [apps replication properties][apps replication properties] configured correctly and uniformly across the cluster?
- - If you are using [Riak Search][usage search], is it enabled on all
+ - If you are using [Riak search][usage search], is it enabled on all
nodes? If you are not, has it been disabled on all nodes?
- If you are using [strong consistency][concept strong consistency] for some or all of your
data:
diff --git a/content/riak/kv/2.2.0/using/admin/riak-admin.md b/content/riak/kv/2.2.0/using/admin/riak-admin.md
index 144e773ba1..b88c65471d 100644
--- a/content/riak/kv/2.2.0/using/admin/riak-admin.md
+++ b/content/riak/kv/2.2.0/using/admin/riak-admin.md
@@ -585,7 +585,7 @@ riak-admin repair-2i kill
## search
The search command provides sub-commands for various administrative
-work related to the new Riak Search.
+work related to the new Riak search.
```bash
riak-admin search
@@ -634,7 +634,7 @@ riak-admin search switch-to-new-search
```
Switch handling of the HTTP `/solr/<index>/select` resource and
-protocol buffer query messages from legacy Riak Search to new Search
+protocol buffer query messages from legacy Riak search to new Search
(Yokozuna).
## services
diff --git a/content/riak/kv/2.2.0/using/cluster-operations/active-anti-entropy.md b/content/riak/kv/2.2.0/using/cluster-operations/active-anti-entropy.md
index 47dfd4d023..705af76dc3 100644
--- a/content/riak/kv/2.2.0/using/cluster-operations/active-anti-entropy.md
+++ b/content/riak/kv/2.2.0/using/cluster-operations/active-anti-entropy.md
@@ -265,9 +265,9 @@ AAE-related background tasks, analogous to [open files limit](../../performance/
## AAE and Riak Search
Riak's AAE subsystem works to repair object inconsistencies both with
-for normal key/value objects as well as data related to [Riak Search](../../../developing/usage/search). In particular, AAE acts on indexes stored in
+for normal key/value objects as well as data related to [Riak search](../../../developing/usage/search). In particular, AAE acts on indexes stored in
[Solr](http://lucene.apache.org/solr/), the search platform that drives
-Riak Search. Implementation details for AAE and Search can be found in
+Riak search. Implementation details for AAE and Search can be found in
the [Search Details](../../reference/search/#active-anti-entropy-aae)
documentation.
diff --git a/content/riak/kv/2.2.0/using/cluster-operations/inspecting-node.md b/content/riak/kv/2.2.0/using/cluster-operations/inspecting-node.md
index e119f92188..ec097f33a3 100644
--- a/content/riak/kv/2.2.0/using/cluster-operations/inspecting-node.md
+++ b/content/riak/kv/2.2.0/using/cluster-operations/inspecting-node.md
@@ -297,7 +297,7 @@ Stat | Description
`erlydtl_version` | [ErlyDTL](http://github.com/erlydtl/erlydtl)
`riak_control_version` | [Riak Control](http://github.com/basho/riak_control)
`cluster_info_version` | [Cluster Information](http://github.com/basho/cluster_info)
-`riak_search_version` | [Riak Search](http://github.com/basho/riak_search)
+`riak_search_version` | [Riak search](http://github.com/basho/riak_search)
`merge_index_version` | [Merge Index](http://github.com/basho/merge_index)
`riak_kv_version` | [Riak KV](http://github.com/basho/riak_kv)
`sidejob_version` | [Sidejob](http://github.com/basho/sidejob)
@@ -326,17 +326,17 @@ Stat | Description
### Riak Search Statistics
-The following statistics related to Riak Search message queues are
+The following statistics related to Riak search message queues are
available.
Stat | Description
-----------------------------|---------------------------------------------------
-`riak_search_vnodeq_max` | Maximum number of unprocessed messages all virtual node (vnode) message queues in the Riak Search subsystem have received on this node in the last minute
-`riak_search_vnodeq_mean` | Mean number of unprocessed messages all vnode message queues in the Riak Search subsystem have received on this node in the last minute
-`riak_search_vnodeq_median` | Median number of unprocessed messages all vnode message queues in the Riak Search subsystem have received on this node in the last minute
-`riak_search_vnodeq_min` | Minimum number of unprocessed messages all vnode message queues in the Riak Search subsystem have received on this node in the last minute
-`riak_search_vnodeq_total` | Total number of unprocessed messages all vnode message queues in the Riak Search subsystem have received on this node since it was started
-`riak_search_vnodes_running` | Total number of vnodes currently running in the Riak Search subsystem
+`riak_search_vnodeq_max` | Maximum number of unprocessed messages all virtual node (vnode) message queues in the Riak search subsystem have received on this node in the last minute
+`riak_search_vnodeq_mean` | Mean number of unprocessed messages all vnode message queues in the Riak search subsystem have received on this node in the last minute
+`riak_search_vnodeq_median` | Median number of unprocessed messages all vnode message queues in the Riak search subsystem have received on this node in the last minute
+`riak_search_vnodeq_min` | Minimum number of unprocessed messages all vnode message queues in the Riak search subsystem have received on this node in the last minute
+`riak_search_vnodeq_total` | Total number of unprocessed messages all vnode message queues in the Riak search subsystem have received on this node since it was started
+`riak_search_vnodes_running` | Total number of vnodes currently running in the Riak search subsystem
Note that under ideal operation and with the exception of
`riak_search_vnodes_running` these statistics should contain low values
@@ -449,7 +449,7 @@ Check | Description
`ring_membership` | Cluster membership validity
`ring_preflists` | Check if the ring satisfies `n_val`
`ring_size` | Check if the ring size valid
-`search` | Check whether Riak Search is enabled on all nodes
+`search` | Check whether Riak search is enabled on all nodes
The `--level` flag enables you to specify the log level and thus to
filter messages based on type. You can pass in any of the message types
diff --git a/content/riak/kv/2.2.0/using/cluster-operations/strong-consistency.md b/content/riak/kv/2.2.0/using/cluster-operations/strong-consistency.md
index f584539791..4bae97c6e7 100644
--- a/content/riak/kv/2.2.0/using/cluster-operations/strong-consistency.md
+++ b/content/riak/kv/2.2.0/using/cluster-operations/strong-consistency.md
@@ -14,7 +14,7 @@ toc: true
Please Note:
-Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak Search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
+Riak KV's strong consistency is an experimental feature and may be removed from the product in the future. Strong consistency is not commercially supported or production-ready. Strong consistency is incompatible with Multi-Datacenter Replication, Riak search, Bitcask Expiration, LevelDB Secondary Indexes, Riak Data Types and Commit Hooks. We do not recommend its usage in any production environment.
## Monitoring Strong Consistency
diff --git a/content/riak/kv/2.2.0/using/reference/handoff.md b/content/riak/kv/2.2.0/using/reference/handoff.md
index 82e7b4e7cf..4839618e27 100644
--- a/content/riak/kv/2.2.0/using/reference/handoff.md
+++ b/content/riak/kv/2.2.0/using/reference/handoff.md
@@ -121,7 +121,7 @@ handoff.use_background_manager = on
### Maximum Rejects
-If you're using Riak features such as [Riak Search](/riak/kv/2.2.0/developing/usage/search/),
+If you're using Riak features such as [Riak search](/riak/kv/2.2.0/developing/usage/search/),
those subsystems can block handoff of primary key/value data, i.e. data
that you interact with via normal reads and writes.
diff --git a/content/riak/kv/2.2.0/using/reference/search.md b/content/riak/kv/2.2.0/using/reference/search.md
index a6e66351a4..88b154c8b4 100644
--- a/content/riak/kv/2.2.0/using/reference/search.md
+++ b/content/riak/kv/2.2.0/using/reference/search.md
@@ -36,30 +36,30 @@ search, you should check out the [Using Search](/riak/kv/2.2.0/developing/usage/
In Erlang OTP, an "application" is a group of modules and Erlang
processes which together perform a specific task. The word application
is confusing because most people think of an application as an entire
-program such as Emacs or Photoshop. But Riak Search is just a sub-system
+program such as Emacs or Photoshop. But Riak search is just a sub-system
in Riak itself. Erlang applications are often stand-alone, but Riak
-Search is more like an appendage of Riak. It requires other subsystems
+search is more like an appendage of Riak. It requires other subsystems
like Riak Core and KV, but also extends their functionality by providing
search capabilities for KV data.
-The purpose of Riak Search is to bring more sophisticated and robust
+The purpose of Riak search is to bring more sophisticated and robust
query and search support to Riak. Many people consider Lucene and
programs built on top of it, such as Solr, as the standard for
open-source search. There are many successful applications built on
Lucene/Solr, and it sets the standard for the feature set that
developers and users expect. Meanwhile, Riak has a great story as a
-highly-available, distributed key/value store. Riak Search takes
+highly-available, distributed key/value store. Riak search takes
advantage of the fact that Riak already knows how to do the distributed
bits, combining its feature set with that of Solr, taking advantage of
the strengths of each.
-Riak Search is a mediator between Riak and Solr. There is nothing
+Riak search is a mediator between Riak and Solr. There is nothing
stopping a user from deploying these two programs separately, but this
would leave the user responsible for the glue between them. That glue
can be tricky to write. It requires dealing with monitoring, querying,
indexing, and dissemination of information.
-Unlike Solr by itself, Riak Search knows how to do all of the following:
+Unlike Solr by itself, Riak search knows how to do all of the following:
* Listen for changes in key/value (KV) data and to make the appropriate
changes to indexes that live in Solr. It also knows how to take a user
@@ -76,13 +76,13 @@ system (OS) process running a JVM which hosts Solr on the Jetty
application server. This OS process is a child of the Erlang OS process
running Riak.
-Riak Search has a `gen_server` process which monitors the JVM OS
+Riak search has a `gen_server` process which monitors the JVM OS
process. The code for this server is in `yz_solr_proc`. When the JVM
process crashes, this server crashes, causing its supervisor to restart
it.
If there is more than 1 restart in 45 seconds, the entire Riak node will
-be shut down. If Riak Search is enabled and Solr cannot function for
+be shut down. If Riak search is enabled and Solr cannot function for
some reason, the Riak node needs to go down so that the user will notice
and take corrective action.
@@ -92,7 +92,7 @@ This double monitoring along with the crash semantics means that neither
process may exist without the other. They are either both up or both
down.
-All other communication between Riak Search and Solr is performed via
+All other communication between Riak search and Solr is performed via
HTTP, including querying, indexing, and administration commands. The
ibrowse Erlang HTTP client is used to manage these communications as
both it and the Jetty container hosting Solr pool HTTP connections,
@@ -106,11 +106,11 @@ contains index entries for objects. Each such index maintains its own
set of files on disk---a critical difference from Riak KV, in which a
bucket is a purely logical entity and not physically disjoint at all. A
Solr index requires significantly less disk space than the corresponding
-legacy Riak Search index, depending on the Solr schema used.
+legacy Riak search index, depending on the Solr schema used.
Indexes may be associated with zero or more buckets. At creation time,
however, each index has no associated buckets---unlike the legacy Riak
-Search, indexes in the new Riak Search do not implicitly create bucket
+search, indexes in the new Riak search do not implicitly create bucket
associations, meaning that this must be done as a separate configuration
step.
@@ -142,7 +142,7 @@ flat collection of field-value pairs. "Flat" here means that a field's
value cannot be a nested structure of field-value pairs; the values are
treated as-is (non-composite is another way to say it).
-Because of this mismatch between KV and Solr, Riak Search must act as a
+Because of this mismatch between KV and Solr, Riak search must act as a
mediator between the two, meaning it must have a way to inspect a KV
object and create a structure which Solr can ingest for indexing. In
Solr this structure is called a **document**. This task of creating a
@@ -405,7 +405,7 @@ one with a smaller window.
## Statistics
-The Riak Search batching subsystem provides statistics on run-time characteristics of search system components. These statistics are accessible via the standard Riak KV stats interfaces and can be monitored through standard enterprise management tools.
+The Riak search batching subsystem provides statistics on run-time characteristics of search system components. These statistics are accessible via the standard Riak KV stats interfaces and can be monitored through standard enterprise management tools.
* `search_index_throughput_(count|one)` - The total count of objects that have been indexed, per Riak node, and the count of objects that have been indexed within the metric measurement window.
diff --git a/content/riak/kv/2.2.0/using/reference/secondary-indexes.md b/content/riak/kv/2.2.0/using/reference/secondary-indexes.md
index 8d29018d4c..40c5f492c1 100644
--- a/content/riak/kv/2.2.0/using/reference/secondary-indexes.md
+++ b/content/riak/kv/2.2.0/using/reference/secondary-indexes.md
@@ -18,11 +18,11 @@ aliases:
[usage bucket types]: /riak/kv/2.2.0/developing/usage/bucket-types
[use ref strong consistency]: /riak/kv/2.2.0/using/reference/strong-consistency
-> **Note: Riak Search preferred for querying**
+> **Note: Riak search preferred for querying**
>
> If you're interested in non-primary-key-based querying in Riak, i.e. if
you're looking to go beyond straightforward K/V operations, we now
-recommend [Riak Search](/riak/kv/2.2.0/developing/usage/search/) rather than secondary indexes for a variety of reasons. Riak Search has a far more capacious querying API and can be used with all of Riak's storage backends.
+recommend [Riak search](/riak/kv/2.2.0/developing/usage/search/) rather than secondary indexes for a variety of reasons. Riak search has a far more capacious querying API and can be used with all of Riak's storage backends.
This document provides implementation and other details for Riak's
[secondary indexes](/riak/kv/2.2.0/developing/usage/secondary-indexes/) \(2i) feature.
diff --git a/content/riak/kv/2.2.0/using/reference/statistics-monitoring.md b/content/riak/kv/2.2.0/using/reference/statistics-monitoring.md
index a5d337b191..b747e91e85 100644
--- a/content/riak/kv/2.2.0/using/reference/statistics-monitoring.md
+++ b/content/riak/kv/2.2.0/using/reference/statistics-monitoring.md
@@ -134,7 +134,7 @@ Metric | Also | Notes
:------|:-----|:------------------
```node_get_fsm_siblings_mean``` | ```_median```, ```_95```, ```_99```, ```_100``` | Number of siblings encountered during all GET operations by this node within the last minute. Watch for abnormally high sibling counts, especially max ones.
```node_get_fsm_objsize_mean``` | ```_median```, ```_95```, ```_99```, ```_100``` | Object size encountered by this node within the last minute. Abnormally large objects (especially paired with high sibling counts) can indicate sibling explosion.
-```riak_search_vnodeq_mean``` | ```_median```, ```_95```, ```_99```, ```_100``` | Number of unprocessed messages in the vnode message queues of the Riak Search subsystem on this node in the last minute. The queues give you an idea of how backed up Solr is getting.
+```riak_search_vnodeq_mean``` | ```_median```, ```_95```, ```_99```, ```_100``` | Number of unprocessed messages in the vnode message queues of the Riak search subsystem on this node in the last minute. The queues give you an idea of how backed up Solr is getting.
```search_index_fail_one``` | | Number of "Failed to index document" errors Search encountered for the last minute
```pbc_active``` | | Number of currently active protocol buffer connections
```pbc_connects``` | | Number of new protocol buffer connections established during the last minute
diff --git a/content/riak/kv/2.2.0/using/repair-recovery/errors.md b/content/riak/kv/2.2.0/using/repair-recovery/errors.md
index 3380d878d9..db08a36a60 100644
--- a/content/riak/kv/2.2.0/using/repair-recovery/errors.md
+++ b/content/riak/kv/2.2.0/using/repair-recovery/errors.md
@@ -328,7 +328,7 @@ exit with reason bad return value: {error,eaddrinuse} in context start_error | A
exited with reason: eaddrnotavail in gen_server:init_it/6 line 320 | An error like this example can result when Riak cannot bind to the addresses specified in the configuration. In this case, you should verify HTTP and Protocol Buffers addresses in `app.config` and ensure that the ports being used are not in the privileged (1-1024) range as the `riak` user will not have access to such ports.
gen_server riak_core_capability terminated with reason: no function clause matching orddict:fetch('riak@192.168.2.2', []) line 72 | Error output like this example can indicate that a previously running Riak node with an original `-name` value in `vm.args` has been modified by simply changing the value in `vm.args` and not properly through `riak-admin cluster replace`.
** Configuration error: [FRAMEWORK-MIB]: missing context.conf file => generating a default file | This error is commonly encountered when starting Riak Enterprise without prior [SNMP](/riak/kv/2.2.0/using/reference/snmp) configuration.
-RPC to 'node@example.com' failed: {'EXIT', {badarg, [{ets,lookup, [schema_table,<<"search-example">>], []} {riak_search_config,get_schema,1, [{file,"src/riak_search_config.erl"}, {line,69}]} ...| This error can be caused when attempting to use Riak Search without first enabling it in each node's `app.config`. See the [configuration files][config reference] documentation for more information on enabling Riak Search.
+RPC to 'node@example.com' failed: {'EXIT', {badarg, [{ets,lookup, [schema_table,<<"search-example">>], []} {riak_search_config,get_schema,1, [{file,"src/riak_search_config.erl"}, {line,69}]} ...| This error can be caused when attempting to use Riak search without first enabling it in each node's `app.config`. See the [configuration files][config reference] documentation for more information on enabling Riak search.
### More
diff --git a/content/riak/kv/2.2.0/using/repair-recovery/repairs.md b/content/riak/kv/2.2.0/using/repair-recovery/repairs.md
index 48c8d254b5..d920ae2f2e 100644
--- a/content/riak/kv/2.2.0/using/repair-recovery/repairs.md
+++ b/content/riak/kv/2.2.0/using/repair-recovery/repairs.md
@@ -57,7 +57,7 @@ riak-admin repair-2i kill
## Repairing Search Indexes
-Riak Search indexes currently have no form of anti-entropy (such as read-repair). Furthermore, for performance and load balancing reasons, Search reads from one random node. This means that when a replica loss has occurred, inconsistent results may be returned.
+Riak search indexes currently have no form of anti-entropy (such as read-repair). Furthermore, for performance and load balancing reasons, Search reads from one random node. This means that when a replica loss has occurred, inconsistent results may be returned.
### Running a Repair
diff --git a/content/riak/kv/2.2.0/using/repair-recovery/secondary-indexes.md b/content/riak/kv/2.2.0/using/repair-recovery/secondary-indexes.md
index 170cdf9df3..47c251a2ec 100644
--- a/content/riak/kv/2.2.0/using/repair-recovery/secondary-indexes.md
+++ b/content/riak/kv/2.2.0/using/repair-recovery/secondary-indexes.md
@@ -51,7 +51,7 @@ riak-admin repair-2i kill
## Repairing Search Indexes
-Riak Search indexes currently have no form of anti-entropy (such as read-repair). Furthermore, for performance and load balancing reasons, Search reads from one random node. This means that when a replica loss has occurred, inconsistent results may be returned.
+Riak search indexes currently have no form of anti-entropy (such as read-repair). Furthermore, for performance and load balancing reasons, Search reads from one random node. This means that when a replica loss has occurred, inconsistent results may be returned.
### Running a Repair
diff --git a/content/riak/kv/2.2.0/using/security/basics.md b/content/riak/kv/2.2.0/using/security/basics.md
index d5eeed7158..9a9c1da1ed 100644
--- a/content/riak/kv/2.2.0/using/security/basics.md
+++ b/content/riak/kv/2.2.0/using/security/basics.md
@@ -44,7 +44,7 @@ when turning on Riak security. Missing one of these steps will almost
certainly break your application, so make sure that you have done each
of the following **before** enabling security:
-1. Make certain that the original Riak Search (version 1) and link
+1. Make certain that the original Riak search (version 1) and link
walking are not required. Enabling security will break this
functionality. If you wish to use security and Search together, you
will need to use the [new Search feature](/riak/kv/2.2.0/developing/usage/search/).
@@ -487,11 +487,11 @@ Permission | Operation
### Search Query Permission (Riak Search version 1)
Security is incompatible with the original (and now deprecated) Riak
-Search. Riak Search version 1 will stop working if security is enabled.
+search. Riak search version 1 will stop working if security is enabled.
### Search Query Permissions (Riak Search version 2, aka Yokozuna)
-If you are using the new Riak Search, i.e. the Solr-compatible search
+If you are using the new Riak search, i.e. the Solr-compatible search
capabilities included with Riak versions 2.0 and greater, the following
search-related permissions can be granted/revoked:
@@ -508,8 +508,8 @@ disabled, you will get the following error:
>
> `{error,{unknown_permission,"search.query"}}`
>
-> More information on Riak Search and how to enable it can be found in the
-[Riak Search Settings](/riak/kv/2.2.0/configuring/search/) document.
+> More information on Riak search and how to enable it can be found in the
+[Riak search Settings](/riak/kv/2.2.0/configuring/search/) document.
#### Usage Examples
diff --git a/content/riak/ts/1.0.0/releasenotes.md b/content/riak/ts/1.0.0/releasenotes.md
index e9a052a73c..01d14b0bbe 100644
--- a/content/riak/ts/1.0.0/releasenotes.md
+++ b/content/riak/ts/1.0.0/releasenotes.md
@@ -63,7 +63,7 @@ Riak TS is compatible with the following operating systems:
## Known Issues
* AAE must be turned off.
-* Riak Search is not supported.
+* Riak search is not supported.
* Multi-Datacenter Replication is not supported.
* When deleting, a PUT occurs to write the tombstone, then a GET reaps the tombstone. Since PUT and GET are asynchronous, it is possible for the GET to occur before the PUT resulting in the data not actually being deleted. If this occurs, issue the DELETE again.
* It is possible to write incorrect data (data that does not match the schema) into rows other than the first row. For instance, it is possible to input an integer for 'double'. In these cases, the write will succeed but any READ or query that includes the incorrect row will fail.
\ No newline at end of file
diff --git a/content/riak/ts/1.1.0/releasenotes.md b/content/riak/ts/1.1.0/releasenotes.md
index 134e8a6bb5..2cfe917265 100644
--- a/content/riak/ts/1.1.0/releasenotes.md
+++ b/content/riak/ts/1.1.0/releasenotes.md
@@ -116,6 +116,6 @@ Riak TS is compatible with the following operating systems:
## Known Issues
* AAE must be turned off.
-* Riak Search is not supported.
+* Riak search is not supported.
* Multi-Datacenter Replication is not supported.
* Arithmetic operations and aggregates cannot currently be combined.
diff --git a/content/riak/ts/1.2.0/releasenotes.md b/content/riak/ts/1.2.0/releasenotes.md
index fccdd3a4d9..e90684178c 100644
--- a/content/riak/ts/1.2.0/releasenotes.md
+++ b/content/riak/ts/1.2.0/releasenotes.md
@@ -56,5 +56,5 @@ Riak TS is compatible with the following operating systems:
* Negation of an aggregate function returns an error. You can use negation by structuring any aggregate you'd like to negate as follows: `-1*COUNT(...)`.
* Rolling upgrades are not supported.
* AAE must be turned off.
-* Riak Search is not supported.
+* Riak search is not supported.
* Multi-Datacenter Replication is not supported.
\ No newline at end of file
diff --git a/content/riak/ts/1.3.0/releasenotes.md b/content/riak/ts/1.3.0/releasenotes.md
index 173155287c..560867ae27 100644
--- a/content/riak/ts/1.3.0/releasenotes.md
+++ b/content/riak/ts/1.3.0/releasenotes.md
@@ -92,6 +92,6 @@ Riak TS is compatible with the following:
## Known Issues
* AAE must be turned off.
-* Riak Search is not supported for TS data.
+* Riak search is not supported for TS data.
* HTTP API security is not supported. Security checks are included in the code path, but the permissions are not registered with riak_core, so enabling security means disabling any TS functionality. [[code](https://github.com/basho/riak_kv/blob/riak_ts-develop/src/riak_kv_app.erl#L214-L215)]
* Quanta with a '0' or negative integers are not supported and will cause errors.
diff --git a/content/riak/ts/1.3.0/using/querying.md b/content/riak/ts/1.3.0/using/querying.md
index 8a0f83063b..9cd4fa2731 100644
--- a/content/riak/ts/1.3.0/using/querying.md
+++ b/content/riak/ts/1.3.0/using/querying.md
@@ -324,7 +324,7 @@ The following operators are supported for each data type:
* Column to column comparisons are not currently supported.
* Secondary indexing (2i) will not work with Riak TS.
-* Riak Search will not work with Riak TS.
+* Riak search will not work with Riak TS.
* Queries are limited by the number of quanta they can span when specifying the time limits.
diff --git a/content/riak/ts/1.3.1/releasenotes.md b/content/riak/ts/1.3.1/releasenotes.md
index 556f6259c2..00613396d9 100644
--- a/content/riak/ts/1.3.1/releasenotes.md
+++ b/content/riak/ts/1.3.1/releasenotes.md
@@ -111,6 +111,6 @@ Riak TS is compatible with the following:
### Known Issues
* AAE must be turned off.
-* Riak Search is not supported for TS data.
+* Riak search is not supported for TS data.
* HTTP API security is not supported. Security checks are included in the code path, but the permissions are not registered with riak_core, so enabling security means disabling any TS functionality. [[code](https://github.com/basho/riak_kv/blob/riak_ts-develop/src/riak_kv_app.erl#L214-L215)]
* Quanta with a '0' or negative integers are not supported and will cause errors.
diff --git a/content/riak/ts/1.3.1/using/querying.md b/content/riak/ts/1.3.1/using/querying.md
index 7b03162d6e..02ce095a82 100644
--- a/content/riak/ts/1.3.1/using/querying.md
+++ b/content/riak/ts/1.3.1/using/querying.md
@@ -324,7 +324,7 @@ The following operators are supported for each data type:
* Column to column comparisons are not currently supported.
* Secondary indexing (2i) will not work with Riak TS.
-* Riak Search will not work with Riak TS.
+* Riak search will not work with Riak TS.
* Queries are limited by the number of quanta they can span when specifying the time limits.
diff --git a/content/riak/ts/1.4.0/releasenotes.md b/content/riak/ts/1.4.0/releasenotes.md
index 81de75ee7c..13dc616ff7 100644
--- a/content/riak/ts/1.4.0/releasenotes.md
+++ b/content/riak/ts/1.4.0/releasenotes.md
@@ -93,6 +93,6 @@ Riak TS is compatible with the following:
* The list_keys API may be unreliable in clusters containing a mix of TS 1.3.1 and TS 1.4 nodes.
* AAE must be turned off.
-* Riak Search, and subsequently Solr, is not supported for TS.
+* Riak search, and subsequently Solr, is not supported for TS.
* HTTP API security is not supported. Security checks are included in the code path, but the permissions are not registered with riak_core, so enabling security in HTTP means disabling any TS functionality. See the code [here](https://github.com/basho/riak_kv/blob/riak_ts-develop/src/riak_kv_app.erl#L214-L215).
* Bitcask backend must not be used with Riak TS.
diff --git a/content/riak/ts/1.4.0/using/querying/guidelines.md b/content/riak/ts/1.4.0/using/querying/guidelines.md
index 0d6c8cc60b..b361c9b02f 100644
--- a/content/riak/ts/1.4.0/using/querying/guidelines.md
+++ b/content/riak/ts/1.4.0/using/querying/guidelines.md
@@ -136,7 +136,7 @@ The following operators are supported for each data type:
* Column to column comparisons are not currently supported.
* Secondary indexing (2i) will not work with Riak TS.
-* Riak Search will not work with Riak TS.
+* Riak search will not work with Riak TS.
* Queries are limited by the number of quanta they can span when specifying the time limits.
diff --git a/extras/data/blog_post_schema.xml b/extras/data/blog_post_schema.xml
index 5af7f4d986..99fda9a2a3 100644
--- a/extras/data/blog_post_schema.xml
+++ b/extras/data/blog_post_schema.xml
@@ -9,7 +9,7 @@
-
+