diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml
index 323adee518..dd343d7972 100644
--- a/.github/workflows/build.yml
+++ b/.github/workflows/build.yml
@@ -10,13 +10,6 @@ jobs:
build:
runs-on: ubuntu-latest
steps:
- - name: Log current branches and repositories
- run: |
- echo "Current ref: $GITHUB_REF"
- echo "Base ref: $GITHUB_BASE_REF"
- echo "Head ref: $GITHUB_HEAD_REF"
- echo "Repository: $GITHUB_REPOSITORY"
- echo "Head repository: ${{ github.event.pull_request.head.repo.full_name }}"
- name: Only allow pull requests based on master from the develop branch of the current repository
if: ${{ github.base_ref == 'master' && !(github.head_ref == 'develop' && github.event.pull_request.head.repo.full_name == github.repository) }}
run: |
@@ -24,25 +17,27 @@ jobs:
echo "Please check your base branch as it should be develop by default"
exit 1
- uses: actions/checkout@v4
- - uses: actions/setup-python@v4
+ - uses: actions/setup-python@v5
with:
python-version: 3.9
- name: Install Python dependencies
uses: py-actions/py-dependency-install@v4
+ - name: Install Python libs
+ run: pip3 install -r ./requirements.txt
- uses: ruby/setup-ruby@v1
with:
ruby-version: 3.2
bundler-cache: true
- - uses: seanmiddleditch/gha-setup-ninja@v4
+ - uses: seanmiddleditch/gha-setup-ninja@v6
with:
version: 1.10.2
- name: Install arm-none-eabi-gcc GNU Arm Embedded Toolchain
- uses: carlosperate/arm-none-eabi-gcc-action@v1.8.0
+ uses: carlosperate/arm-none-eabi-gcc-action@v1.10.0
- name: Install Doxygen
run: |
- wget https://www.doxygen.nl/files/doxygen-1.9.6.linux.bin.tar.gz
- tar xf doxygen-1.9.6.linux.bin.tar.gz -C "$HOME"
- echo "$HOME/doxygen-1.9.6/bin" >> $GITHUB_PATH
+ wget https://www.doxygen.nl/files/doxygen-1.10.0.linux.bin.tar.gz
+ tar xf doxygen-1.10.0.linux.bin.tar.gz -C "$HOME"
+ echo "$HOME/doxygen-1.10.0/bin" >> $GITHUB_PATH
- name: Build Doxygen documentation
run: make build_doxygen_adoc
- name: Build documentation
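The branch-guard step retained above fails any pull request into `master` that does not come from the `develop` branch of this same repository. As a rough sketch (plain Python, not part of the workflow itself), the `if:` expression behaves like:

```python
def should_block(base_ref: str, head_ref: str, head_repo: str, repo: str) -> bool:
    """Mirror the workflow's guard: block any PR targeting master
    unless it is develop -> master within the same repository."""
    return base_ref == "master" and not (head_ref == "develop" and head_repo == repo)

# develop -> master in the same repo is allowed
assert not should_block("master", "develop",
                        "raspberrypi/documentation", "raspberrypi/documentation")
# a PR into master from a fork is blocked
assert should_block("master", "develop",
                    "someuser/documentation", "raspberrypi/documentation")
# PRs into develop are never affected by this guard
assert not should_block("develop", "my-feature",
                        "someuser/documentation", "raspberrypi/documentation")
```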
diff --git a/.github/workflows/stale.yml b/.github/workflows/stale.yml
index 108becae14..9ffb99cfcc 100644
--- a/.github/workflows/stale.yml
+++ b/.github/workflows/stale.yml
@@ -13,7 +13,7 @@ jobs:
pull-requests: write
steps:
- - uses: actions/stale@v8
+ - uses: actions/stale@v9
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
stale-issue-message: 'This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.'
diff --git a/.gitignore b/.gitignore
index 2984493b57..1d7ee958d6 100644
--- a/.gitignore
+++ b/.gitignore
@@ -4,3 +4,4 @@ build
build-pico-sdk-docs
documentation/html
documentation/asciidoc/pico-sdk
+.venv
diff --git a/.gitmodules b/.gitmodules
index 60c9ade065..9f315972ef 100644
--- a/.gitmodules
+++ b/.gitmodules
@@ -6,3 +6,8 @@
path = lib/pico-examples
url = https://github.com/raspberrypi/pico-examples.git
branch = master
+
+[submodule "doxygentoasciidoc"]
+ path = lib/doxygentoasciidoc
+ url = https://github.com/raspberrypi/doxygentoasciidoc.git
+ branch = main
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 794ca9bebc..37fef6b8ea 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -1,37 +1,163 @@
-# Contributing to Raspberry Pi Documentation
+# Contributing to the Raspberry Pi Documentation
-The Raspberry Pi Documentation website is built from Asciidoc source using Asciidoctor and a Jekyll and Python toolchain. The website is automatically deployed to the raspberrypi.com site — pushed to production — using GitHub Actions when a push to the `master` branch occurs.
+The Raspberry Pi Documentation website is built from Asciidoc source using:
-Full instructions for building and running the documentation website locally can be found in the top-level [README.md](README.md) file.
+* [Asciidoctor](https://asciidoctor.org/)
+* [Jekyll](https://jekyllrb.com/)
+* [jekyll-asciidoc](https://github.com/asciidoctor/jekyll-asciidoc)
+* Python
-## How to Contribute
+The website automatically deploys to [www.raspberrypi.com/documentation](https://www.raspberrypi.com/documentation) using GitHub Actions when new commits appear in the `master` branch.
-In order to contribute new or updated documentation, you must first create a GitHub account and fork the original repository to your own account. You can make changes, save them in your forked repository, then [make a pull request](https://docs.github.com/en/github/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request-from-a-fork) against this repository. The pull request will appear [in the repository](https://github.com/raspberrypi/documentation/pulls) where it can be assessed by the maintainers, copy-edited, and if appropriate, merged with the official repository.
+## Contribute
-Unless you are opening a pull request which will only make small corrections, for instance, to correct a typo, you are more likely to get traction for your changes if you [open an issue](https://github.com/raspberrypi/documentation/issues) first to discuss the proposed changes. Issues and Pull Requests older than 60 days will [automatically be marked as stale](https://github.com/actions/stale) and then closed 7 days later if there still hasn't been any further activity.
+To contribute or update documentation:
-**NOTE:** The default [branch](https://github.com/raspberrypi/documentation/branches) of the repository is the `develop` branch, and this should be the branch you get by default when you initially checkout the repository. You should target any pull requests against the `develop` branch, pull requests against the `master` branch will automatically fail checks and not be accepted.
+1. Create a fork of this repository on your GitHub account.
-**NOTE:** Issues and Pull Requests older than 60 days will [automatically be marked as stale](https://github.com/actions/stale) and then closed 7 days later if there still hasn't been any further activity.
+1. Make changes in your fork. Start from the default `develop` branch.
-Before starting to write your contribution to the documentation, you should take a look at the [style guide](https://github.com/raspberrypi/style-guide/blob/master/style-guide.md).
+1. Read our [style guide](https://github.com/raspberrypi/style-guide/blob/master/style-guide.md) to ensure that your changes are consistent with the rest of our documentation. Since Raspberry Pi is a British company, be sure to include all of your extra `u`s and transfigure those `z`s (pronounced 'zeds') into `s`s!
-**IMPORTANT**: Because the documentation makes use of the Asciidoc `include` statement, the `xref:` statements inside the documentation do not link back to the correct pages on Github, as Github does not support Asciidoc include functionality (see [#2005](https://github.com/raspberrypi/documentation/issues/2005)). However, these links work correctly when the HTML documentation is built and deployed. Please do not submit Pull Requests fixing link destinations unless you're sure that the link is broken [on the documentation site](https://www.raspberrypi.com/documentation/) itself.
+1. [Open a pull request](https://docs.github.com/en/github/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request-from-a-fork) against this repository.
-## Type of Content
+1. The maintainers will assess and copy-edit the PR. This can take anywhere from a few minutes to a few days, depending on the size of your PR, the time of year, and the availability of the maintainers.
-We welcome contributions from the community, ranging from correcting small typos all the way through to adding entirely new sections to the documentation. However, going forward we're going to be fairly targeted about what sorts of content we add to the documentation. We are looking to keep the repository, and the documentation, focused on Raspberry Pi-specific things, rather than having generic Linux or computing content.
+1. After making any requested improvements to your PR, the maintainers will accept the PR and merge your changes into `develop`.
-We are therefore deprecating the more generic documentation around using the Linux operating system, ahead of removing these sections entirely at some point in the future as part of a larger update to the documentation site. This move is happening as we feel these sort of more general topics are, ten years on from when the documentation was initially written, now much better covered elsewhere on the web.
+1. When the maintainers next release the documentation by merging `develop` into `master`, your changes will go public on the production documentation site.
-As such, we're not accepting PRs against these sections unless they're correcting errors.
+Alternatively, [open an issue](https://github.com/raspberrypi/documentation/issues) to discuss proposed changes.
-**NOTE:** We are willing to consider toolchain-related contributions, but changes to the toolchain may have knock-on effects in other places, so it is possible that apparently benign pull requests that make toolchain changes could be refused for fairly opaque reasons.
+## Build
-## Third-Party Services
+### Install dependencies
-In general, we will not accept content that is specific to an individual third-party service or product. We will also not embed, or add links, to YouTube videos showing tutorials on how to configure your Raspberry Pi.
+To build the Raspberry Pi documentation locally, you'll need Ruby, Python, and the Ninja build system.
-## Licensing
+#### Linux
+
+Use `apt` to install the dependencies:
+
+```console
+$ sudo apt install -y ruby ruby-dev python3 python3-pip make ninja-build
+```
+
+Then, append the following lines to your `~/.bashrc` file (or equivalent shell configuration):
+
+```bash
+export GEM_HOME="$(ruby -e 'puts Gem.user_dir')"
+export PATH="$PATH:$GEM_HOME/bin"
+```
+
+Close and re-launch your terminal window to use the new dependencies and configuration.
+
+#### macOS
+
+If you don't already have it, we recommend installing the [Homebrew](https://brew.sh/) package manager:
+
+```console
+$ /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
+```
+
+Next, use Homebrew to install Ruby:
+
+```console
+$ brew install ruby
+```
+
+After installing Ruby, follow the instructions provided by Homebrew to make your new Ruby version easily accessible from the command line.
+
+Then, use Homebrew to install the most recent version of Python:
+
+```console
+$ brew install python
+```
+
+Then, install the [Ninja build system](https://formulae.brew.sh/formula/ninja#default):
+
+```console
+$ brew install ninja
+```
+
+### Set up environment
+
+Use the `gem` package manager to install the [Ruby bundler](https://bundler.io/), which this repository uses to manage Ruby dependencies:
+
+```console
+$ gem install bundler
+```
+
+Then, install the required Ruby gems:
+
+```console
+$ bundle install
+```
+
+Configure a Python virtual environment for this project:
+
+```console
+$ python -m venv .env
+```
+
+Activate the virtual environment:
+
+```console
+$ source .env/bin/activate
+```
+
+> [!TIP]
+> When you're using a virtual environment, you should see a `(.env)` prefix at the start of your terminal prompt. At any time, run the `deactivate` command to exit the virtual environment.
+
+In the virtual environment, install the required Python modules:
+
+```console
+$ pip3 install -r requirements.txt
+```
+
+### Build HTML
+
+> [!IMPORTANT]
+> If you configured a Python virtual environment as recommended in the previous section, **always** run `source .env/bin/activate` before building. You must activate the virtual environment to access the Python dependencies installed in it.
+
+To build the documentation and start a local server to preview the built site, run the following command:
+
+```console
+$ make serve_html
+```
+
+You can access the local preview server at [http://127.0.0.1:4000/documentation/](http://127.0.0.1:4000/documentation/).
+
+> [!TIP]
+> To delete and rebuild the documentation site, run `make clean`, then re-run the build command. You'll need to do this every time you add or remove an Asciidoc, image, or video file.
+
+### Build the Pico C SDK Doxygen documentation
+
+The Raspberry Pi documentation site includes a section of generated Asciidoc that we build from the [Doxygen Pico SDK documentation](https://github.com/raspberrypi/pico-sdk).
+
+We use the tooling in this repository and [doxygentoasciidoc](https://github.com/raspberrypi/doxygentoasciidoc) to generate that documentation section. By default, local documentation builds don't include this section because it takes a bit longer to build (tens of seconds) than the rest of the site.
+
+Building the Pico C SDK Doxygen documentation requires the following additional packages (installed here with `apt`; use your platform's package manager elsewhere):
+
+```console
+$ sudo apt install -y cmake gcc-arm-none-eabi doxygen graphviz
+```
+
+Then, initialise the Git submodules used in the Pico C SDK section build:
+
+```console
+$ git submodule update --init
+```
+
+Run the following command to build the Pico C SDK section Asciidoc files from the Doxygen source:
+
+```console
+$ make build_doxygen_adoc
+```
+
+The next time you build the documentation site, you'll see the Pico C SDK section in your local preview.
+
+> [!TIP]
+> To delete and rebuild the generated files, run `make clean_doxygen_xml`, then re-run the build command.
-The documentation is under a [Creative Commons Attribution-Sharealike](https://creativecommons.org/licenses/by-sa/4.0/) (CC BY-SA 4.0) licence. By contributing content to this repository, you are agreeing to place your contributions under this licence.
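The `make build_doxygen_adoc` step described above drives the doxygentoasciidoc converter over Doxygen's XML output. As a toy illustration only — the real converter's internals are not shown in this diff, and the element names below are just the standard Doxygen XML ones — a minimal XML-to-AsciiDoc pass might look like:

```python
import xml.etree.ElementTree as ET

def doxygen_xml_to_adoc(xml_text: str) -> str:
    """Toy sketch of a Doxygen-XML-to-AsciiDoc pass; the real
    doxygentoasciidoc tool handles far more node types than this."""
    root = ET.fromstring(xml_text)
    lines = []
    for compound in root.iter("compounddef"):
        # Each Doxygen group/compound becomes a level-2 AsciiDoc heading
        lines.append(f"== {compound.findtext('title', default='')}")
        for member in compound.iter("memberdef"):
            lines.append(f"=== {member.findtext('name', default='')}")
            para = member.find("briefdescription/para")
            if para is not None:
                lines.append("".join(para.itertext()))
    return "\n".join(lines)

sample = """<doxygen>
  <compounddef kind="group">
    <title>hardware_gpio</title>
    <memberdef kind="function">
      <name>gpio_init</name>
      <briefdescription><para>Initialise a GPIO.</para></briefdescription>
    </memberdef>
  </compounddef>
</doxygen>"""

print(doxygen_xml_to_adoc(sample))
```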
diff --git a/Gemfile b/Gemfile
index 3b7916be61..bb73401e41 100644
--- a/Gemfile
+++ b/Gemfile
@@ -8,10 +8,10 @@ source "https://rubygems.org"
#
# This will help ensure the proper Jekyll version is running.
# Happy Jekylling!
-gem "jekyll", "~> 4.3.1"
+gem "jekyll", "~> 4.4.1"
# This is the default theme for new Jekyll sites. You may change this to anything you like.
-gem "minima", "~> 2.0"
+gem "minima", "~> 2.5"
# If you want to use GitHub Pages, remove the "gem "jekyll"" above and
# uncomment the line below. To upgrade, run `bundle update github-pages`.
@@ -21,6 +21,8 @@ gem "minima", "~> 2.0"
group :jekyll_plugins do
gem "jekyll-feed", "~> 0.17"
gem 'jekyll-asciidoc'
+ gem 'asciidoctor'
+ gem 'asciidoctor-tabs', ">= 1.0.0.beta.6"
end
# Windows does not include zoneinfo files, so bundle the tzinfo-data gem
@@ -31,10 +33,10 @@ install_if -> { RUBY_PLATFORM =~ %r!mingw|mswin|java! } do
end
# Performance-booster for watching directories on Windows
-gem "wdm", "~> 0.1.0", :install_if => Gem.win_platform?
+gem "wdm", "~> 0.2.0", :install_if => Gem.win_platform?
-gem "nokogiri", "~> 1.15"
+gem "nokogiri", "~> 1.18"
# So we can add custom element templates
-gem 'slim', '~> 5.2.0'
-gem 'thread_safe', '~> 0.3.5'
\ No newline at end of file
+gem 'slim', '~> 5.2.1'
+gem 'thread_safe', '~> 0.3.5'
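The `~>` ("pessimistic") constraints used throughout this Gemfile allow patch-level updates while blocking the next minor release; a two-component constraint like `~> 2.5` instead allows minor updates below 3.0. RubyGems' own `Gem::Requirement` class demonstrates the semantics:

```ruby
require "rubygems"

# "~> 4.4.1" means >= 4.4.1 and < 4.5.0
req = Gem::Requirement.new("~> 4.4.1")
puts req.satisfied_by?(Gem::Version.new("4.4.9"))  # true
puts req.satisfied_by?(Gem::Version.new("4.5.0"))  # false
```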
diff --git a/Gemfile.lock b/Gemfile.lock
index 8de9a5723e..385a34392c 100644
--- a/Gemfile.lock
+++ b/Gemfile.lock
@@ -1,32 +1,42 @@
GEM
remote: https://rubygems.org/
specs:
- addressable (2.8.5)
- public_suffix (>= 2.0.2, < 6.0)
- asciidoctor (2.0.20)
+ addressable (2.8.7)
+ public_suffix (>= 2.0.2, < 7.0)
+ asciidoctor (2.0.23)
+ asciidoctor-tabs (1.0.0.beta.6)
+ asciidoctor (>= 2.0.0, < 3.0.0)
+ base64 (0.2.0)
+ bigdecimal (3.1.9)
colorator (1.1.0)
- concurrent-ruby (1.2.2)
+ concurrent-ruby (1.3.5)
+ csv (3.3.2)
em-websocket (0.5.3)
eventmachine (>= 0.12.9)
http_parser.rb (~> 0)
eventmachine (1.2.7)
- ffi (1.16.3)
+ ffi (1.17.1)
forwardable-extended (2.6.0)
- google-protobuf (3.25.0)
+ google-protobuf (4.29.3)
+ bigdecimal
+ rake (>= 13)
http_parser.rb (0.8.0)
- i18n (1.14.1)
+ i18n (1.14.7)
concurrent-ruby (~> 1.0)
- jekyll (4.3.2)
+ jekyll (4.4.1)
addressable (~> 2.4)
+ base64 (~> 0.2)
colorator (~> 1.0)
+ csv (~> 3.0)
em-websocket (~> 0.5)
i18n (~> 1.0)
jekyll-sass-converter (>= 2.0, < 4.0)
jekyll-watch (~> 2.0)
+ json (~> 2.6)
kramdown (~> 2.3, >= 2.3.1)
kramdown-parser-gfm (~> 1.0)
liquid (~> 4.0)
- mercenary (>= 0.3.6, < 0.5)
+ mercenary (~> 0.3, >= 0.3.6)
pathutil (~> 0.9)
rouge (>= 3.0, < 5.0)
safe_yaml (~> 1.0)
@@ -37,44 +47,45 @@ GEM
jekyll (>= 3.0.0)
jekyll-feed (0.17.0)
jekyll (>= 3.7, < 5.0)
- jekyll-sass-converter (3.0.0)
- sass-embedded (~> 1.54)
- jekyll-seo-tag (2.7.1)
+ jekyll-sass-converter (3.1.0)
+ sass-embedded (~> 1.75)
+ jekyll-seo-tag (2.8.0)
jekyll (>= 3.8, < 5.0)
jekyll-watch (2.2.1)
listen (~> 3.0)
- kramdown (2.4.0)
- rexml
+ json (2.9.1)
+ kramdown (2.5.1)
+ rexml (>= 3.3.9)
kramdown-parser-gfm (1.1.0)
kramdown (~> 2.0)
liquid (4.0.4)
- listen (3.8.0)
+ listen (3.9.0)
rb-fsevent (~> 0.10, >= 0.10.3)
rb-inotify (~> 0.9, >= 0.9.10)
mercenary (0.4.0)
- mini_portile2 (2.8.5)
- minima (2.5.1)
+ mini_portile2 (2.8.8)
+ minima (2.5.2)
jekyll (>= 3.5, < 5.0)
jekyll-feed (~> 0.9)
jekyll-seo-tag (~> 2.1)
- nokogiri (1.15.5)
+ nokogiri (1.18.8)
mini_portile2 (~> 2.8.2)
racc (~> 1.4)
pathutil (0.16.2)
forwardable-extended (~> 2.6)
- public_suffix (5.0.3)
- racc (1.7.3)
- rake (13.1.0)
+ public_suffix (6.0.1)
+ racc (1.8.1)
+ rake (13.2.1)
rb-fsevent (0.11.2)
- rb-inotify (0.10.1)
+ rb-inotify (0.11.1)
ffi (~> 1.0)
- rexml (3.2.6)
- rouge (4.2.0)
+ rexml (3.4.0)
+ rouge (4.5.1)
safe_yaml (1.0.5)
- sass-embedded (1.69.5)
- google-protobuf (~> 3.23)
- rake (>= 13.0.0)
- slim (5.2.0)
+ sass-embedded (1.83.4)
+ google-protobuf (~> 4.29)
+ rake (>= 13)
+ slim (5.2.1)
temple (~> 0.10.0)
tilt (>= 2.1.0)
temple (0.10.3)
@@ -84,26 +95,28 @@ GEM
tilt (2.3.0)
tzinfo (2.0.6)
concurrent-ruby (~> 1.0)
- tzinfo-data (1.2023.3)
+ tzinfo-data (1.2025.2)
tzinfo (>= 1.0.0)
- unicode-display_width (2.5.0)
- wdm (0.1.1)
- webrick (1.8.1)
+ unicode-display_width (2.6.0)
+ wdm (0.2.0)
+ webrick (1.9.1)
PLATFORMS
ruby
DEPENDENCIES
- jekyll (~> 4.3.1)
+ asciidoctor
+ asciidoctor-tabs (>= 1.0.0.beta.6)
+ jekyll (~> 4.4.1)
jekyll-asciidoc
jekyll-feed (~> 0.17)
- minima (~> 2.0)
- nokogiri (~> 1.15)
- slim (~> 5.2.0)
+ minima (~> 2.5)
+ nokogiri (~> 1.18)
+ slim (~> 5.2.1)
thread_safe (~> 0.3.5)
tzinfo (~> 2.0)
tzinfo-data
- wdm (~> 0.1.0)
+ wdm (~> 0.2.0)
BUNDLED WITH
2.3.22
diff --git a/LICENSE.md b/LICENSE.md
index 3cb65d4914..4b2db9cd3d 100644
--- a/LICENSE.md
+++ b/LICENSE.md
@@ -4,7 +4,7 @@ The Raspberry Pi documentation is licensed under a [Creative Commons Attribution
# Creative Commons Attribution-ShareAlike 4.0 International
-Creative Commons Corporation (“Creative Commons”) is not a law firm and does not provide legal services or legal advice. Distribution of Creative Commons public licenses does not create a lawyer-client or other relationship. Creative Commons makes its licenses and related information available on an “as-is” basis. Creative Commons gives no warranties regarding its licenses, any material licensed under their terms and conditions, or any related information. Creative Commons disclaims all liability for damages resulting from their use to the fullest extent possible.
+Creative Commons Corporation ("Creative Commons") is not a law firm and does not provide legal services or legal advice. Distribution of Creative Commons public licenses does not create a lawyer-client or other relationship. Creative Commons makes its licenses and related information available on an "as-is" basis. Creative Commons gives no warranties regarding its licenses, any material licensed under their terms and conditions, or any related information. Creative Commons disclaims all liability for damages resulting from their use to the fullest extent possible.
### Using Creative Commons Public Licenses
@@ -12,7 +12,7 @@ Creative Commons public licenses provide a standard set of terms and conditions
* __Considerations for licensors:__ Our public licenses are intended for use by those authorized to give the public permission to use material in ways otherwise restricted by copyright and certain other rights. Our licenses are irrevocable. Licensors should read and understand the terms and conditions of the license they choose before applying it. Licensors should also secure all rights necessary before applying our licenses so that the public can reuse the material as expected. Licensors should clearly mark any material not subject to the license. This includes other CC-licensed material, or material used under an exception or limitation to copyright. [More considerations for licensors](http://wiki.creativecommons.org/Considerations_for_licensors_and_licensees#Considerations_for_licensors).
-* __Considerations for the public:__ By using one of our public licenses, a licensor grants the public permission to use the licensed material under specified terms and conditions. If the licensor’s permission is not necessary for any reason–for example, because of any applicable exception or limitation to copyright–then that use is not regulated by the license. Our licenses grant only permissions under copyright and certain other rights that a licensor has authority to grant. Use of the licensed material may still be restricted for other reasons, including because others have copyright or other rights in the material. A licensor may make special requests, such as asking that all changes be marked or described. Although not required by our licenses, you are encouraged to respect those requests where reasonable. [More considerations for the public](http://wiki.creativecommons.org/Considerations_for_licensors_and_licensees#Considerations_for_licensees).
+* __Considerations for the public:__ By using one of our public licenses, a licensor grants the public permission to use the licensed material under specified terms and conditions. If the licensor's permission is not necessary for any reason–for example, because of any applicable exception or limitation to copyright–then that use is not regulated by the license. Our licenses grant only permissions under copyright and certain other rights that a licensor has authority to grant. Use of the licensed material may still be restricted for other reasons, including because others have copyright or other rights in the material. A licensor may make special requests, such as asking that all changes be marked or described. Although not required by our licenses, you are encouraged to respect those requests where reasonable. [More considerations for the public](http://wiki.creativecommons.org/Considerations_for_licensors_and_licensees#Considerations_for_licensees).
## Creative Commons Attribution-ShareAlike 4.0 International Public License
@@ -66,7 +66,7 @@ a. ___License grant.___
A. __Offer from the Licensor – Licensed Material.__ Every recipient of the Licensed Material automatically receives an offer from the Licensor to exercise the Licensed Rights under the terms and conditions of this Public License.
- B. __Additional offer from the Licensor – Adapted Material.__ Every recipient of Adapted Material from You automatically receives an offer from the Licensor to exercise the Licensed Rights in the Adapted Material under the conditions of the Adapter’s License You apply.
+ B. __Additional offer from the Licensor – Adapted Material.__ Every recipient of Adapted Material from You automatically receives an offer from the Licensor to exercise the Licensed Rights in the Adapted Material under the conditions of the Adapter's License You apply.
C. __No downstream restrictions.__ You may not offer or impose any additional or different terms or conditions on, or apply any Effective Technological Measures to, the Licensed Material if doing so restricts exercise of the Licensed Rights by any recipient of the Licensed Material.
@@ -112,7 +112,7 @@ b. ___ShareAlike.___
In addition to the conditions in Section 3(a), if You Share Adapted Material You produce, the following conditions also apply.
-1. The Adapter’s License You apply must be a Creative Commons license with the same License Elements, this version or later, or a BY-SA Compatible License.
+1. The Adapter's License You apply must be a Creative Commons license with the same License Elements, this version or later, or a BY-SA Compatible License.
2. You must include the text of, or the URI or hyperlink to, the Adapter's License You apply. You may satisfy this condition in any reasonable manner based on the medium, means, and context in which You Share Adapted Material.
@@ -170,6 +170,6 @@ c. No term or condition of this Public License will be waived and no failure to
d. Nothing in this Public License constitutes or may be interpreted as a limitation upon, or waiver of, any privileges and immunities that apply to the Licensor or You, including from the legal processes of any jurisdiction or authority.
-> Creative Commons is not a party to its public licenses. Notwithstanding, Creative Commons may elect to apply one of its public licenses to material it publishes and in those instances will be considered the “Licensor.” The text of the Creative Commons public licenses is dedicated to the public domain under the [CC0 Public Domain Dedication](https://creativecommons.org/publicdomain/zero/1.0/legalcode). Except for the limited purpose of indicating that material is shared under a Creative Commons public license or as otherwise permitted by the Creative Commons policies published at [creativecommons.org/policies](http://creativecommons.org/policies), Creative Commons does not authorize the use of the trademark “Creative Commons” or any other trademark or logo of Creative Commons without its prior written consent including, without limitation, in connection with any unauthorized modifications to any of its public licenses or any other arrangements, understandings, or agreements concerning use of licensed material. For the avoidance of doubt, this paragraph does not form part of the public licenses.
+> Creative Commons is not a party to its public licenses. Notwithstanding, Creative Commons may elect to apply one of its public licenses to material it publishes and in those instances will be considered the "Licensor." The text of the Creative Commons public licenses is dedicated to the public domain under the [CC0 Public Domain Dedication](https://creativecommons.org/publicdomain/zero/1.0/legalcode). Except for the limited purpose of indicating that material is shared under a Creative Commons public license or as otherwise permitted by the Creative Commons policies published at [creativecommons.org/policies](http://creativecommons.org/policies), Creative Commons does not authorize the use of the trademark "Creative Commons" or any other trademark or logo of Creative Commons without its prior written consent including, without limitation, in connection with any unauthorized modifications to any of its public licenses or any other arrangements, understandings, or agreements concerning use of licensed material. For the avoidance of doubt, this paragraph does not form part of the public licenses.
>
> Creative Commons may be contacted at creativecommons.org.
\ No newline at end of file
diff --git a/Makefile b/Makefile
index 2341a582c9..711219f453 100644
--- a/Makefile
+++ b/Makefile
@@ -16,9 +16,10 @@ AUTO_NINJABUILD = $(BUILD_DIR)/autogenerated.ninja
PICO_SDK_DIR = lib/pico-sdk
PICO_EXAMPLES_DIR = lib/pico-examples
+DOXYGEN_TO_ASCIIDOC_DIR = lib/doxygentoasciidoc
ALL_SUBMODULE_CMAKELISTS = $(PICO_SDK_DIR)/CMakeLists.txt $(PICO_EXAMPLES_DIR)/CMakeLists.txt
DOXYGEN_PICO_SDK_BUILD_DIR = build-pico-sdk-docs
-DOXYGEN_HTML_DIR = $(DOXYGEN_PICO_SDK_BUILD_DIR)/docs/doxygen/html
+DOXYGEN_XML_DIR = $(DOXYGEN_PICO_SDK_BUILD_DIR)/combined/docs/doxygen/xml
# The pico-sdk here needs to match up with the "from_json" entry in index.json
ASCIIDOC_DOXYGEN_DIR = $(ASCIIDOC_DIR)/pico-sdk
@@ -26,7 +27,7 @@ JEKYLL_CMD = bundle exec jekyll
.DEFAULT_GOAL := html
-.PHONY: clean run_ninja clean_ninja html serve_html clean_html build_doxygen_html clean_doxygen_html build_doxygen_adoc clean_doxygen_adoc fetch_submodules clean_submodules clean_everything
+.PHONY: clean run_ninja clean_ninja html serve_html clean_html build_doxygen_xml clean_doxygen_xml build_doxygen_adoc clean_doxygen_adoc fetch_submodules clean_submodules clean_everything
$(BUILD_DIR):
@mkdir -p $@
@@ -50,33 +51,43 @@ $(PICO_SDK_DIR)/CMakeLists.txt $(PICO_SDK_DIR)/docs/index.h: | $(PICO_SDK_DIR)
$(PICO_EXAMPLES_DIR)/CMakeLists.txt: | $(PICO_SDK_DIR)/CMakeLists.txt $(PICO_EXAMPLES_DIR)
git submodule update --init $(PICO_EXAMPLES_DIR)
-fetch_submodules: $(ALL_SUBMODULE_CMAKELISTS)
+# Initialise doxygentoasciidoc submodule
+$(DOXYGEN_TO_ASCIIDOC_DIR)/__main__.py:
+ git submodule update --init $(DOXYGEN_TO_ASCIIDOC_DIR)
+
+fetch_submodules: $(ALL_SUBMODULE_CMAKELISTS) $(DOXYGEN_TO_ASCIIDOC_DIR)/__main__.py
# Get rid of the submodules
clean_submodules:
git submodule deinit --all
-# Create the pico-sdk Doxygen HTML files
-$(DOXYGEN_HTML_DIR): | $(ALL_SUBMODULE_CMAKELISTS) $(DOXYGEN_PICO_SDK_BUILD_DIR)
- cmake -S $(PICO_SDK_DIR) -B $(DOXYGEN_PICO_SDK_BUILD_DIR) -DPICO_EXAMPLES_PATH=../$(PICO_EXAMPLES_DIR)
- $(MAKE) -C $(DOXYGEN_PICO_SDK_BUILD_DIR) docs
- test -d "$@"
+# Create the pico-sdk Doxygen XML files
+$(DOXYGEN_XML_DIR) $(DOXYGEN_XML_DIR)/index.xml: | $(ALL_SUBMODULE_CMAKELISTS) $(DOXYGEN_PICO_SDK_BUILD_DIR)
+ cmake -S $(PICO_SDK_DIR) -B $(DOXYGEN_PICO_SDK_BUILD_DIR)/combined -D PICO_EXAMPLES_PATH=../../$(PICO_EXAMPLES_DIR) -D PICO_NO_PICOTOOL=1 -D PICO_PLATFORM=combined-docs
+ cmake -S $(PICO_SDK_DIR) -B $(DOXYGEN_PICO_SDK_BUILD_DIR)/PICO_RP2040 -D PICO_EXAMPLES_PATH=../../$(PICO_EXAMPLES_DIR) -D PICO_NO_PICOTOOL=1 -D PICO_PLATFORM=rp2040
+ cmake -S $(PICO_SDK_DIR) -B $(DOXYGEN_PICO_SDK_BUILD_DIR)/PICO_RP2350 -D PICO_EXAMPLES_PATH=../../$(PICO_EXAMPLES_DIR) -D PICO_NO_PICOTOOL=1 -D PICO_PLATFORM=rp2350
+ $(MAKE) -C $(DOXYGEN_PICO_SDK_BUILD_DIR)/combined docs
+ $(MAKE) -C $(DOXYGEN_PICO_SDK_BUILD_DIR)/PICO_RP2040 docs
+ $(MAKE) -C $(DOXYGEN_PICO_SDK_BUILD_DIR)/PICO_RP2350 docs
+ python3 $(SCRIPTS_DIR)/postprocess_doxygen_xml.py $(DOXYGEN_PICO_SDK_BUILD_DIR)
-$(DOXYGEN_PICO_SDK_BUILD_DIR)/docs/Doxyfile: | $(DOXYGEN_HTML_DIR)
+$(DOXYGEN_PICO_SDK_BUILD_DIR)/combined/docs/Doxyfile: | $(DOXYGEN_XML_DIR)
-build_doxygen_html: | $(DOXYGEN_HTML_DIR)
+build_doxygen_xml: | $(DOXYGEN_XML_DIR)
-# Clean all the Doxygen HTML files
-clean_doxygen_html:
+# Clean all the Doxygen XML files
+clean_doxygen_xml:
rm -rf $(DOXYGEN_PICO_SDK_BUILD_DIR)
-# Create the Doxygen asciidoc files
-# Also need to move index.adoc to a different name, because it conflicts with the autogenerated index.adoc
-$(ASCIIDOC_DOXYGEN_DIR)/picosdk_index.json $(ASCIIDOC_DOXYGEN_DIR)/index_doxygen.adoc: $(SCRIPTS_DIR)/transform_doxygen_html.py $(PICO_SDK_DIR)/docs/index.h $(DOXYGEN_PICO_SDK_BUILD_DIR)/docs/Doxyfile | $(DOXYGEN_HTML_DIR) $(ASCIIDOC_DOXYGEN_DIR)
+# Create the SDK asciidoc and JSON index files
+$(ASCIIDOC_DOXYGEN_DIR)/picosdk_index.json $(ASCIIDOC_DOXYGEN_DIR)/index_doxygen.adoc: $(ASCIIDOC_DOXYGEN_DIR) $(DOXYGEN_XML_DIR)/index.xml $(DOXYGEN_TO_ASCIIDOC_DIR)/__main__.py $(DOXYGEN_TO_ASCIIDOC_DIR)/cli.py $(DOXYGEN_TO_ASCIIDOC_DIR)/nodes.py $(DOXYGEN_TO_ASCIIDOC_DIR)/helpers.py | $(BUILD_DIR) $(DOXYGEN_TO_ASCIIDOC_DIR)/requirements.txt
$(MAKE) clean_ninja
- $< $(DOXYGEN_HTML_DIR) $(ASCIIDOC_DOXYGEN_DIR) $(PICO_SDK_DIR)/docs/index.h $(ASCIIDOC_DOXYGEN_DIR)/picosdk_index.json
- cp $(DOXYGEN_HTML_DIR)/*.png $(ASCIIDOC_DOXYGEN_DIR)
- mv $(ASCIIDOC_DOXYGEN_DIR)/index.adoc $(ASCIIDOC_DOXYGEN_DIR)/index_doxygen.adoc
+ pip3 install -r $(DOXYGEN_TO_ASCIIDOC_DIR)/requirements.txt
+ PYTHONPATH=$(DOXYGEN_TO_ASCIIDOC_DIR)/.. python3 -m doxygentoasciidoc -o $(ASCIIDOC_DOXYGEN_DIR)/all_groups.adoc $(DOXYGEN_XML_DIR)/index.xml
+ PYTHONPATH=$(DOXYGEN_TO_ASCIIDOC_DIR)/.. python3 -m doxygentoasciidoc -c -o $(ASCIIDOC_DOXYGEN_DIR)/index_doxygen.adoc $(DOXYGEN_XML_DIR)/indexpage.xml
+ PYTHONPATH=$(DOXYGEN_TO_ASCIIDOC_DIR)/.. python3 -m doxygentoasciidoc -c -o $(ASCIIDOC_DOXYGEN_DIR)/examples_page.adoc $(DOXYGEN_XML_DIR)/examples_page.xml
+ python3 $(SCRIPTS_DIR)/postprocess_doxygen_adoc.py $(ASCIIDOC_DOXYGEN_DIR)
+ -cp $(DOXYGEN_XML_DIR)/*.png $(ASCIIDOC_DOXYGEN_DIR) 2>/dev/null || true
build_doxygen_adoc: $(ASCIIDOC_DOXYGEN_DIR)/index_doxygen.adoc
@@ -85,7 +96,7 @@ clean_doxygen_adoc:
if [ -d $(ASCIIDOC_DOXYGEN_DIR) ]; then $(MAKE) clean_ninja; fi
rm -rf $(ASCIIDOC_DOXYGEN_DIR)
-clean_everything: clean_submodules clean_doxygen_html clean
+clean_everything: clean_submodules clean_doxygen_xml clean
# AUTO_NINJABUILD contains all the parts of the ninjabuild where the rules themselves depend on other files
$(AUTO_NINJABUILD): $(SCRIPTS_DIR)/create_auto_ninjabuild.py $(DOCUMENTATION_INDEX) $(SITE_CONFIG) | $(BUILD_DIR)
@@ -107,7 +118,7 @@ html: run_ninja
# Build the html output files and additionally run a small webserver for local previews
serve_html: run_ninja
- $(JEKYLL_CMD) serve
+ $(JEKYLL_CMD) serve --watch
# Delete all the files created by the 'html' target
clean_html:
diff --git a/README.md b/README.md
index 74cda7db86..69df3a28f8 100644
--- a/README.md
+++ b/README.md
@@ -1,175 +1,22 @@
-# Welcome to the Raspberry Pi Documentation
+
+
+
+
+
+
-This repository contains the Asciidoc source and the toolchain to build the [Raspberry Pi Documentation](https://www.raspberrypi.com/documentation/). For details of how to contribute to the documentation see the [CONTRIBUTING.md](CONTRIBUTING.md) file.
+[Website][Raspberry Pi] | [Getting started] | [Documentation] | [Contribute]
+
-**NOTE:** This repository has undergone some recent changes. See our [blog post](https://www.raspberrypi.com/blog/bring-on-the-documentation/) for more details.
+This repository contains the source and tools used to build the [Raspberry Pi Documentation](https://www.raspberrypi.com/documentation/).
-## Building the Documentation
-
-Instructions on how to checkout the `documentation` repo, and then install the toolchain needed to convert from Asciidoc to HTML and build the documentation site.
-
-### Checking out the Repository
-
-Install `git` if you don't already have it, and check out the `documentation` repo as follows,
-```
-$ git clone https://github.com/raspberrypi/documentation.git
-$ cd documentation
-```
-
-### Installing the Toolchain
-
-#### On Linux
-
-This works on both regular Debian or Ubuntu Linux — and has been tested in a minimal Docker container — and also under Raspberry Pi OS if you are working from a Raspberry Pi.
-
-You can install the necessary dependencies on Linux as follows,
-
-```
-$ sudo apt install -y ruby ruby-dev python3 python3-pip make ninja-build
-```
-
-then add these lines to the bottom of your `$HOME/.bashrc`,
-```
-export GEM_HOME="$(ruby -e 'puts Gem.user_dir')"
-export PATH="$PATH:$GEM_HOME/bin"
-```
-
-and close and relaunch your Terminal window to have these new variables activated. Finally, run
-```
-$ gem install bundler
-```
-to install the latest version of the Ruby `bundle` command.
-
-#### On macOS
-
-If you don't already have it, install the [Homebrew](https://brew.sh/) package manager:
-
-```
-$ /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
-```
-
-Next, install Ruby:
-
-```
-$ brew install ruby
-```
-
-And install the [Ruby bundler](https://bundler.io/):
-
-```
-$ gem install bundler
-```
-
-##### Set up Homebrew Version of Ruby
-
-Because macOS provides its own version of Ruby, Homebrew doesn't automatically set up symlinks to access the version you just installed with the `ruby` command. But after a successful install, Homebrew outputs the commands you'll need to run to set up the symlink yourself. If you use the default macOS `zsh` shell on Apple Silicon, you can set up the symlink with the following command:
-
-```
-$ echo 'export PATH="/opt/homebrew/opt/ruby/bin:$PATH"' >> ~/.zshrc
-```
-
-If you run macOS on an Intel-based Mac, replace `opt/homebrew` with `usr/local` in the above command.
-
-If you run a shell other than the default, check which config file to modify for the command. For instance, `bash` uses `~/.bashrc` or `~/.bash_profile`.
-
-Once you've made the changes to your shell configuration, open a new terminal instance and run the following command:
-
-```
-$ ruby --version
-```
-
-You should see output similar to the following:
-
-```
-ruby 3.2.2 (2023-03-30 revision e51014f9c0) [arm64-darwin22]
-```
-
-As long as you see a Ruby version greater than or equal to 3.2.2, you've succeeded.
-
-##### Install Homebrew Dependencies
-
-Next, use Homebrew to install the other dependencies.
-Start with the latest version of Python:
-
-```
-$ brew install python@3
-```
-
-Then install the [Ninja build system](https://formulae.brew.sh/formula/ninja#default):
-
-```
-$ brew install ninja
-```
-
-Then install the [Gumbo HTML5 parser](https://formulae.brew.sh/formula/gumbo-parser#default):
-
-```
-$ brew install gumbo-parser
-```
-
-And finally, install the [YAML module for Python 3](https://formulae.brew.sh/formula/pyyaml#default):
-
-```
-$ pip3 install pyyaml
-```
-
-Now you've installed all of the dependencies you'll need from Homebrew.
-
-### Install Scripting Dependencies
-
-After installing the toolchain, install the required Ruby gems and Python modules. Make sure you're in the top-level directory of this repository (the one containing `Gemfile.lock` and `requirements.txt`), and run the following command to install the Ruby gems (this may take several minutes):
-
-```
-$ bundle install
-```
-
-Then, run the following command to install the remaining required Python modules:
-
-```
-$ pip3 install --user -r requirements.txt
-```
-
-### Building the Documentation Site
-
-After you've installed both the toolchain and scripting dependencies, you can build the documentation with the following command:
-
-```
-$ make
-```
-
-This automatically uses [Ninja build](https://ninja-build.org/) to convert the source files in `documentation/asciidoc/` to a suitable intermediate structure in `build/jekyll/` and then uses [Jekyll AsciiDoc](https://github.com/asciidoctor/jekyll-asciidoc) to convert the files in `build/jekyll/` to the final output HTML files in `documentation/html/`.
-
-You can also start a local server to view the built site:
-
-```
-$ make serve_html
-```
-
-As the local server launches, the local URL will be printed in the terminal -- open this URL in a browser to see the locally-built site.
-
-You can also use `make` to delete the `build/` and `documentation/html/` directories:
-
-```
-$ make clean
-```
-
-### Building with Doxygen
-
-If you want to build the Pico C SDK Doxygen documentation alongside the main documentation site you can do so with,
-
-```
-$ make build_doxygen_adoc
-$ make
-```
-
-and clean up afterwards by using,
-
-```
-$ make clean_everything
-```
-
-which will revert the repository to a pristine state.
+[Raspberry Pi]: https://www.raspberrypi.com/
+[Getting Started]: https://www.raspberrypi.com/documentation/computers/getting-started.html
+[Documentation]: https://www.raspberrypi.com/documentation/
+[Contribute]: CONTRIBUTING.md
## Licence
-The Raspberry Pi [documentation](./documentation/) is [licensed](https://github.com/raspberrypi/documentation/blob/develop/LICENSE.md) under a Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA). While the toolchain source code — which is everything outside of the top-level `documentation/` subdirectory — is Copyright © 2021 Raspberry Pi Ltd and licensed under the [BSD 3-Clause](https://opensource.org/licenses/BSD-3-Clause) licence.
+The Raspberry Pi documentation is [licensed](https://github.com/raspberrypi/documentation/blob/develop/LICENSE.md) under a Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA) licence. Documentation tools (everything outside of the `documentation/` subdirectory) are licensed under the [BSD 3-Clause](https://opensource.org/licenses/BSD-3-Clause) licence.
diff --git a/_config.yml b/_config.yml
index acfa472d3d..4d740515b5 100644
--- a/_config.yml
+++ b/_config.yml
@@ -17,14 +17,16 @@ title: Raspberry Pi Documentation
description: >- # this means to ignore newlines until "baseurl:"
Raspberry Pi Documentation.
baseurl: "/documentation" # the subpath of your site, e.g. /blog
-url: "" # the base hostname & protocol for your site, e.g. http://example.com
+url: "https://www.raspberrypi.com/documentation" # the base hostname & protocol for your site, e.g. http://example.com
githuburl: "https://github.com/raspberrypi/documentation/"
+mainsite: https://raspberrypi.com/
githubbranch: master
githubbranch_edit: develop
# Build settings
theme: minima
plugins:
+ - asciidoctor-tabs
- jekyll-asciidoc
- jekyll-feed
diff --git a/documentation/asciidoc/accessories/ai-camera.adoc b/documentation/asciidoc/accessories/ai-camera.adoc
new file mode 100644
index 0000000000..55d35cba57
--- /dev/null
+++ b/documentation/asciidoc/accessories/ai-camera.adoc
@@ -0,0 +1,7 @@
+include::ai-camera/about.adoc[]
+
+include::ai-camera/getting-started.adoc[]
+
+include::ai-camera/details.adoc[]
+
+include::ai-camera/model-conversion.adoc[]
diff --git a/documentation/asciidoc/accessories/ai-camera/about.adoc b/documentation/asciidoc/accessories/ai-camera/about.adoc
new file mode 100644
index 0000000000..927fcf19ab
--- /dev/null
+++ b/documentation/asciidoc/accessories/ai-camera/about.adoc
@@ -0,0 +1,9 @@
+[[ai-camera]]
+== About
+
+The Raspberry Pi AI Camera uses the Sony IMX500 imaging sensor to provide low-latency, high-performance AI capabilities to any camera application. Tight integration with xref:../computers/camera_software.adoc[Raspberry Pi's camera software stack] allows users to deploy their own neural network models with minimal effort.
+
+image::images/ai-camera.png[The Raspberry Pi AI Camera]
+
+This section demonstrates how to run either a pre-packaged or custom neural network model on the camera. Additionally, this section includes the steps required to interpret inference data generated by neural networks running on the IMX500 in https://github.com/raspberrypi/rpicam-apps[`rpicam-apps`] and https://github.com/raspberrypi/picamera2[Picamera2].
+
diff --git a/documentation/asciidoc/accessories/ai-camera/details.adoc b/documentation/asciidoc/accessories/ai-camera/details.adoc
new file mode 100644
index 0000000000..e640f289c9
--- /dev/null
+++ b/documentation/asciidoc/accessories/ai-camera/details.adoc
@@ -0,0 +1,262 @@
+
+== Under the hood
+
+=== Overview
+
+The Raspberry Pi AI Camera works differently from traditional AI-based camera image processing systems, as shown in the diagram below:
+
+image::images/imx500-comparison.svg[Traditional versus IMX500 AI camera systems]
+
+The left side demonstrates the architecture of a traditional AI camera system. In such a system, the camera delivers images to the Raspberry Pi. The Raspberry Pi processes the images and then performs AI inference. Traditional systems may use external AI accelerators (as shown) or rely exclusively on the CPU.
+
+The right side demonstrates the architecture of a system that uses IMX500. The camera module contains a small Image Signal Processor (ISP) which turns the raw camera image data into an **input tensor**. The camera module sends this tensor directly into the AI accelerator within the camera, which produces **output tensors** that contain the inferencing results. The AI accelerator sends these tensors to the Raspberry Pi. There is no need for an external accelerator, nor for the Raspberry Pi to run neural network software on the CPU.
+
+To fully understand this system, familiarise yourself with the following concepts:
+
+Input Tensor:: The part of the sensor image passed to the AI engine for inferencing. Produced by a small on-board ISP which also crops and scales the camera image to the dimensions expected by the neural network that has been loaded. The input tensor is not normally made available to applications, though it is possible to access it for debugging purposes.
+
+Region of Interest (ROI):: Specifies exactly which part of the sensor image is cropped out before being rescaled to the size demanded by the neural network. Can be queried and set by an application. The units used are always pixels in the full resolution sensor output. The default ROI setting uses the full image received from the sensor, cropping no data.
+
+Output Tensors:: The results of inferencing performed by the neural network. The precise number and shape of the outputs depend on the neural network. Application code must understand how to handle the tensors.
+
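The ROI behaviour described above can be sketched in a few lines of Python. This is an illustration of the geometry only (the function name and the centred-crop choice are assumptions), not the on-sensor ISP implementation:

```python
def auto_roi(sensor_w, sensor_h, tensor_w, tensor_h):
    """Centred crop of the sensor image matching the input tensor's aspect
    ratio, expressed in full-resolution sensor pixels (the units the ROI
    control uses)."""
    sensor_ar = sensor_w / sensor_h
    tensor_ar = tensor_w / tensor_h
    if sensor_ar > tensor_ar:
        # Sensor is wider than the tensor: crop the sides.
        w, h = round(sensor_h * tensor_ar), sensor_h
    else:
        # Sensor is taller than the tensor: crop top and bottom.
        w, h = sensor_w, round(sensor_w / tensor_ar)
    return ((sensor_w - w) // 2, (sensor_h - h) // 2, w, h)

# 4056x3040 sensor, 320x320 input tensor: a square, centred crop.
print(auto_roi(4056, 3040, 320, 320))  # (508, 0, 3040, 3040)
```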
+=== System architecture
+
+The diagram below shows the various camera software components (in green) used during our imaging/inference use case with the Raspberry Pi AI Camera module hardware (in red):
+
+image::images/imx500-block-diagram.svg[IMX500 block diagram]
+
+At startup, the IMX500 sensor module loads firmware to run a particular neural network model. During streaming, the IMX500 generates _both_ an image stream and an inference stream. This inference stream holds the inputs and outputs of the neural network model, also known as input/output **tensors**.
+
+=== Device drivers
+
+At the lowest level, the IMX500 sensor kernel driver configures the camera module over the I2C bus. The CSI2 driver (`CFE` on Pi 5, `Unicam` on all other Pi platforms) sets up the receiver to write the image data stream into a frame buffer, together with the embedded data and inference data streams into another buffer in memory.
+
+Firmware files are also transferred over the I2C bus. On most devices this uses the standard I2C protocol, but Raspberry Pi 5 uses a custom high-speed protocol. The RP2040 SPI driver in the kernel handles firmware file transfer, since the transfer uses the RP2040 microcontroller. The microcontroller bridges the I2C transfers from the kernel to the IMX500 via a SPI bus. Additionally, the RP2040 caches firmware files in on-board storage. This avoids the need to transfer entire firmware blobs over the I2C bus, significantly speeding up loading for firmware you've already used.
+
+=== `libcamera`
+
+Once `libcamera` dequeues the image and inference data buffers from the kernel, the IMX500-specific `cam-helper` library (part of the Raspberry Pi IPA within `libcamera`) parses the inference buffer to access the input/output tensors. These tensors are packaged as Raspberry Pi vendor-specific https://libcamera.org/api-html/namespacelibcamera_1_1controls.html[`libcamera` controls]. `libcamera` returns the following controls:
+
+[%header,cols="a,a"]
+|===
+| Control
+| Description
+
+| `CnnOutputTensor`
+| Floating point array storing the output tensors.
+
+| `CnnInputTensor`
+| Floating point array storing the input tensor.
+
+| `CnnOutputTensorInfo`
+| Network specific parameters describing the output tensors' structure:
+
+[source,c]
+----
+struct OutputTensorInfo {
+ uint32_t tensorDataNum;
+ uint32_t numDimensions;
+ uint16_t size[MaxNumDimensions];
+};
+
+struct CnnOutputTensorInfo {
+ char networkName[NetworkNameLen];
+ uint32_t numTensors;
+ OutputTensorInfo info[MaxNumTensors];
+};
+----
+
+| `CnnInputTensorInfo`
+| Network specific parameters describing the input tensor's structure:
+
+[source,c]
+----
+struct CnnInputTensorInfo {
+ char networkName[NetworkNameLen];
+ uint32_t width;
+ uint32_t height;
+ uint32_t numChannels;
+};
+----
+
+|===
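To give a feel for how an application might decode `CnnOutputTensorInfo`, the sketch below unpacks one `OutputTensorInfo` record from raw bytes. The packed little-endian layout and the constant value are assumptions for illustration; the authoritative encoding is defined by the Raspberry Pi IPA headers:

```python
import struct

MAX_NUM_DIMENSIONS = 8  # illustrative value, not taken from the real headers

def parse_output_tensor_info(buf):
    """Unpack one OutputTensorInfo record: tensorDataNum, numDimensions,
    then a fixed-size array of uint16 dimension sizes."""
    fmt = f"<II{MAX_NUM_DIMENSIONS}H"
    tensor_data_num, num_dims, *sizes = struct.unpack_from(fmt, buf)
    return tensor_data_num, sizes[:num_dims]

# A fabricated record: 2100 elements arranged as (1, 100, 21).
record = struct.pack(f"<II{MAX_NUM_DIMENSIONS}H", 2100, 3, 1, 100, 21, 0, 0, 0, 0, 0)
print(parse_output_tensor_info(record))  # (2100, [1, 100, 21])
```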
+
+=== `rpicam-apps`
+
+`rpicam-apps` provides an IMX500 post-processing stage base class that implements helpers for IMX500 post-processing stages: https://github.com/raspberrypi/rpicam-apps/blob/main/post_processing_stages/imx500/imx500_post_processing_stage.hpp[`IMX500PostProcessingStage`]. Use this base class to derive a new post-processing stage for any neural network model running on the IMX500. For an example, see https://github.com/raspberrypi/rpicam-apps/blob/main/post_processing_stages/imx500/imx500_object_detection.cpp[`imx500_object_detection.cpp`]:
+
+[source,cpp]
+----
+class ObjectDetection : public IMX500PostProcessingStage
+{
+public:
+ ObjectDetection(RPiCamApp *app) : IMX500PostProcessingStage(app) {}
+
+ char const *Name() const override;
+
+    void Read(boost::property_tree::ptree const &params) override;
+
+ void Configure() override;
+
+ bool Process(CompletedRequestPtr &completed_request) override;
+};
+----
+
+For every frame received by the application, the `Process()` function is called (`ObjectDetection::Process()` in the above case). In this function, you can extract the output tensor for further processing or analysis:
+
+[source,cpp]
+----
+auto output = completed_request->metadata.get(controls::rpi::CnnOutputTensor);
+if (!output)
+{
+ LOG_ERROR("No output tensor found in metadata!");
+ return false;
+}
+
+std::vector<float> output_tensor(output->data(), output->data() + output->size());
+----
+
+Once processing completes, the final results can be visualised, or saved in metadata to be consumed by another downstream stage or by the top-level application itself. In the object inference case:
+
+[source,cpp]
+----
+if (objects.size())
+ completed_request->post_process_metadata.Set("object_detect.results", objects);
+----
+
+The `object_detect_draw_cv` post-processing stage running downstream fetches these results from the metadata and draws the bounding boxes onto the image in the `ObjectDetectDrawCvStage::Process()` function:
+
+[source,cpp]
+----
+std::vector<Detection> detections;
+completed_request->post_process_metadata.Get("object_detect.results", detections);
+----
+
+The following table contains a full list of helper functions provided by `IMX500PostProcessingStage`:
+
+[%header,cols="a,a"]
+|===
+| Function
+| Description
+
+| `Read()`
+| Typically called from `::Read()`, this function reads the config parameters for input tensor parsing and saving.
+
+This function also reads the neural network model file string (`"network_file"`) and sets up the firmware to be loaded on camera open.
+
+| `Process()`
+| Typically called from `::Process()`, this function processes and saves the input tensor to a file if required by the JSON config file.
+
+| `SetInferenceRoiAbs()`
+| Sets an absolute region of interest (ROI) crop rectangle on the sensor image to use for inferencing on the IMX500.
+
+| `SetInferenceRoiAuto()`
+| Automatically calculates a region of interest (ROI) crop rectangle on the sensor image to preserve the input tensor aspect ratio for a given neural network.
+
+| `ShowFwProgressBar()`
+| Displays a progress bar on the console showing the progress of the neural network firmware upload to the IMX500.
+
+| `ConvertInferenceCoordinates()`
+| Converts from the input tensor coordinate space to the final ISP output image space.
+
+There are a number of scaling/cropping/translation operations occurring from the original sensor image to the fully processed ISP output image. This function converts coordinates provided by the output tensor to the equivalent coordinates after performing these operations.
+
+|===
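The coordinate conversion performed by `ConvertInferenceCoordinates()` can be approximated as undoing the ROI crop/rescale, then applying the sensor-to-output scaling. The sketch below simplifies by assuming the ISP scales the full sensor frame to the output size; the function and parameter names are illustrative, not the `rpicam-apps` API:

```python
def tensor_to_isp_coords(box, roi, isp_w, isp_h, sensor_w, sensor_h):
    """Map a normalised (x0, y0, x1, y1) box from input-tensor space to
    ISP output pixels."""
    rx, ry, rw, rh = roi  # ROI in full-resolution sensor pixels
    x0, y0, x1, y1 = box  # normalised [0, 1] tensor coordinates
    # Tensor space -> sensor pixels (undo the on-sensor crop and rescale)...
    sx0, sy0 = rx + x0 * rw, ry + y0 * rh
    sx1, sy1 = rx + x1 * rw, ry + y1 * rh
    # ...then sensor pixels -> ISP output pixels.
    fx, fy = isp_w / sensor_w, isp_h / sensor_h
    return (round(sx0 * fx), round(sy0 * fy), round(sx1 * fx), round(sy1 * fy))

# Centre box over a full-frame ROI, 1920x1080 output from a 4056x3040 sensor:
print(tensor_to_isp_coords((0.25, 0.25, 0.75, 0.75),
                           (0, 0, 4056, 3040), 1920, 1080, 4056, 3040))
# (480, 270, 1440, 810)
```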
+
+=== Picamera2
+
+IMX500 integration in Picamera2 is very similar to what is available in `rpicam-apps`. Picamera2 has an IMX500 helper class that provides the same functionality as the `rpicam-apps` `IMX500PostProcessingStage` base class. This can be imported to any Python script with:
+
+[source,python]
+----
+from picamera2.devices.imx500 import IMX500
+
+# This must be called before instantiation of Picamera2
+imx500 = IMX500(model_file)
+----
+
+To retrieve the output tensors, fetch them from the controls. You can then apply additional processing in your Python script.
+
+For example, in an object inference use case such as https://github.com/raspberrypi/picamera2/tree/main/examples/imx500/imx500_object_detection_demo.py[imx500_object_detection_demo.py], the object bounding boxes and confidence values are extracted in `parse_detections()`, and the boxes are drawn on the image in `draw_detections()`:
+
+[source,python]
+----
+class Detection:
+ def __init__(self, coords, category, conf, metadata):
+ """Create a Detection object, recording the bounding box, category and confidence."""
+ self.category = category
+ self.conf = conf
+ obj_scaled = imx500.convert_inference_coords(coords, metadata, picam2)
+ self.box = (obj_scaled.x, obj_scaled.y, obj_scaled.width, obj_scaled.height)
+
+def draw_detections(request, detections, stream="main"):
+ """Draw the detections for this request onto the ISP output."""
+ labels = get_labels()
+ with MappedArray(request, stream) as m:
+ for detection in detections:
+ x, y, w, h = detection.box
+ label = f"{labels[int(detection.category)]} ({detection.conf:.2f})"
+ cv2.putText(m.array, label, (x + 5, y + 15), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1)
+ cv2.rectangle(m.array, (x, y), (x + w, y + h), (0, 0, 255, 0))
+ if args.preserve_aspect_ratio:
+ b = imx500.get_roi_scaled(request)
+ cv2.putText(m.array, "ROI", (b.x + 5, b.y + 15), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 0, 0), 1)
+ cv2.rectangle(m.array, (b.x, b.y), (b.x + b.width, b.y + b.height), (255, 0, 0, 0))
+
+def parse_detections(request, stream='main'):
+ """Parse the output tensor into a number of detected objects, scaled to the ISP output."""
+    metadata = request.get_metadata()
+    outputs = imx500.get_outputs(metadata)
+    boxes, scores, classes = outputs[0][0], outputs[1][0], outputs[2][0]
+    detections = [Detection(box, category, score, metadata)
+                  for box, score, category in zip(boxes, scores, classes) if score > threshold]
+ draw_detections(request, detections, stream)
+----
+
+Unlike the `rpicam-apps` example, this example applies no additional hysteresis or temporal filtering.
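The idea behind that hysteresis can be sketched with a toy filter: a detection only becomes visible after several consecutive hits, and only disappears after several consecutive misses. This is a hypothetical illustration of the concept, not the `rpicam-apps` implementation:

```python
class HysteresisFilter:
    """Flip the reported state only after a streak of frames disagrees
    with it, suppressing single-frame flicker in noisy detections."""

    def __init__(self, on_frames=3, off_frames=5):
        self.on_frames, self.off_frames = on_frames, off_frames
        self.streak = 0
        self.visible = False

    def update(self, detected):
        if detected == self.visible:
            self.streak = 0  # input agrees with current state: reset streak
        else:
            self.streak += 1
            limit = self.off_frames if self.visible else self.on_frames
            if self.streak >= limit:
                self.visible = not self.visible
                self.streak = 0
        return self.visible

f = HysteresisFilter(on_frames=2, off_frames=3)
states = [f.update(bool(d)) for d in [1, 1, 1, 0, 0, 1, 0, 0, 0]]
print(states)  # the two-frame dropout at frames 4-5 is suppressed
```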
+
+The IMX500 class in Picamera2 provides the following helper functions:
+
+[%header,cols="a,a"]
+|===
+| Function
+| Description
+
+| `IMX500.get_full_sensor_resolution()`
+| Returns the full sensor resolution of the IMX500.
+
+| `IMX500.config`
+| Returns a dictionary of the neural network configuration.
+
+| `IMX500.convert_inference_coords(coords, metadata, picamera2)`
+| Converts the coordinates _coords_ from the input tensor coordinate space to the final ISP output image space. Must be passed Picamera2's image metadata for the image, and the Picamera2 object.
+
+There are a number of scaling/cropping/translation operations occurring from the original sensor image to the fully processed ISP output image. This function converts coordinates provided by the output tensor to the equivalent coordinates after performing these operations.
+
+| `IMX500.show_network_fw_progress_bar()`
+| Displays a progress bar on the console showing the progress of the neural network firmware upload to the IMX500.
+
+| `IMX500.get_roi_scaled(request)`
+| Returns the region of interest (ROI) in the ISP output image coordinate space.
+
+| `IMX500.get_isp_output_size(picamera2)`
+| Returns the ISP output image size.
+
+| `IMX500.get_input_size()`
+| Returns the input tensor size based on the neural network model used.
+
+| `IMX500.get_outputs(metadata)`
+| Returns the output tensors from the Picamera2 image metadata.
+
+| `IMX500.get_output_shapes(metadata)`
+| Returns the shape of the output tensors from the Picamera2 image metadata for the neural network model used.
+
+| `IMX500.set_inference_roi_abs(rectangle)`
+| Sets the region of interest (ROI) crop rectangle which determines which part of the sensor image is converted to the input tensor that is used for inferencing on the IMX500. The region of interest should be specified in units of pixels at the full sensor resolution, as a `(x_offset, y_offset, width, height)` tuple.
+
+| `IMX500.set_inference_aspect_ratio(aspect_ratio)`
+| Automatically calculates a region of interest (ROI) crop rectangle on the sensor image to preserve the given aspect ratio. To make the ROI aspect ratio exactly match the input tensor for this network, use `imx500.set_inference_aspect_ratio(imx500.get_input_size())`.
+
+| `IMX500.get_kpi_info(metadata)`
+| Returns the frame-level performance indicators logged by the IMX500 for the given image metadata.
+
+|===
diff --git a/documentation/asciidoc/accessories/ai-camera/getting-started.adoc b/documentation/asciidoc/accessories/ai-camera/getting-started.adoc
new file mode 100644
index 0000000000..b237208957
--- /dev/null
+++ b/documentation/asciidoc/accessories/ai-camera/getting-started.adoc
@@ -0,0 +1,141 @@
+== Getting started
+
+The instructions below describe how to run the pre-packaged MobileNet SSD and PoseNet neural network models on the Raspberry Pi AI Camera.
+
+=== Hardware setup
+
+Attach the camera to your Raspberry Pi board following the instructions at xref:../accessories/camera.adoc#install-a-raspberry-pi-camera[Install a Raspberry Pi Camera].
+
+=== Prerequisites
+
+These instructions assume you are using the AI Camera attached to either a Raspberry Pi 4 Model B or Raspberry Pi 5 board. With minor changes, you can follow these instructions on other Raspberry Pi models with a camera connector, including the Raspberry Pi Zero 2 W and Raspberry Pi 3 Model B+.
+
+First, ensure that your Raspberry Pi runs the latest software. Run the following command to update:
+
+[source,console]
+----
+$ sudo apt update && sudo apt full-upgrade
+----
+
+=== Install the IMX500 firmware
+
+The AI camera must download runtime firmware onto the IMX500 sensor during startup. To install these firmware files onto your Raspberry Pi, run the following command:
+
+[source,console]
+----
+$ sudo apt install imx500-all
+----
+
+This command:
+
+* installs the `/lib/firmware/imx500_loader.fpk` and `/lib/firmware/imx500_firmware.fpk` firmware files required to operate the IMX500 sensor
+* places a number of neural network model firmware files in `/usr/share/imx500-models/`
+* installs the IMX500 post-processing software stages in `rpicam-apps`
+* installs the Sony network model packaging tools
+
+NOTE: The IMX500 kernel device driver loads all the firmware files when the camera starts. This may take several minutes if the neural network model firmware has not been previously cached. The demos below display a progress bar on the console to indicate firmware loading progress.
+
+=== Reboot
+
+Now that you've installed the prerequisites, restart your Raspberry Pi:
+
+[source,console]
+----
+$ sudo reboot
+----
+
+== Run example applications
+
+Once all the system packages are updated and firmware files installed, you can start running some example applications. As mentioned earlier, the Raspberry Pi AI Camera integrates fully with `libcamera`, `rpicam-apps`, and `Picamera2`.
+
+=== `rpicam-apps`
+
+The xref:../computers/camera_software.adoc#rpicam-apps[`rpicam-apps` camera applications] include IMX500 object detection and pose estimation stages that can be run in the post-processing pipeline. For more information about the post-processing pipeline, see xref:../computers/camera_software.adoc#post-process-file[the post-processing documentation].
+
+The examples on this page use post-processing JSON files located in `/usr/share/rpi-camera-assets/`.
+
+==== Object detection
+
+The MobileNet SSD neural network performs basic object detection, providing bounding boxes and confidence values for each object found. `imx500_mobilenet_ssd.json` contains the configuration parameters for the IMX500 object detection post-processing stage using the MobileNet SSD neural network.
+
+`imx500_mobilenet_ssd.json` declares a post-processing pipeline that contains two stages:
+
+. `imx500_object_detection`, which picks out bounding boxes and confidence values generated by the neural network in the output tensor
+. `object_detect_draw_cv`, which draws bounding boxes and labels on the image
+
+The MobileNet SSD tensor requires no significant post-processing on your Raspberry Pi to generate the final output of bounding boxes. All object detection runs directly on the AI Camera.
+
+The following command runs `rpicam-hello` with object detection post-processing:
+
+[source,console]
+----
+$ rpicam-hello -t 0s --post-process-file /usr/share/rpi-camera-assets/imx500_mobilenet_ssd.json --viewfinder-width 1920 --viewfinder-height 1080 --framerate 30
+----
+
+After running the command, you should see a viewfinder that overlays bounding boxes on objects recognised by the neural network:
+
+image::images/imx500-mobilenet.jpg[IMX500 MobileNet]
+
+To record video with object detection overlays, use `rpicam-vid` instead:
+
+[source,console]
+----
+$ rpicam-vid -t 10s -o output.264 --post-process-file /usr/share/rpi-camera-assets/imx500_mobilenet_ssd.json --width 1920 --height 1080 --framerate 30
+----
+
+You can configure the `imx500_object_detection` stage in many ways.
+
+For example, `max_detections` defines the maximum number of objects that the pipeline will detect at any given time. `threshold` defines the minimum confidence value required for the pipeline to consider any input as an object.
+
+The raw inference output data of this network can be quite noisy, so this stage also performs some temporal filtering and applies hysteresis. To disable this filtering, remove the `temporal_filter` config block.
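Putting these parameters together, a configuration of roughly this shape drives the stage. The snippet below is illustrative only; the field names and values are assumptions sketched from the descriptions above, so consult the installed `/usr/share/rpi-camera-assets/imx500_mobilenet_ssd.json` for the real schema:

```json
{
    "imx500_object_detection": {
        "max_detections": 5,
        "threshold": 0.6,
        "network_file": "/usr/share/imx500-models/imx500_network_ssd_mobilenetv2_fpnlite_320x320_pp.rpk",
        "temporal_filter": {
            "tolerance": 0.1,
            "factor": 0.2
        }
    },
    "object_detect_draw_cv": {
        "line_thickness": 2
    }
}
```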
+
+==== Pose estimation
+
+The PoseNet neural network performs pose estimation, labelling key points on the body associated with joints and limbs. `imx500_posenet.json` contains the configuration parameters for the IMX500 pose estimation post-processing stage using the PoseNet neural network.
+
+`imx500_posenet.json` declares a post-processing pipeline that contains two stages:
+
+* `imx500_posenet`, which fetches the raw output tensor from the PoseNet neural network
+* `plot_pose_cv`, which draws line overlays on the image
+
+The AI Camera performs basic detection, but the output tensor requires additional post-processing on your host Raspberry Pi to produce final output.
+
+The following command runs `rpicam-hello` with pose estimation post-processing:
+
+[source,console]
+----
+$ rpicam-hello -t 0s --post-process-file /usr/share/rpi-camera-assets/imx500_posenet.json --viewfinder-width 1920 --viewfinder-height 1080 --framerate 30
+----
+
+image::images/imx500-posenet.jpg[IMX500 PoseNet]
+
+You can configure the `imx500_posenet` stage in many ways.
+
+For example, `max_detections` defines the maximum number of bodies that the pipeline will detect at any given time. `threshold` defines the minimum confidence value required for the pipeline to consider input as a body.
+
+=== Picamera2
+
+For examples of image classification, object detection, object segmentation, and pose estimation using Picamera2, see https://github.com/raspberrypi/picamera2/blob/main/examples/imx500/[the `picamera2` GitHub repository].
+
+Most of the examples use OpenCV for some additional processing. To install OpenCV and the other dependencies the examples require, run the following command:
+
+[source,console]
+----
+$ sudo apt install python3-opencv python3-munkres
+----
+
+Now download https://github.com/raspberrypi/picamera2[the `picamera2` repository] to your Raspberry Pi to run the examples. You'll find example files in the root directory, with additional information in the `README.md` file.
+
+Run the following script from the repository to run MobileNet SSD object detection:
+
+[source,console]
+----
+$ python imx500_object_detection_demo.py --model /usr/share/imx500-models/imx500_network_ssd_mobilenetv2_fpnlite_320x320_pp.rpk
+----
+
+To try pose estimation in Picamera2, run the following script from the repository:
+
+[source,console]
+----
+$ python imx500_pose_estimation_higherhrnet_demo.py
+----
diff --git a/documentation/asciidoc/accessories/ai-camera/images/ai-camera.png b/documentation/asciidoc/accessories/ai-camera/images/ai-camera.png
new file mode 100644
index 0000000000..a0186287cb
Binary files /dev/null and b/documentation/asciidoc/accessories/ai-camera/images/ai-camera.png differ
diff --git a/documentation/asciidoc/accessories/ai-camera/images/imx500-block-diagram.svg b/documentation/asciidoc/accessories/ai-camera/images/imx500-block-diagram.svg
new file mode 100644
index 0000000000..142854adb0
--- /dev/null
+++ b/documentation/asciidoc/accessories/ai-camera/images/imx500-block-diagram.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/documentation/asciidoc/accessories/ai-camera/images/imx500-comparison.svg b/documentation/asciidoc/accessories/ai-camera/images/imx500-comparison.svg
new file mode 100644
index 0000000000..5355ecb23d
--- /dev/null
+++ b/documentation/asciidoc/accessories/ai-camera/images/imx500-comparison.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/documentation/asciidoc/accessories/ai-camera/images/imx500-mobilenet.jpg b/documentation/asciidoc/accessories/ai-camera/images/imx500-mobilenet.jpg
new file mode 100644
index 0000000000..871f7b9eb0
Binary files /dev/null and b/documentation/asciidoc/accessories/ai-camera/images/imx500-mobilenet.jpg differ
diff --git a/documentation/asciidoc/accessories/ai-camera/images/imx500-posenet.jpg b/documentation/asciidoc/accessories/ai-camera/images/imx500-posenet.jpg
new file mode 100644
index 0000000000..0c145d748b
Binary files /dev/null and b/documentation/asciidoc/accessories/ai-camera/images/imx500-posenet.jpg differ
diff --git a/documentation/asciidoc/accessories/ai-camera/model-conversion.adoc b/documentation/asciidoc/accessories/ai-camera/model-conversion.adoc
new file mode 100644
index 0000000000..ce272ee5b9
--- /dev/null
+++ b/documentation/asciidoc/accessories/ai-camera/model-conversion.adoc
@@ -0,0 +1,104 @@
+== Model deployment
+
+To deploy a new neural network model to the Raspberry Pi AI Camera, complete the following steps:
+
+. Provide a neural network model.
+. Quantise and compress the model so that it can run using the resources available on the IMX500 camera module.
+. Convert the compressed model to IMX500 format.
+. Package the model into a firmware file that can be loaded at runtime onto the camera.
+
+The first three steps will normally be performed on a more powerful computer such as a desktop or server. You must run the final packaging step on a Raspberry Pi.
+
+=== Model creation
+
+The creation of neural network models is beyond the scope of this guide. Existing models can be re-used, or new ones created using popular frameworks like TensorFlow or PyTorch.
+
+For more information, see the official https://developer.aitrios.sony-semicon.com/en/raspberrypi-ai-camera[AITRIOS developer website].
+
+=== Quantisation and compression
+
+Models are quantised and compressed using Sony's Model Compression Toolkit. To install the toolkit, run the following command:
+
+[source,console]
+----
+$ pip install model_compression_toolkit
+----
+
+For more information, see the https://github.com/sony/model_optimization[Sony model optimization GitHub repository].
+
+The Model Compression Toolkit generates a quantised model in the following formats:
+
+* Keras (TensorFlow)
+* ONNX (PyTorch)
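
The toolkit's post-training quantisation consumes a representative-dataset generator: a zero-argument callable that yields lists of input batches for calibration. Below is a minimal pure-Python sketch of that pattern; the MCT call itself is shown commented out, because it requires TensorFlow and `model_compression_toolkit` to be installed, and the names `float_model` and `calibration_batches` are placeholders.

```python
# Sketch of the representative-dataset pattern expected by MCT's
# post-training quantisation: a callable returning a generator that
# yields lists of model inputs.
def make_representative_dataset(calibration_batches):
    def generator():
        for batch in calibration_batches:
            # Each yielded item is a list, one entry per model input
            yield [batch]
    return generator

# import model_compression_toolkit as mct
# quantized_model, quantization_info = mct.ptq.keras_post_training_quantization(
#     float_model, make_representative_dataset(calibration_batches))
```
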
+
+=== Conversion
+
+To convert a model, first install the converter tools:
+
+[tabs]
+======
+TensorFlow::
++
+[source,console]
+----
+$ pip install imx500-converter[tf]
+----
++
+TIP: Always use the same version of TensorFlow you used to compress your model.
+
+PyTorch::
++
+[source,console]
+----
+$ pip install imx500-converter[pt]
+----
+======
+
+If you need to install both packages, use two separate Python virtual environments. This prevents TensorFlow and PyTorch from causing conflicts with one another.
+
+Next, convert the model:
+
+[tabs]
+======
+TensorFlow::
++
+[source,console]
+----
+$ imxconv-tf -i <compressed Keras model> -o <output folder>
+----
+
+PyTorch::
++
+[source,console]
+----
+$ imxconv-pt -i <quantised ONNX model> -o <output folder>
+----
+======
+
+Both commands create an output folder that contains a memory usage report and a `packerOut.zip` file.
+
+For optimal use of the memory available to the accelerator on the IMX500 sensor, add `--no-input-persistency` to the above commands. However, this disables the generation of input tensors and their return to the application for debugging purposes.
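
As a summary of the flags above, the converter invocation can be sketched as simple command construction. This helper is hypothetical, not part of the Sony tooling; it only assembles the command lines described in this section.

```python
# Hypothetical helper: build the imxconv command line described above.
def converter_command(framework, model_path, out_dir, input_persistency=True):
    tool = {"tensorflow": "imxconv-tf", "pytorch": "imxconv-pt"}[framework]
    cmd = [tool, "-i", model_path, "-o", out_dir]
    if not input_persistency:
        # Trades debugging convenience for on-sensor memory
        cmd.append("--no-input-persistency")
    return cmd
```
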
+
+For more information on the model conversion process, see the official https://developer.aitrios.sony-semicon.com/en/raspberrypi-ai-camera/documentation/imx500-converter[Sony IMX500 Converter documentation].
+
+=== Packaging
+
+IMPORTANT: You must run this step on a Raspberry Pi.
+
+The final step packages the model into an RPK file. When running the neural network model, we'll upload this file to the AI Camera. Before proceeding, run the following command to install the necessary tools:
+
+[source,console]
+----
+$ sudo apt install imx500-tools
+----
+
+To package the model into an RPK file, run the following command:
+
+[source,console]
+----
+$ imx500-package -i <path to packerOut.zip> -o <output folder>
+----
+
+This command should create a file named `network.rpk` in the output folder. You'll pass the name of this file to your IMX500 camera applications.
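
A deployment script might want to confirm the packaging step succeeded before handing the file to an application. The helper below is hypothetical (not part of the `imx500-tools` package); it simply locates the first `.rpk` file in the output folder.

```python
from pathlib import Path

# Hypothetical helper: locate the .rpk file that imx500-package
# wrote into its output folder.
def find_rpk(output_dir):
    """Return the first .rpk file in output_dir, or None if none exists."""
    rpks = sorted(Path(output_dir).glob("*.rpk"))
    return rpks[0] if rpks else None
```
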
+
+For a more comprehensive set of instructions and further specifics on the tools used, see the https://developer.aitrios.sony-semicon.com/en/raspberrypi-ai-camera/documentation/imx500-packager[Sony IMX500 Packager documentation].
diff --git a/documentation/asciidoc/accessories/ai-hat-plus.adoc b/documentation/asciidoc/accessories/ai-hat-plus.adoc
new file mode 100644
index 0000000000..dc6a3a7cfe
--- /dev/null
+++ b/documentation/asciidoc/accessories/ai-hat-plus.adoc
@@ -0,0 +1,5 @@
+include::ai-hat-plus/about.adoc[]
+
+== Product brief
+
+For more information about the AI HAT+, including mechanical specifications and operating environment limitations, see the https://datasheets.raspberrypi.com/ai-hat-plus/raspberry-pi-ai-hat-plus-product-brief.pdf[product brief].
diff --git a/documentation/asciidoc/accessories/ai-hat-plus/about.adoc b/documentation/asciidoc/accessories/ai-hat-plus/about.adoc
new file mode 100644
index 0000000000..98f1923bf5
--- /dev/null
+++ b/documentation/asciidoc/accessories/ai-hat-plus/about.adoc
@@ -0,0 +1,75 @@
+[[ai-hat-plus]]
+== About
+
+.The 26 tera-operations per second (TOPS) Raspberry Pi AI HAT+
+image::images/ai-hat-plus-hero.jpg[width="80%"]
+
+The Raspberry Pi AI HAT+ add-on board has a built-in Hailo AI accelerator compatible with
+Raspberry Pi 5. The NPU in the AI HAT+ can be used for applications including process control, security, home automation, and robotics.
+
+The AI HAT+ is available in 13 and 26 tera-operations per second (TOPS) variants, built around the Hailo-8L and Hailo-8 neural network inference accelerators. The 13 TOPS variant works best with moderate workloads, with performance similar to the xref:ai-kit.adoc[AI Kit]. The 26 TOPS variant can run larger networks, can run networks faster, and can more effectively run multiple networks simultaneously.
+
+The AI HAT+ communicates using Raspberry Pi 5’s PCIe interface. The host Raspberry Pi 5 automatically detects the on-board Hailo accelerator and uses the NPU for supported AI computing tasks. Raspberry Pi OS's built-in `rpicam-apps` camera applications automatically use the NPU to run compatible post-processing tasks.
+
+[[ai-hat-plus-installation]]
+== Install
+
+To use the AI HAT+, you will need:
+
+* a Raspberry Pi 5
+
+Each AI HAT+ comes with a ribbon cable, GPIO stacking header, and mounting hardware. Complete the following instructions to install your AI HAT+:
+
+. First, ensure that your Raspberry Pi runs the latest software. Run the following command to update:
++
+[source,console]
+----
+$ sudo apt update && sudo apt full-upgrade
+----
+
+. Next, xref:../computers/raspberry-pi.adoc#update-the-bootloader-configuration[ensure that your Raspberry Pi firmware is up-to-date]. Run the following command to see what firmware you're running:
++
+[source,console]
+----
+$ sudo rpi-eeprom-update
+----
++
+If you see 6 December 2023 or a later date, proceed to the next step. If you see a date earlier than 6 December 2023, run the following command to open the Raspberry Pi Configuration CLI:
++
+[source,console]
+----
+$ sudo raspi-config
+----
++
+Under `Advanced Options` > `Bootloader Version`, choose `Latest`. Then, exit `raspi-config` with `Finish` or the *Escape* key.
++
+Run the following command to update your firmware to the latest version:
++
+[source,console]
+----
+$ sudo rpi-eeprom-update -a
+----
++
+Then, reboot with `sudo reboot`.
+
+. Disconnect the Raspberry Pi from power before beginning installation.
+
+. For the best performance, we recommend using the AI HAT+ with the Raspberry Pi Active Cooler. If you have an Active Cooler, install it before installing the AI HAT+.
++
+--
+image::images/ai-hat-plus-installation-01.png[width="60%"]
+--
+. Install the spacers using four of the provided screws. Firmly press the GPIO stacking header on top of the Raspberry Pi GPIO pins; orientation does not matter as long as all pins fit into place. Disconnect the ribbon cable from the AI HAT+, and insert the other end into the PCIe port of your Raspberry Pi. Lift the ribbon cable holder from both sides, then insert the cable with the copper contact points facing inward, towards the USB ports. With the ribbon cable fully and evenly inserted into the PCIe port, push the cable holder down from both sides to secure the ribbon cable firmly in place.
++
+--
+image::images/ai-hat-plus-installation-02.png[width="60%"]
+--
+. Set the AI HAT+ on top of the spacers, and use the four remaining screws to secure it in place.
+
+. Insert the ribbon cable into the slot on the AI HAT+. Lift the ribbon cable holder from both sides, then insert the cable with the copper contact points facing up. With the ribbon cable fully and evenly inserted into the port, push the cable holder down from both sides to secure the ribbon cable firmly in place.
+
+. Congratulations, you have successfully installed the AI HAT+. Connect your Raspberry Pi to power; Raspberry Pi OS will automatically detect the AI HAT+.
+
+== Get started with AI on your Raspberry Pi
+
+To start running AI accelerated applications on your Raspberry Pi, check out our xref:../computers/ai.adoc[Getting Started with the AI Kit and AI HAT+] guide.
diff --git a/documentation/asciidoc/accessories/ai-hat-plus/images/ai-hat-plus-hero.jpg b/documentation/asciidoc/accessories/ai-hat-plus/images/ai-hat-plus-hero.jpg
new file mode 100644
index 0000000000..08064ca25a
Binary files /dev/null and b/documentation/asciidoc/accessories/ai-hat-plus/images/ai-hat-plus-hero.jpg differ
diff --git a/documentation/asciidoc/accessories/ai-hat-plus/images/ai-hat-plus-installation-01.png b/documentation/asciidoc/accessories/ai-hat-plus/images/ai-hat-plus-installation-01.png
new file mode 100644
index 0000000000..33fb88280e
Binary files /dev/null and b/documentation/asciidoc/accessories/ai-hat-plus/images/ai-hat-plus-installation-01.png differ
diff --git a/documentation/asciidoc/accessories/ai-hat-plus/images/ai-hat-plus-installation-02.png b/documentation/asciidoc/accessories/ai-hat-plus/images/ai-hat-plus-installation-02.png
new file mode 100644
index 0000000000..b2a60016ae
Binary files /dev/null and b/documentation/asciidoc/accessories/ai-hat-plus/images/ai-hat-plus-installation-02.png differ
diff --git a/documentation/asciidoc/accessories/ai-kit.adoc b/documentation/asciidoc/accessories/ai-kit.adoc
new file mode 100644
index 0000000000..c5d54d1d43
--- /dev/null
+++ b/documentation/asciidoc/accessories/ai-kit.adoc
@@ -0,0 +1,6 @@
+include::ai-kit/about.adoc[]
+
+== Product brief
+
+For more information about the AI Kit, including mechanical specifications and operating environment limitations, see the https://datasheets.raspberrypi.com/ai-kit/raspberry-pi-ai-kit-product-brief.pdf[product brief].
+
diff --git a/documentation/asciidoc/accessories/ai-kit/about.adoc b/documentation/asciidoc/accessories/ai-kit/about.adoc
new file mode 100644
index 0000000000..bc93a483f5
--- /dev/null
+++ b/documentation/asciidoc/accessories/ai-kit/about.adoc
@@ -0,0 +1,93 @@
+[[ai-kit]]
+== About
+
+.The Raspberry Pi AI Kit
+image::images/ai-kit.jpg[width="80%"]
+
+The Raspberry Pi AI Kit bundles the xref:m2-hat-plus.adoc#m2-hat-plus[Raspberry Pi M.2 HAT+] with a Hailo AI acceleration module for use with Raspberry Pi 5. The kit contains the following:
+
+* Hailo AI module containing a Neural Processing Unit (NPU)
+* Raspberry Pi M.2 HAT+, to connect the AI module to your Raspberry Pi 5
+* thermal pad pre-fitted between the module and the M.2 HAT+
+* mounting hardware kit
+* 16mm stacking GPIO header
+
+== AI module features
+
+* 13 tera-operations per second (TOPS) neural network inference accelerator built around the Hailo-8L chip.
+* M.2 2242 form factor
+
+[[ai-kit-installation]]
+== Install
+
+To use the AI Kit, you will need:
+
+* a Raspberry Pi 5
+
+Each AI Kit comes with a pre-installed AI module, ribbon cable, GPIO stacking header, and mounting hardware. Complete the following instructions to install your AI Kit:
+
+. First, ensure that your Raspberry Pi runs the latest software. Run the following command to update:
++
+[source,console]
+----
+$ sudo apt update && sudo apt full-upgrade
+----
+
+. Next, xref:../computers/raspberry-pi.adoc#update-the-bootloader-configuration[ensure that your Raspberry Pi firmware is up-to-date]. Run the following command to see what firmware you're running:
++
+[source,console]
+----
+$ sudo rpi-eeprom-update
+----
++
+If you see 6 December 2023 or a later date, proceed to the next step. If you see a date earlier than 6 December 2023, run the following command to open the Raspberry Pi Configuration CLI:
++
+[source,console]
+----
+$ sudo raspi-config
+----
++
+Under `Advanced Options` > `Bootloader Version`, choose `Latest`. Then, exit `raspi-config` with `Finish` or the *Escape* key.
++
+Run the following command to update your firmware to the latest version:
++
+[source,console]
+----
+$ sudo rpi-eeprom-update -a
+----
++
+Then, reboot with `sudo reboot`.
+
+. Disconnect the Raspberry Pi from power before beginning installation.
+
+. For the best performance, we recommend using the AI Kit with the Raspberry Pi Active Cooler. If you have an Active Cooler, install it before installing the AI Kit.
++
+--
+image::images/ai-kit-installation-01.png[width="60%"]
+--
+. Install the spacers using four of the provided screws. Firmly press the GPIO stacking header on top of the Raspberry Pi GPIO pins; orientation does not matter as long as all pins fit into place. Disconnect the ribbon cable from the AI Kit, and insert the other end into the PCIe port of your Raspberry Pi. Lift the ribbon cable holder from both sides, then insert the cable with the copper contact points facing inward, towards the USB ports. With the ribbon cable fully and evenly inserted into the PCIe port, push the cable holder down from both sides to secure the ribbon cable firmly in place.
++
+--
+image::images/ai-kit-installation-02.png[width="60%"]
+--
+. Set the AI Kit on top of the spacers, and use the four remaining screws to secure it in place.
++
+--
+image::images/ai-kit-installation-03.png[width="60%"]
+--
+. Insert the ribbon cable into the slot on the AI Kit. Lift the ribbon cable holder from both sides, then insert the cable with the copper contact points facing up. With the ribbon cable fully and evenly inserted into the port, push the cable holder down from both sides to secure the ribbon cable firmly in place.
++
+--
+image::images/ai-kit-installation-04.png[width="60%"]
+--
+. Congratulations, you have successfully installed the AI Kit. Connect your Raspberry Pi to power; Raspberry Pi OS will automatically detect the AI Kit.
++
+--
+image::images/ai-kit-installation-05.png[width="60%"]
+--
+
+WARNING: Always disconnect your Raspberry Pi from power before connecting or disconnecting a device from the M.2 slot.
+
+== Get started with AI on your Raspberry Pi
+
+To start running AI accelerated applications on your Raspberry Pi, check out our xref:../computers/ai.adoc[Getting Started with the AI Kit and AI HAT+] guide.
diff --git a/documentation/asciidoc/accessories/ai-kit/images/ai-kit-installation-01.png b/documentation/asciidoc/accessories/ai-kit/images/ai-kit-installation-01.png
new file mode 100644
index 0000000000..33fb88280e
Binary files /dev/null and b/documentation/asciidoc/accessories/ai-kit/images/ai-kit-installation-01.png differ
diff --git a/documentation/asciidoc/accessories/ai-kit/images/ai-kit-installation-02.png b/documentation/asciidoc/accessories/ai-kit/images/ai-kit-installation-02.png
new file mode 100644
index 0000000000..b2a60016ae
Binary files /dev/null and b/documentation/asciidoc/accessories/ai-kit/images/ai-kit-installation-02.png differ
diff --git a/documentation/asciidoc/accessories/ai-kit/images/ai-kit-installation-03.png b/documentation/asciidoc/accessories/ai-kit/images/ai-kit-installation-03.png
new file mode 100644
index 0000000000..2e821583c7
Binary files /dev/null and b/documentation/asciidoc/accessories/ai-kit/images/ai-kit-installation-03.png differ
diff --git a/documentation/asciidoc/accessories/ai-kit/images/ai-kit-installation-04.png b/documentation/asciidoc/accessories/ai-kit/images/ai-kit-installation-04.png
new file mode 100644
index 0000000000..7bf45e8162
Binary files /dev/null and b/documentation/asciidoc/accessories/ai-kit/images/ai-kit-installation-04.png differ
diff --git a/documentation/asciidoc/accessories/ai-kit/images/ai-kit-installation-05.png b/documentation/asciidoc/accessories/ai-kit/images/ai-kit-installation-05.png
new file mode 100644
index 0000000000..67b0d969a2
Binary files /dev/null and b/documentation/asciidoc/accessories/ai-kit/images/ai-kit-installation-05.png differ
diff --git a/documentation/asciidoc/accessories/ai-kit/images/ai-kit.jpg b/documentation/asciidoc/accessories/ai-kit/images/ai-kit.jpg
new file mode 100644
index 0000000000..d519b0ff43
Binary files /dev/null and b/documentation/asciidoc/accessories/ai-kit/images/ai-kit.jpg differ
diff --git a/documentation/asciidoc/accessories/audio.adoc b/documentation/asciidoc/accessories/audio.adoc
index 7c4fd154b0..87e227f58f 100644
--- a/documentation/asciidoc/accessories/audio.adoc
+++ b/documentation/asciidoc/accessories/audio.adoc
@@ -1,4 +1,3 @@
-
include::audio/introduction.adoc[]
include::audio/dac_pro.adoc[]
@@ -16,4 +15,3 @@ include::audio/getting_started.adoc[]
include::audio/hardware-info.adoc[]
include::audio/update-firmware.adoc[]
-
diff --git a/documentation/asciidoc/accessories/audio/codec_zero.adoc b/documentation/asciidoc/accessories/audio/codec_zero.adoc
index 9739adf5f2..cfb9dd967b 100644
--- a/documentation/asciidoc/accessories/audio/codec_zero.adoc
+++ b/documentation/asciidoc/accessories/audio/codec_zero.adoc
@@ -22,6 +22,7 @@ The Codec Zero includes an EEPROM which can be used for auto-configuration of th
In addition to the green (GPIO23) and red (GPIO24) LEDs, a tactile programmable button (GPIO27) is also provided.
==== Pinouts
+
[cols="1,12"]
|===
| *P1/2* | Support external PHONO/RCA sockets if needed. P1: AUX IN, P2: AUX OUT.
diff --git a/documentation/asciidoc/accessories/audio/configuration.adoc b/documentation/asciidoc/accessories/audio/configuration.adoc
index 0c8f80f84b..79a5d2136e 100644
--- a/documentation/asciidoc/accessories/audio/configuration.adoc
+++ b/documentation/asciidoc/accessories/audio/configuration.adoc
@@ -6,42 +6,51 @@ image::images/gui.png[]
There are a number of third-party audio software applications available for Raspberry Pi that will support the plug-and-play feature of our audio boards. Often these are used headless. They can be controlled via a PC or Mac application, or by a web server installed on Raspberry Pi, with interaction through a webpage.
-If you need to configure Raspberry Pi OS yourself, perhaps if you're running a headless system of your own and don't have the option of control via the GUI, you will need to make your Raspberry Pi audio board the primary audio device in Raspberry Pi OS, disabling the Raspberry Pi’s on-board audio device. This is done by editing the xref:../computers/config_txt.adoc#what-is-config-txt[`/boot/firmware/config.txt`] file. Using a Terminal session connected to your Raspberry Pi via SSH, run the following command to edit the file:
+If you need to configure Raspberry Pi OS yourself, perhaps if you're running a headless system of your own and don't have the option of control via the GUI, you will need to make your Raspberry Pi audio board the primary audio device in Raspberry Pi OS, disabling the Raspberry Pi's on-board audio device. This is done by editing the xref:../computers/config_txt.adoc#what-is-config-txt[`/boot/firmware/config.txt`] file. Using a Terminal session connected to your Raspberry Pi via SSH, run the following command to edit the file:
+[source,console]
----
$ sudo nano /boot/firmware/config.txt
----
Find the `dtparam=audio=on` line in the file and comment it out by placing a # symbol at the start of the line. Anything written after the # symbol in any given line will be disregarded by the program. Your `/boot/firmware/config.txt` file should now contain the following entry:
+[source,ini]
----
#dtparam=audio=on
----
-Press CTRL+X, then Y and Enter to save, followed by a reboot of your Raspberry Pi in order for the settings to take effect.
+Press `Ctrl+X`, then the `Y` key, then *Enter* to save. Finally, reboot your Raspberry Pi in order for the settings to take effect.
+[source,console]
----
$ sudo reboot
----
Alternatively, the `/boot/firmware/config.txt` file can be edited directly onto the Raspberry Pi's microSD card, inserted into your usual computer. Using the default file manager, open the `/boot/firmware/` volume on the card and edit the `config.txt` file using an appropriate text editor, then save the file, eject the microSD card and reinsert it back into your Raspberry Pi.
-=== Attaching the HAT
+=== Attach the HAT
-The Raspberry Pi audio boards attach to the Raspberry Pi’s 40-pin header. They are designed to be supported on the Raspberry Pi using the supplied circuit board standoffs and screws. No soldering is required on the Raspberry Pi audio boards for normal operation unless you are using hardwired connections for specific connectors such as XLR (External Line Return) connections on the DAC Pro.
+The Raspberry Pi audio boards attach to the Raspberry Pi's 40-pin header. They are designed to be supported on the Raspberry Pi using the supplied circuit board standoffs and screws. No soldering is required on the Raspberry Pi audio boards for normal operation unless you are using hardwired connections for specific connectors such as XLR (External Line Return) connections on the DAC Pro.
All the necessary mounting hardware including spacers, screws and connectors is provided. The PCB spacers should be screwed, finger-tight only, to the Raspberry Pi before adding the audio board. The remaining screws should then be screwed into the spacers from above.
=== Hardware versions
-There are multiple versions of the audio cards, and the version that you possess determines the actions required to configure it. Older IQaudIO-marked boards (black PCB) are electrically equivalent to the Raspberry Pi-branded boards (green PCB) but have different EEPROM contents. The following command can be used to confirm which version you have:
+There are multiple versions of the audio cards. Your specific version determines the actions required for configuration. Older, IQaudIO-branded boards have a black PCB. Newer Raspberry Pi-branded boards have a green PCB. These boards are electrically equivalent, but have different EEPROM contents.
+After attaching the HAT and applying power, check that the power LED on your audio card is illuminated, if it has one. For example, the Codec Zero has an LED marked `PWR`.
+
+After establishing the card has power, use the following command to check the version of your board:
+
+[source,console]
----
$ grep -a . /proc/device-tree/hat/*
----
If the vendor string says "Raspberry Pi Ltd." then no further action is needed (but see below for the extra Codec Zero configuration). If it says "IQaudIO Limited www.iqaudio.com" then you will need the additional config.txt settings outlined below. If it says "No such file or directory" then the HAT is not being detected, but these config.txt settings may still make it work.
+[source,ini]
----
# Some magic to prevent the normal HAT overlay from being loaded
dtoverlay=
@@ -63,7 +72,7 @@ Each input and output device has its own mixer, allowing the audio levels and vo
independently. Within the codec itself, other mixers and switches exist to allow the output to be mixed to a single mono channel for single-speaker output. Signals may also be inverted; there is a five-band equaliser to adjust certain frequency bands. These settings can be controlled interactively, using AlsaMixer, or programmatically.
Both the AUX IN and AUX OUT are 1V RMS. It may be necessary to adjust
-the AUX IN’s mixer to ensure that the input signal doesn’t saturate the ADC. Similarly, the output mixers can be to be adjusted to get the best possible output.
+the AUX IN's mixer to ensure that the input signal doesn't saturate the ADC. Similarly, the output mixers can be adjusted to get the best possible output.
Preconfigured scripts (loadable ALSA settings) https://github.com/raspberrypi/Pi-Codec[are available on GitHub], offering:
@@ -74,32 +83,52 @@ Preconfigured scripts (loadable ALSA settings) https://github.com/raspberrypi/Pi
The Codec Zero needs to know which of these input and output settings are being used each time the Raspberry Pi powers on. Using a Terminal session on your Raspberry Pi, run the following command to download the scripts:
+[source,console]
----
$ git clone https://github.com/raspberrypi/Pi-Codec.git
----
If git is not installed, run the following command to install it:
+[source,console]
----
$ sudo apt install git
----
The following command will set your device to use the on-board MEMS microphone and output for speaker playback:
+[source,console]
----
-$ sudo alsactl restore -f /home/pi/Pi-Codec/Codec_Zero_OnboardMIC_record_and_SPK_playback.state
+$ sudo alsactl restore -f /home/<username>/Pi-Codec/Codec_Zero_OnboardMIC_record_and_SPK_playback.state
----
+This command may print warning messages such as the following:
+
+* "failed to import hw"
+* "No state is present for card"
+
+In most cases these warnings are harmless, and you can safely ignore them.
+
+However, a "Remote I/O error" (`REMOTEIO`) indicates that the kernel cannot communicate with an I2C device. This warning may be a symptom of a hardware failure.
+
In order for your project to operate with your required settings when it is powered on, edit the `/etc/rc.local` file. The contents of this file are run at the end of every boot process, so it is ideal for this purpose. Edit the file:
+[source,console]
----
$ sudo nano /etc/rc.local
----
-Add the chosen script command above the exit 0 line and then Ctrl X, Y and Enter to save. The file should now look similar to this depending on your chosen setting:
+Add the chosen script command above the `exit 0` line, then press `Ctrl+X`, then the `Y` key, then *Enter* to save. The file should now look similar to the following, depending on your chosen setting:
+[source,bash]
----
-#!/bin/sh -e
+#!/bin/sh
#
# rc.local
#
@@ -112,19 +141,21 @@ Add the chosen script command above the exit 0 line and then Ctrl X, Y and Enter
#
# By default this script does nothing.
-sudo alsactl restore -f /home/pi/Pi-Codec/Codec_Zero_OnboardMIC_record_and_SPK_playback.state
+sudo alsactl restore -f /home/<username>/Pi-Codec/Codec_Zero_OnboardMIC_record_and_SPK_playback.state
exit 0
----
-Ctrl X, Y and Enter to save and reboot your device for the settings to take effect:
+Press `Ctrl+X`, then the `Y` key, then *Enter* to save. Reboot for the settings to take effect:
+[source,console]
----
$ sudo reboot
----
If you are using your Raspberry Pi and Codec Zero in a headless environment, there is one final step required to make the Codec Zero the default audio device without access to the GUI audio settings on the desktop. We need to create a small file in your home folder:
+[source,console]
----
$ sudo nano .asoundrc
----
@@ -138,13 +169,24 @@ pcm.!default {
}
----
-Ctrl X, Y and Enter to save, and reboot once more to complete the configuration:
+Press `Ctrl+X`, then the `Y` key, then *Enter* to save. Reboot once more to complete the configuration:
+
+Modern Linux distributions such as Raspberry Pi OS typically use PulseAudio or PipeWire to manage audio. These frameworks mix and switch audio from multiple sources, and provide a high-level API that most audio applications use by default.
+
+Only create `~/.asoundrc` if an audio application needs to:
+
+* communicate directly with ALSA
+* run in an environment where PulseAudio or PipeWire are not present
+
+This file can interfere with the UI's view of the underlying audio resources. As a result, we do not recommend creating `~/.asoundrc` when running the Raspberry Pi OS desktop; the desktop UI may automatically remove this file if it exists.
+
+[source,console]
----
$ sudo reboot
----
-=== Muting and unmuting the DigiAMP{plus}
+=== Mute and unmute the DigiAMP{plus}
The DigiAMP{plus} mute state is toggled by GPIO22 on Raspberry Pi. The latest audio device tree
supports the unmute of the DigiAMP{plus} through additional parameters.
@@ -153,12 +195,14 @@ Firstly a "one-shot" unmute when kernel module loads.
For Raspberry Pi boards:
+[source,ini]
----
dtoverlay=rpi-digiampplus,unmute_amp
----
For IQaudIO boards:
+[source,ini]
----
dtoverlay=iqaudio-digiampplus,unmute_amp
----
@@ -169,12 +213,14 @@ window will cancel mute.)
For Raspberry Pi boards:
+[source,ini]
----
dtoverlay=rpi-digiampplus,auto_mute_amp
----
For IQaudIO boards:
+[source,ini]
----
dtoverlay=iqaudio-digiampplus,auto_mute_amp
----
@@ -184,14 +230,16 @@ solution.
The amp will start up muted. To unmute the amp:
+[source,console]
----
$ sudo sh -c "echo 22 > /sys/class/gpio/export"
$ sudo sh -c "echo out >/sys/class/gpio/gpio22/direction"
$ sudo sh -c "echo 1 >/sys/class/gpio/gpio22/value"
----
-to mute the amp once more:
+To mute the amp once more:
+[source,console]
----
$ sudo sh -c "echo 0 >/sys/class/gpio/gpio22/value"
----
diff --git a/documentation/asciidoc/accessories/audio/dac_plus.adoc b/documentation/asciidoc/accessories/audio/dac_plus.adoc
index 1d4324f10b..dbef84b71e 100644
--- a/documentation/asciidoc/accessories/audio/dac_plus.adoc
+++ b/documentation/asciidoc/accessories/audio/dac_plus.adoc
@@ -7,6 +7,7 @@ image::images/DAC+_Board_Diagram.jpg[width="80%"]
A Texas Instruments PCM5122 is used in the DAC{plus} to deliver analogue audio to the phono connectors of the device. It also supports a dedicated headphone amplifier and is powered via the Raspberry Pi through the GPIO header.
==== Pinouts
+
[cols="1,12"]
|===
| *P1* | Analogue out (0-2V RMS), carries GPIO27, MUTE signal (headphone detect), left and right
diff --git a/documentation/asciidoc/accessories/audio/dac_pro.adoc b/documentation/asciidoc/accessories/audio/dac_pro.adoc
index de360f4438..2e8c444a5b 100644
--- a/documentation/asciidoc/accessories/audio/dac_pro.adoc
+++ b/documentation/asciidoc/accessories/audio/dac_pro.adoc
@@ -11,6 +11,7 @@ dedicated headphone amplifier. The DAC Pro is powered by a Raspberry Pi through
As part of the DAC Pro, two three-pin headers (P7/P9) are exposed above the Raspberry Pi's USB and Ethernet ports for use by the optional XLR board, allowing differential/balanced output.
==== Pinouts
+
[cols="1,12"]
|===
| *P1* | Analogue out (0-2V RMS), carries GPIO27, MUTE signal (headphone detect), left and right
@@ -22,8 +23,8 @@ audio and left and right ground.
==== Optional XLR Board
-The Pi-DAC PRO exposes a 6 pin header used by the optional XLR board to provide Differential / Balanced output exposed by XLR sockets above the Pi’s USB/Ethernet ports.
+The Pi-DAC PRO exposes a 6 pin header used by the optional XLR board to provide Differential / Balanced output exposed by XLR sockets above the Pi's USB/Ethernet ports.
image::images/optional_xlr_board.jpg[width="80%"]
-An XLR connector is used in Studio and some hi-end hifi systems. It can also be used to drive ACTIVE “monitor” speakers as used at discos or on stage.
+XLR connectors are used in studios and some high-end hi-fi systems. They can also be used to drive active "monitor" speakers of the kind used at discos or on stage.
diff --git a/documentation/asciidoc/accessories/audio/digiamp_plus.adoc b/documentation/asciidoc/accessories/audio/digiamp_plus.adoc
index a2d816e9f5..51347778ec 100644
--- a/documentation/asciidoc/accessories/audio/digiamp_plus.adoc
+++ b/documentation/asciidoc/accessories/audio/digiamp_plus.adoc
@@ -6,7 +6,7 @@ DigiAMP{plus} uses the Texas Instruments TAS5756M PowerDAC and must be powered f
image::images/DigiAMP+_Board_Diagram.jpg[width="80%"]
-DigiAMP{plus}’s power in barrel connector is 5.5mm x 2.5mm.
+DigiAMP{plus}'s power in barrel connector is 5.5mm × 2.5mm.
At power-on, the amplifier is muted by default (the mute LED is illuminated). Software is responsible for the mute state and LED control (Raspberry Pi GPIO22).
diff --git a/documentation/asciidoc/accessories/audio/getting_started.adoc b/documentation/asciidoc/accessories/audio/getting_started.adoc
index 114b7c5eb3..7efbd7f9a3 100644
--- a/documentation/asciidoc/accessories/audio/getting_started.adoc
+++ b/documentation/asciidoc/accessories/audio/getting_started.adoc
@@ -1,6 +1,6 @@
== Getting started
-=== Creating a toy chatter box
+=== Create a toy chatter box
As an example of what Raspberry Pi Audio Boards can do, let's walk through the creation of a toy chatter box. Its on-board microphone, programmable button and speaker driver make the Codec Zero an ideal choice for this application.
@@ -16,22 +16,24 @@ image::images/Chatterbox_Labels.png[width="80%"]
Use a small flat-head screwdriver to attach your speaker to the screw terminals. For the additional push button, solder the button wires directly to the Codec Zero pads as indicated, using GPIO pin 27 and Ground for the switch, and +3.3V and Ground for the LED, if necessary.
-=== Setting up your Raspberry Pi
+=== Set up your Raspberry Pi
In this example, we are using Raspberry Pi OS Lite. Refer to our guide on xref:../computers/getting-started.adoc#installing-the-operating-system[installing Raspberry Pi OS] for more details.
Make sure that you update your operating system before proceeding and follow the instructions provided for Codec Zero configuration, including the commands to enable the on-board microphone and speaker output.
-=== Programming your Raspberry Pi
+=== Program your Raspberry Pi
Open a shell — for instance by connecting via SSH — on your Raspberry Pi and run the following to create our Python script:
+[source,console]
----
$ sudo nano chatter_box.py
----
-Adding the following to the file:
+Add the following to the file, replacing `<username>` with your username:
+[source,python]
----
#!/usr/bin/env python3
from gpiozero import Button
@@ -48,18 +50,18 @@ print(f"{date}")
# Make sure that the 'sounds' folder exists, and if it does not, create it
-path = '/home/pi/sounds'
+path = '/home/<username>/sounds'
isExist = os.path.exists(path)
if not isExist:
os.makedirs(path)
print("The new directory is created!")
- os.system('chmod 777 -R /home/pi/sounds')
+ os.system('chmod 777 -R /home/<username>/sounds')
# Download a 'burp' sound if it does not already exist
-burp = '/home/pi/burp.wav'
+burp = '/home/<username>/burp.wav'
isExist = os.path.exists(burp)
if not isExist:
@@ -81,18 +83,18 @@ def released():
print("Released at %s after %.2f seconds" % (release_time, pressed_for))
if pressed_for < button.hold_time:
print("This is a short press")
- randomfile = random.choice(os.listdir("/home/pi/sounds/"))
- file = '/home/pi/sounds/' + randomfile
+ randomfile = random.choice(os.listdir("/home/<username>/sounds/"))
+ file = '/home/<username>/sounds/' + randomfile
os.system('aplay ' + file)
elif pressed_for > 20:
os.system('aplay ' + burp)
print("Erasing all recorded sounds")
- os.system('rm /home/pi/sounds/*');
+ os.system('rm /home/<username>/sounds/*')
def held():
print("This is a long press")
os.system('aplay ' + burp)
- os.system('arecord --format S16_LE --duration=5 --rate 48000 -c2 /home/pi/sounds/$(date +"%d_%m_%Y-%H_%M_%S")_voice.m4a');
+ os.system('arecord --format S16_LE --duration=5 --rate 48000 -c2 /home/<username>/sounds/$(date +"%d_%m_%Y-%H_%M_%S")_voice.wav')
button.when_pressed = pressed
button.when_released = released
@@ -102,31 +104,33 @@ pause()
----
-Ctrl X, Y and Enter to save. To make the script executable, type the following:
+Press `Ctrl+X`, then the `Y` key, then *Enter* to save. To make the script executable, type the following:
+[source,console]
----
$ sudo chmod +x chatter_box.py
----
-Enter the following to create a crontab daemon that will automatically start the script each time the device is powered on:
+Next, we need to create a crontab entry that will automatically start the script each time the device is powered on. Run the following command to open your crontab for editing:
+[source,console]
----
$ crontab -e
----
-You will be asked to select an editor; we recommend you use `nano`. Select it by entering the corresponding number, and press Enter to continue. The following line should be added to the bottom of the file:
+You will be asked to select an editor; we recommend you use `nano`. Select it by entering the corresponding number, and press Enter to continue. The following line should be added to the bottom of the file, replacing `<username>` with your username:
----
-@reboot python /home/pi/chatter_box.py
+@reboot python /home/<username>/chatter_box.py
----
-Ctrl X, Y and Enter to save, then reboot your device.
+Press `Ctrl+X`, then `Y`, then *Enter* to save, then reboot your device with `sudo reboot`.
-=== Operating your device
+=== Use the toy chatter box
The final step is to check that everything operates as expected. Press and hold the button, releasing it when you hear the burp; recording will now begin for a period of five seconds. Once you have released the button, press it briefly again to hear the recording. Repeat this process as many times as you wish, and your sounds will be played back at random. To delete all recordings, press and hold the button, keep it pressed through the first burp and the recording process, and release it after at least 20 seconds; you will hear another burp sound confirming that the recordings have been deleted.
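The press-duration logic described above can be sketched as a small, self-contained Python helper. This is an illustrative sketch only: `classify_press` is a hypothetical function, and the `hold_time` default here is an assumption, since the real threshold is set on the `Button` object in `chatter_box.py`.

```python
def classify_press(pressed_for, hold_time=5):
    """Map a button press duration (in seconds) to an action.

    Mirrors the behaviour described above: holding the button past
    hold_time records a new five-second sound, holding it for more
    than 20 seconds erases all recordings, and a short press plays
    a stored sound back at random.
    """
    if pressed_for > 20:
        return "erase"
    if pressed_for >= hold_time:
        return "record"
    return "play"

print(classify_press(1))    # a short press plays a random sound
print(classify_press(10))   # a long press records a new sound
print(classify_press(25))   # a very long press erases everything
```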
-video::BjXERzu8nS0[youtube]
+video::BjXERzu8nS0[youtube,width=80%,height=400px]
=== Next steps
diff --git a/documentation/asciidoc/accessories/audio/hardware-info.adoc b/documentation/asciidoc/accessories/audio/hardware-info.adoc
index 96766033ee..c7d445d64b 100644
--- a/documentation/asciidoc/accessories/audio/hardware-info.adoc
+++ b/documentation/asciidoc/accessories/audio/hardware-info.adoc
@@ -52,27 +52,30 @@ image::images/pin_out_new.jpg[width="80%"]
To store the AlsaMixer settings, add the following at the command line:
+[source,console]
----
$ sudo alsactl store
----
You can save the current state to a file, then reload that state at startup.
-To save:
+To save, run the following command, replacing `<username>` with your username:
+[source,console]
----
-$ sudo alsactl store -f /home/pi/usecase.state
+$ sudo alsactl store -f /home/<username>/usecase.state
----
-To restore a saved file:
+To restore a saved file, run the following command, replacing `<username>` with your username:
+[source,console]
----
-$ sudo alsactl restore -f /home/pi/usecase.state
+$ sudo alsactl restore -f /home/<username>/usecase.state
----
=== MPD-based audio with volume control
-To allow Music Player Daemon (MPD)-based audio software to control the audio board’s built in volume, the file
+To allow Music Player Daemon (MPD)-based audio software to control the audio board's built-in volume, the file
`/etc/mpd.conf` may need to be changed to support the correct AlsaMixer name.
This can be achieved by ensuring the 'Audio output' section of `/etc/mpd.conf` has the 'mixer_control'
diff --git a/documentation/asciidoc/accessories/audio/images/Chatter_Box.jpg b/documentation/asciidoc/accessories/audio/images/Chatter_Box.jpg
index 7d7bfb0e01..b09c695215 100644
Binary files a/documentation/asciidoc/accessories/audio/images/Chatter_Box.jpg and b/documentation/asciidoc/accessories/audio/images/Chatter_Box.jpg differ
diff --git a/documentation/asciidoc/accessories/audio/images/Chatterbox_Labels.png b/documentation/asciidoc/accessories/audio/images/Chatterbox_Labels.png
index 7f54c5b97b..379df111f8 100644
Binary files a/documentation/asciidoc/accessories/audio/images/Chatterbox_Labels.png and b/documentation/asciidoc/accessories/audio/images/Chatterbox_Labels.png differ
diff --git a/documentation/asciidoc/accessories/audio/images/Codec_Zero_Board_Diagram.png b/documentation/asciidoc/accessories/audio/images/Codec_Zero_Board_Diagram.png
index 441453078a..4e02bdedb7 100644
Binary files a/documentation/asciidoc/accessories/audio/images/Codec_Zero_Board_Diagram.png and b/documentation/asciidoc/accessories/audio/images/Codec_Zero_Board_Diagram.png differ
diff --git a/documentation/asciidoc/accessories/audio/images/DAC+_Board_Diagram.png b/documentation/asciidoc/accessories/audio/images/DAC+_Board_Diagram.png
index afa6ed1d61..7a68f02c4b 100644
Binary files a/documentation/asciidoc/accessories/audio/images/DAC+_Board_Diagram.png and b/documentation/asciidoc/accessories/audio/images/DAC+_Board_Diagram.png differ
diff --git a/documentation/asciidoc/accessories/audio/images/DAC_Pro_Board_Diagram.png b/documentation/asciidoc/accessories/audio/images/DAC_Pro_Board_Diagram.png
index 9cab3ed314..033ed5e1bc 100644
Binary files a/documentation/asciidoc/accessories/audio/images/DAC_Pro_Board_Diagram.png and b/documentation/asciidoc/accessories/audio/images/DAC_Pro_Board_Diagram.png differ
diff --git a/documentation/asciidoc/accessories/audio/images/DigiAMP+_Board_Diagram.png b/documentation/asciidoc/accessories/audio/images/DigiAMP+_Board_Diagram.png
index 7c6411100b..e4f2b336b7 100644
Binary files a/documentation/asciidoc/accessories/audio/images/DigiAMP+_Board_Diagram.png and b/documentation/asciidoc/accessories/audio/images/DigiAMP+_Board_Diagram.png differ
diff --git a/documentation/asciidoc/accessories/audio/images/dac_plus.png b/documentation/asciidoc/accessories/audio/images/dac_plus.png
index 6c3ad64553..61154d683f 100644
Binary files a/documentation/asciidoc/accessories/audio/images/dac_plus.png and b/documentation/asciidoc/accessories/audio/images/dac_plus.png differ
diff --git a/documentation/asciidoc/accessories/audio/images/optional_xlr_board.jpg b/documentation/asciidoc/accessories/audio/images/optional_xlr_board.jpg
index 7e6e85d4e7..526b1c2d5f 100644
Binary files a/documentation/asciidoc/accessories/audio/images/optional_xlr_board.jpg and b/documentation/asciidoc/accessories/audio/images/optional_xlr_board.jpg differ
diff --git a/documentation/asciidoc/accessories/audio/images/wiring.jpg b/documentation/asciidoc/accessories/audio/images/wiring.jpg
index 5481ce90ce..3a22c834b9 100644
Binary files a/documentation/asciidoc/accessories/audio/images/wiring.jpg and b/documentation/asciidoc/accessories/audio/images/wiring.jpg differ
diff --git a/documentation/asciidoc/accessories/audio/images/write_protect_tabs.jpg b/documentation/asciidoc/accessories/audio/images/write_protect_tabs.jpg
index 5da7d07235..91e2f65f1b 100644
Binary files a/documentation/asciidoc/accessories/audio/images/write_protect_tabs.jpg and b/documentation/asciidoc/accessories/audio/images/write_protect_tabs.jpg differ
diff --git a/documentation/asciidoc/accessories/audio/introduction.adoc b/documentation/asciidoc/accessories/audio/introduction.adoc
index 935b100875..01abb46903 100644
--- a/documentation/asciidoc/accessories/audio/introduction.adoc
+++ b/documentation/asciidoc/accessories/audio/introduction.adoc
@@ -5,6 +5,7 @@ Raspberry Pi Audio Boards bring high quality audio to your existing hi-fi or Ras
Each board has a specific purpose and set of features. The highest audio quality playback is available from our DAC PRO, DAC{plus} and DigiAMP{plus} boards, which support up to full HD audio (192kHz); while the Codec Zero supports up to HD audio (96kHz) and includes a built-in microphone, making it ideal for compact projects.
=== Features at a glance
+
[cols="2,1,1,1,1,1,1,1,1,1"]
|===
| | *Line out* | *Balanced out* | *Stereo speakers* | *Mono speaker* | *Headphones* | *Aux in* | *Aux out* | *Ext mic* | *Built-in mic*
diff --git a/documentation/asciidoc/accessories/audio/update-firmware.adoc b/documentation/asciidoc/accessories/audio/update-firmware.adoc
index 93acd9c415..d5a16fdb9e 100644
--- a/documentation/asciidoc/accessories/audio/update-firmware.adoc
+++ b/documentation/asciidoc/accessories/audio/update-firmware.adoc
@@ -2,7 +2,7 @@
Raspberry Pi Audio Boards use an EEPROM that contains information that is used by the host Raspberry Pi device to select the appropriate driver at boot time. This information is programmed into the EEPROM during manufacture. There are some circumstances where the end user may wish to update the EEPROM contents: this can be done from the command line.
-IMPORTANT: Before proceeding, you should update the Raspberry Pi OS running on your Raspberry Pi to the latest version.
+IMPORTANT: Before proceeding, update the version of Raspberry Pi OS running on your Raspberry Pi to the latest version.
=== The EEPROM write-protect link
@@ -12,29 +12,30 @@ image::images/write_protect_tabs.jpg[width="80%"]
NOTE: In some cases the two pads may already have a 0Ω resistor fitted to bridge the write-protect link, as illustrated in the picture of the Codec Zero board above.
-=== EEPROM Programming
+=== Program the EEPROM
Once the write-protect line has been pulled down, the EEPROM can be programmed.
You should first install the utilities and then run the programmer. Open up a terminal window and type the following:
+[source,console]
----
$ sudo apt update
$ sudo apt install rpi-audio-utils
$ sudo rpi-audio-flash
----
-After starting you will be presented with a warning screen.
+After starting, you will see a warning screen.
image::images/firmware-update/warning.png[]
-Selecting "Yes" to proceed will present you with a menu allowing you to select your hardware.
+Select "Yes" to proceed. You should see a menu where you can select your hardware.
image::images/firmware-update/select.png[]
NOTE: If no HAT is present, or if the connected HAT is not a Raspberry Pi Audio board, you will be presented with an error screen. If the firmware has already been updated on the board, a message will be displayed informing you that you do not have to continue.
-After selecting the correct hardware a screen will display while the new firmware is flashed to the HAT.
+After selecting the hardware, a screen will display while the new firmware is flashed to the HAT.
image::images/firmware-update/flashing.png[]
@@ -42,5 +43,5 @@ Afterwards a screen will display telling you that the new firmware has installed
image::images/firmware-update/flashed.png[]
-NOTE: If the firmware fails to install correctly, an error screen will be displayed. In the first instance you should remove and reseat the HAT board and try flashing the firmware again.
+NOTE: If the firmware fails to install correctly, you will see an error screen. Try removing and reseating the HAT, then flash the firmware again.
diff --git a/documentation/asciidoc/accessories/build-hat.adoc b/documentation/asciidoc/accessories/build-hat.adoc
index fcfc20065c..472c939c47 100644
--- a/documentation/asciidoc/accessories/build-hat.adoc
+++ b/documentation/asciidoc/accessories/build-hat.adoc
@@ -29,4 +29,3 @@ include::build-hat/links-to-other.adoc[]
include::build-hat/compat.adoc[]
include::build-hat/mech.adoc[]
-
diff --git a/documentation/asciidoc/accessories/build-hat/images/blinking-light.gif b/documentation/asciidoc/accessories/build-hat/images/blinking-light.gif
deleted file mode 100644
index 4019125030..0000000000
Binary files a/documentation/asciidoc/accessories/build-hat/images/blinking-light.gif and /dev/null differ
diff --git a/documentation/asciidoc/accessories/build-hat/images/blinking-light.webm b/documentation/asciidoc/accessories/build-hat/images/blinking-light.webm
new file mode 100644
index 0000000000..12ecb8a3bb
Binary files /dev/null and b/documentation/asciidoc/accessories/build-hat/images/blinking-light.webm differ
diff --git a/documentation/asciidoc/accessories/build-hat/images/connect-motor.gif b/documentation/asciidoc/accessories/build-hat/images/connect-motor.gif
deleted file mode 100644
index 197a87cc85..0000000000
Binary files a/documentation/asciidoc/accessories/build-hat/images/connect-motor.gif and /dev/null differ
diff --git a/documentation/asciidoc/accessories/build-hat/images/connect-motor.webm b/documentation/asciidoc/accessories/build-hat/images/connect-motor.webm
new file mode 100644
index 0000000000..70da881290
Binary files /dev/null and b/documentation/asciidoc/accessories/build-hat/images/connect-motor.webm differ
diff --git a/documentation/asciidoc/accessories/build-hat/images/fitting-build-hat.gif b/documentation/asciidoc/accessories/build-hat/images/fitting-build-hat.gif
deleted file mode 100644
index f1cac0bf37..0000000000
Binary files a/documentation/asciidoc/accessories/build-hat/images/fitting-build-hat.gif and /dev/null differ
diff --git a/documentation/asciidoc/accessories/build-hat/images/fitting-build-hat.webm b/documentation/asciidoc/accessories/build-hat/images/fitting-build-hat.webm
new file mode 100644
index 0000000000..8d64b6817b
Binary files /dev/null and b/documentation/asciidoc/accessories/build-hat/images/fitting-build-hat.webm differ
diff --git a/documentation/asciidoc/accessories/build-hat/images/powering-build-hat.gif b/documentation/asciidoc/accessories/build-hat/images/powering-build-hat.gif
deleted file mode 100644
index e065f39b21..0000000000
Binary files a/documentation/asciidoc/accessories/build-hat/images/powering-build-hat.gif and /dev/null differ
diff --git a/documentation/asciidoc/accessories/build-hat/images/powering-build-hat.webm b/documentation/asciidoc/accessories/build-hat/images/powering-build-hat.webm
new file mode 100644
index 0000000000..e358683f90
Binary files /dev/null and b/documentation/asciidoc/accessories/build-hat/images/powering-build-hat.webm differ
diff --git a/documentation/asciidoc/accessories/build-hat/images/tall-headers.png b/documentation/asciidoc/accessories/build-hat/images/tall-headers.png
index 58eff73528..cf89aa68ef 100644
Binary files a/documentation/asciidoc/accessories/build-hat/images/tall-headers.png and b/documentation/asciidoc/accessories/build-hat/images/tall-headers.png differ
diff --git a/documentation/asciidoc/accessories/build-hat/images/turning-motor.gif b/documentation/asciidoc/accessories/build-hat/images/turning-motor.gif
deleted file mode 100644
index 71b7b0c060..0000000000
Binary files a/documentation/asciidoc/accessories/build-hat/images/turning-motor.gif and /dev/null differ
diff --git a/documentation/asciidoc/accessories/build-hat/images/turning-motor.webm b/documentation/asciidoc/accessories/build-hat/images/turning-motor.webm
new file mode 100644
index 0000000000..334b43eae7
Binary files /dev/null and b/documentation/asciidoc/accessories/build-hat/images/turning-motor.webm differ
diff --git a/documentation/asciidoc/accessories/build-hat/introduction.adoc b/documentation/asciidoc/accessories/build-hat/introduction.adoc
index ca59abdf12..3ee1e7fd8b 100644
--- a/documentation/asciidoc/accessories/build-hat/introduction.adoc
+++ b/documentation/asciidoc/accessories/build-hat/introduction.adoc
@@ -1,4 +1,5 @@
-== Introducing the Build HAT
+[[about-build-hat]]
+== About
The https://raspberrypi.com/products/build-hat[Raspberry Pi Build HAT] is an add-on board that connects to the 40-pin GPIO header of your Raspberry Pi. It was designed in collaboration with LEGO® Education to make it easy to control LEGO® Technic™ motors and sensors with Raspberry Pi computers.
@@ -8,7 +9,7 @@ NOTE: A full list of supported devices can be found in the xref:build-hat.adoc#d
It provides four connectors for LEGO® Technic™ motors and sensors from the SPIKE™ Portfolio. The available sensors include a distance sensor, a colour sensor, and a versatile force sensor. The angular motors come in a range of sizes and include integrated encoders that can be queried to find their position.
-The Build HAT fits all Raspberry Pi computers with a 40-pin GPIO header, including — with the addition of a ribbon cable or other extension device — Raspberry Pi 400. Connected LEGO® Technic™ devices can easily be controlled in Python, alongside standard Raspberry Pi accessories such as a camera module.
+The Build HAT fits all Raspberry Pi computers with a 40-pin GPIO header, including Keyboard-series devices with the addition of a ribbon cable or other extension device. Connected LEGO® Technic™ devices can easily be controlled in Python, alongside standard Raspberry Pi accessories such as a camera module.
The Raspberry Pi Build HAT power supply (PSU), which is https://raspberrypi.com/products/build-hat-power-supply[available separately], is designed to power both the Build HAT and Raspberry Pi computer along with all connected LEGO® Technic™ devices.
@@ -16,15 +17,15 @@ image::images/psu.jpg[width="80%"]
The LEGO® Education SPIKE™ Prime Set 45678 and SPIKE™ Prime Expansion Set 45681, available separately from LEGO® Education resellers, include a collection of useful elements supported by the Build HAT.
-NOTE: The HAT works with all 40-pin GPIO Raspberry Pi boards, including Raspberry Pi 4 and Raspberry Pi Zero. With the addition of a ribbon cable or other extension device, it can also be used with Raspberry Pi 400.
+NOTE: The HAT works with all 40-pin GPIO Raspberry Pi boards, including Zero-series devices. With the addition of a ribbon cable or other extension device, it can also be used with Keyboard-series devices.
* Controls up to 4 LEGO® Technic™ motors and sensors included in the SPIKE™ Portfolio
* Easy-to-use https://buildhat.readthedocs.io/[Python library] to control your LEGO® Technic™ devices
* Fits onto any Raspberry Pi computer with a 40-pin GPIO header
-* Onboard xref:../microcontrollers/rp2040.adoc[RP2040] microcontroller manages low-level control of LEGO® Technic™ devices
+* Onboard xref:../microcontrollers/silicon.adoc[RP2040] microcontroller manages low-level control of LEGO® Technic™ devices
* External 8V PSU https://raspberrypi.com/products/build-hat-power-supply[available separately] to power both Build HAT and Raspberry Pi
[NOTE]
====
-The Build HAT can not power the Raspberry Pi 400 as it does not support being powered via the GPIO headers.
+The Build HAT cannot power Keyboard-series devices, since they do not support power supply over the GPIO headers.
====
diff --git a/documentation/asciidoc/accessories/build-hat/net-brick.adoc b/documentation/asciidoc/accessories/build-hat/net-brick.adoc
index 32a14a8c57..f5f42ad8c3 100644
--- a/documentation/asciidoc/accessories/build-hat/net-brick.adoc
+++ b/documentation/asciidoc/accessories/build-hat/net-brick.adoc
@@ -1,17 +1,17 @@
-=== Using the Build HAT from .NET
+=== Use the Build HAT from .NET
The Raspberry Pi Build HAT is referred to as a "Brick" in LEGO® parlance, and you can talk directly to it from .NET using the https://datasheets.raspberrypi.com/build-hat/build-hat-serial-protocol.pdf[Build HAT Serial Protocol].
You can create a `brick` object as below,
-[csharp]
+[source,csharp]
----
Brick brick = new("/dev/serial0");
----
but you need to remember to dispose of the `brick` at the end of your code.
-[csharp]
+[source,csharp]
----
brick.Dispose();
----
@@ -20,18 +20,18 @@ WARNING: If you do not call `brick.Dispose()` your program will not terminate.
If you want to avoid calling `brick.Dispose` at the end, then create your brick with the `using` statement:
-[csharp]
+[source,csharp]
----
using Brick brick = new("/dev/serial0");
----
In this case, your brick will be disposed of automatically when the program ends.
-==== Displaying the information
+==== Display Build HAT information
You can gather the various software versions, the signature, and the input voltage:
-[csharp]
+[source,csharp]
----
var info = brick.BuildHatInformation;
Console.WriteLine($"version: {info.Version}, firmware date: {info.FirmwareDate}, signature:");
@@ -45,7 +45,7 @@ NOTE: The input voltage is read only once at boot time and is not read again aft
The `GetSensorType` and `GetSensor` functions allow you to retrieve information about any connected sensor.
-[csharp]
+[source,csharp]
----
SensorType sensor = brick.GetSensorType((SensorPort)i);
Console.Write($"Port: {i} {(Brick.IsMotor(sensor) ? "Sensor" : "Motor")} type: {sensor} Connected: ");
@@ -53,7 +53,7 @@ Console.Write($"Port: {i} {(Brick.IsMotor(sensor) ? "Sensor" : "Motor")} type: {
In this example, you can also use the static `IsMotor` function to check whether the connected element is a sensor or a motor.
-[csharp]
+[source,csharp]
----
if (Brick.IsActiveSensor(sensor))
{
@@ -72,9 +72,9 @@ else
Most sensors implement events on their special properties. You can simply subscribe to `PropertyChanged` and `PropertyUpdated`. `PropertyChanged` fires when the value changes, while `PropertyUpdated` fires when a property is successfully updated. Depending on the modes used, some properties are updated continuously in the background, while others are updated only occasionally.
-You may be interested only when a color is changing or the position of the motor is changing, using it as a tachometer. In this case, the `PropertyChanged` is what you need!
+You may only be interested in knowing when a colour changes, or when the position of the motor changes so that you can use it as a tachometer. In this case, `PropertyChanged` is what you need!
-[csharp]
+[source,csharp]
----
Console.WriteLine("Move motor on Port A to more than position 100 to stop this test.");
brick.WaitForSensorToConnect(SensorPort.PortA);
@@ -102,11 +102,11 @@ void MotorPropertyEvent(object? sender, PropertyChangedEventArgs e)
}
----
-==== Waiting for initialization
+==== Wait for initialization
The brick can take a long time to initialize. A function that waits for a sensor to connect has been implemented.
-[csharp]
+[source,csharp]
----
brick.WaitForSensorToConnect(SensorPort.PortB);
----
diff --git a/documentation/asciidoc/accessories/build-hat/net-installing-software.adoc b/documentation/asciidoc/accessories/build-hat/net-installing-software.adoc
index f0bfe12b34..0c9330e0b4 100644
--- a/documentation/asciidoc/accessories/build-hat/net-installing-software.adoc
+++ b/documentation/asciidoc/accessories/build-hat/net-installing-software.adoc
@@ -1,43 +1,43 @@
-== Using the Build HAT from .NET
+== Use the Build HAT from .NET
-=== Installing the .NET Framework
+=== Install the .NET Framework
The .NET framework from Microsoft is not available via `apt` on Raspberry Pi. However, you can follow the https://docs.microsoft.com/en-us/dotnet/iot/deployment[official instructions] from Microsoft to install the .NET framework. Alternatively, there is a simplified https://www.petecodes.co.uk/install-and-use-microsoft-dot-net-5-with-the-raspberry-pi/[third party route] to get the .NET toolchain on to your Raspberry Pi.
WARNING: The installation script is run as `root`. You should read it first and make sure you understand what it is doing. If you are at all unsure you should follow the https://docs.microsoft.com/en-us/dotnet/iot/deployment[official instructions] manually.
-[.bash]
+[source,console]
----
$ wget -O - https://raw.githubusercontent.com/pjgpetecodes/dotnet5pi/master/install.sh | sudo bash
----
After installing the .NET framework you can create your project:
-[.bash]
+[source,console]
----
$ dotnet new console --name buildhat
----
This creates a default program in the `buildhat` subdirectory, and we need to be in that directory in order to continue:
-[.bash]
+[source,console]
----
$ cd buildhat
----
You will now need to install the following NuGet packages:
-[.bash]
+
+[source,console]
----
$ dotnet add package System.Device.Gpio --version 2.1.0
$ dotnet add package Iot.Device.Bindings --version 2.1.0
----
-=== Running C# Code
+=== Run C# Code
-You can run the program with the `dotnet run` command. Let's try it now to make sure everything works.
-It should print "Hello World!"
+You can run the program with the `dotnet run` command. Let's try it now to make sure everything works. It should print "Hello World!"
-[.bash]
+[source,console]
----
$ dotnet run
Hello World!
@@ -45,7 +45,8 @@ Hello World!
(When instructed to "run the program" in the instructions that follow, you will simply rerun `dotnet run`)
-=== Editing C# Code
+=== Edit C# Code
+
In the instructions below, you will be editing the file `buildhat/Program.cs`, the C# program which was generated when you ran the above commands.
Any text editor will work to edit C# code, including Geany, the IDE/Text Editor that comes pre-installed. https://code.visualstudio.com/docs/setup/raspberry-pi/[Visual Studio Code] (often called "VS Code") is also a popular alternative.
diff --git a/documentation/asciidoc/accessories/build-hat/net-motors.adoc b/documentation/asciidoc/accessories/build-hat/net-motors.adoc
index 3945ff203c..9e9d9ab543 100644
--- a/documentation/asciidoc/accessories/build-hat/net-motors.adoc
+++ b/documentation/asciidoc/accessories/build-hat/net-motors.adoc
@@ -1,10 +1,10 @@
-=== Using Motors from .NET
+=== Use Motors from .NET
There are two types of motors: *passive* and *active*. Active motors provide detailed position, absolute position, and speed information, while passive motors can only be controlled by speed.
A common set of functions to control the speed of the motors is available. There are two important ones: `SetPowerLimit` and `SetBias`:
-[csharp]
+[source,csharp]
----
train.SetPowerLimit(1.0);
train.SetBias(0.2);
@@ -25,7 +25,7 @@ The typical passive motor is a train and older Powered Up motors. The `Speed` pr
Functions to control `Start`, `Stop` and `SetSpeed` are also available. Here is an example of how to use it:
-[csharp]
+[source,csharp]
----
Console.WriteLine("This will run the motor for 20 seconds incrementing the PWM");
train.SetPowerLimit(1.0);
@@ -60,7 +60,7 @@ Active motors have `Speed`, `AbsolutePosition`, `Position` and `TargetSpeed` as
The code snippet shows how to get the motors, start them and read the properties:
-[csharp]
+[source,csharp]
----
brick.WaitForSensorToConnect(SensorPort.PortA);
brick.WaitForSensorToConnect(SensorPort.PortD);
@@ -86,11 +86,11 @@ active.Stop();
active2.Stop();
----
-NOTE: You should not forget to start and stop your motors when needed.
+NOTE: Don't forget to start and stop your motors when needed.
Advanced features are available for active motors. You can request a move for a number of seconds, to a specific position, or to a specific absolute position. Here are a couple of examples:
-[csharp]
+[source,csharp]
----
// From the previous example, this will turn the motors back to their initial position:
active.TargetSpeed = 100;
@@ -103,7 +103,7 @@ active2.MoveToPosition(0, true);
Each function allows you to choose whether or not to block the thread while the operation is performed. Note that for absolute and relative position moves, there is a tolerance of a few degrees.
-[csharp]
+[source,csharp]
----
brick.WaitForSensorToConnect(SensorPort.PortA);
var active = (ActiveMotor)brick.GetMotor(SensorPort.PortA);
diff --git a/documentation/asciidoc/accessories/build-hat/net-sensors.adoc b/documentation/asciidoc/accessories/build-hat/net-sensors.adoc
index c8d6d72e86..d6e6284f4e 100644
--- a/documentation/asciidoc/accessories/build-hat/net-sensors.adoc
+++ b/documentation/asciidoc/accessories/build-hat/net-sensors.adoc
@@ -1,12 +1,12 @@
-=== Using Sensors from .NET
+=== Use Sensors from .NET
-Like for motors, you have active and passive sensors. Most recent sensors are active. The passive one are lights and simple buttons. Active ones are distance or color sensors, as well as small 3x3 pixel displays.
+As with motors, there are active and passive sensors. Most recent sensors are active; the passive ones are lights and simple buttons. Active ones are distance or colour sensors, as well as small 3×3 pixel displays.
==== Button/Touch Passive Sensor
The button/touch passive sensor has one specific property, `IsPressed`. The property is set to `true` when the button is pressed. Here is a complete example with events:
-[csharp]
+[source,csharp]
----
brick.WaitForSensorToConnect(SensorPort.PortA);
var button = (ButtonSensor)brick.GetSensor(SensorPort.PortA);
@@ -39,7 +39,7 @@ image::images/passive-light.png[Passive light, width="60%"]
-The passive light are the train lights. They can be switched on and you can controlled their brightness.
+The passive lights are the train lights. They can be switched on, and you can control their brightness.
-[csharp]
+[source,csharp]
----
brick.WaitForSensorToConnect(SensorPort.PortA);
var light = (PassiveLight)brick.GetSensor(SensorPort.PortA);
@@ -55,9 +55,9 @@ light.Off()
==== Active Sensor
-The active sensor class is a generic one that all the active sensor heritate including active motors. They contains a set of properties regarding how they are connected to the Build HAT, the modes, the detailed combi modes, the hardware, software versions and a specific property called `ValueAsString`. The value as string contains the last measurement as a collection of strings. A measurement arrives like `P0C0: +23 -42 0`, the enumeration will contains `P0C0:`, `+23`, `-42` and `0`. This is made so if you are using advance modes and managing yourself the combi modes and commands, you'll be able to get the measurements.
+The active sensor class is a generic class that all active sensors, including active motors, inherit from. It contains a set of properties describing how the sensor is connected to the Build HAT, the modes, the detailed Combi modes, the hardware and software versions, and a specific property called `ValueAsString`. The value as string contains the last measurement as a collection of strings. If a measurement arrives as `P0C0: +23 -42 0`, the enumeration will contain `P0C0:`, `+23`, `-42` and `0`. This means that if you are using advanced modes and managing the Combi modes and commands yourself, you can still get the measurements.
-All active sensor can run a specific measurement mode or a combi mode. You can setup one through the advance mode using the `SelectModeAndRead` and `SelectCombiModesAndRead` functions with the specific mode(s) you'd like to continuously have. It is important to understand that changing the mode or setting up a new mode will stop the previous mode.
+All active sensors can run a specific measurement mode or a Combi mode. You can set one up through the advanced mode using the `SelectModeAndRead` and `SelectCombiModesAndRead` functions with the specific mode(s) you'd like to read continuously. It is important to understand that changing the mode or setting up a new mode will stop the previous mode.
-The modes that can be combined in the Combi mode are listed in the `CombiModes` property. Al the properties of the sensors will be updated automatically when you'll setup one of those modes.
+The modes that can be combined in the Combi mode are listed in the `CombiModes` property. All the properties of the sensors will be updated automatically when you set up one of those modes.
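+As a hedged sketch only (the mode numbers here are hypothetical and sensor-specific, and the exact method signatures may differ; check the `CombiModes` property and the library documentation first):
+
+[source,csharp]
+----
+brick.WaitForSensorToConnect(SensorPort.PortA);
+var sensor = (ActiveSensor)brick.GetSensor(SensorPort.PortA);
+// Continuously read modes 0 and 1 as a Combi mode;
+// this stops any previously selected mode.
+sensor.SelectCombiModesAndRead(new int[] { 0, 1 }, true);
+// ValueAsString holds the last raw measurement,
+// for example "P0C0:", "+23", "-42", "0"
+Console.WriteLine(string.Join(" ", sensor.ValueAsString));
+----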
@@ -70,7 +70,7 @@ WeDo Tilt Sensor has a special `Tilt` property. The type is a point with X is th
You can set a continuous measurement for this sensor using the `ContinuousMeasurement` property.
-[csharp]
+[source,csharp]
----
brick.WaitForSensorToConnect(SensorPort.PortA);
var tilt = (WeDoTiltSensor)brick.GetSensor(SensorPort.PortA);
@@ -89,9 +89,9 @@ while(!console.KeyAvailable)
.WeDo Distance sensor, https://www.bricklink.com/v2/catalog/catalogitem.page?S=45304-1&name=WeDo%202.0%20Motion%20Sensor&category=%5BEducational%20&%20Dacta%5D%5BWeDo%5D#T=S&O={%22iconly%22:0}[Image from Bricklink]
image::images/wedo-distance.png[WeDo Distance sensor, width="60%"]
-WeDo Distance Sensor gives you a distance in millimeters with the Distance property.
+The WeDo Distance Sensor gives you a distance in millimetres with the `Distance` property.
-[csharp]
+[source,csharp]
----
brick.WaitForSensorToConnect(SensorPort.PortA);
var distance = (WeDoDistanceSensor)brick.GetSensor(SensorPort.PortA);
@@ -110,7 +110,7 @@ image::images/spike-force.png[spike force sensor, width="60%"]
-This force sensor measure the pressure applies on it and if it is pressed. The two properties can be access through `Force` and `IsPressed` properties.
+This force sensor measures the pressure applied to it and whether it is pressed. These two values can be accessed through the `Force` and `IsPressed` properties.
-[csharp]
+[source,csharp]
----
brick.WaitForSensorToConnect(SensorPort.PortA);
var force = (ForceSensor)brick.GetSensor(SensorPort.PortA);
@@ -122,14 +122,14 @@ while(!force.IsPressed)
}
----
-==== SPIKE Essential 3x3 Color Light Matrix
+==== SPIKE Essential 3×3 Colour Light Matrix
-.spike 3x3 matrix, https://www.bricklink.com/v2/catalog/catalogitem.page?P=45608c01&name=Electric,%203%20x%203%20Color%20Light%20Matrix%20-%20SPIKE%20Prime&category=%5BElectric%5D#T=C[Image from Bricklink]
-image::images/3x3matrix.png[spike 3x3 matrix, width="60%"]
+.spike 3×3 matrix, https://www.bricklink.com/v2/catalog/catalogitem.page?P=45608c01&name=Electric,%203%20x%203%20Color%20Light%20Matrix%20-%20SPIKE%20Prime&category=%5BElectric%5D#T=C[Image from Bricklink]
+image::images/3x3matrix.png[spike 3×3 matrix, width="60%"]
-This is a small 3x3 display with 9 different leds that can be controlled individually. The class exposes functions to be able to control the screen. Here is an example using them:
+This is a small 3×3 display with 9 different LEDs that can be controlled individually. The class exposes functions to control the screen. Here is an example using them:
-[csharp]
+[source,csharp]
----
brick.WaitForSensorToConnect(SensorPort.PortA);
var matrix = (ColorLightMatrix)brick.GetSensor(SensorPort.PortA);
@@ -154,23 +154,23 @@ Span<LedColor> col = stackalloc LedColor[9] { LedColor.White, LedColor.White, Le
matrix.DisplayColorPerPixel(brg, col);
----
-==== SPIKE Prime Color Sensor and Color and Distance Sensor
+==== SPIKE Prime Colour Sensor and Colour and Distance Sensor
-SPIKE color sensor:
+SPIKE colour sensor:
-.spike color sensor, https://www.bricklink.com/v2/catalog/catalogitem.page?P=37308c01&name=Electric%20Sensor,%20Color%20-%20Spike%20Prime&category=%5BElectric%5D#T=C&C=11[Image from Bricklink]
+.spike colour sensor, https://www.bricklink.com/v2/catalog/catalogitem.page?P=37308c01&name=Electric%20Sensor,%20Color%20-%20Spike%20Prime&category=%5BElectric%5D#T=C&C=11[Image from Bricklink]
image::images/spike-color.png[spike color sensor, width="60%"]
-Color and distance sensor:
+Colour and distance sensor:
.Color distance sensor, https://www.bricklink.com/v2/catalog/catalogitem.page?P=bb0891c01&name=Electric%20Sensor,%20Color%20and%20Distance%20-%20Boost&category=%5BElectric%5D#T=C&C=1[Image from Bricklink]
-image::images/color-distance.png[Color distance sensor, width="60%"]
+image::images/color-distance.png[Colour distance sensor, width="60%"]
-Those color sensor has multiple properties and functions. You can get the `Color`, the `ReflectedLight` and the `AmbiantLight`.
+These colour sensors have multiple properties and functions. You can get the `Color`, the `ReflectedLight` and the `AmbiantLight`.
-On top of this, the Color and Distance sensor can measure the `Distance` and has an object `Counter`. It will count automatically the number of objects which will go in and out of the range. This does allow to count objects passing in front of the sensor. The distance is limited from 0 to 10 centimeters.
+On top of this, the Colour and Distance sensor can measure the `Distance` and has an object `Counter`. It automatically counts the number of objects that go in and out of range, which allows you to count objects passing in front of the sensor. The distance is limited to between 0 and 10 centimetres.
-[csharp]
+[source,csharp]
----
brick.WaitForSensorToConnect(SensorPort.PortC);
@@ -191,16 +191,16 @@ while (!Console.KeyAvailable)
}
----
-NOTE: For better measurement, it is not recommended to change the measurement mode in a very fast way, the color integration may not be done in a proper way. This example gives you the full spectrum of what you can do with the sensor. Also, this class do not implement a continuous measurement mode. You can setup one through the advance mode using the `SelectModeAndRead` function with the specific mode you'd like to continuously have. It is important to understand that changing the mode or setting up a new mode will stop the previous mode.
+NOTE: For better measurements, it is not recommended to change the measurement mode too quickly, as the colour integration may not complete properly. This example gives you the full spectrum of what you can do with the sensor. Also, this class does not implement a continuous measurement mode. You can set one up through the advanced mode using the `SelectModeAndRead` function with the specific mode you'd like to read continuously. It is important to understand that changing the mode or setting up a new mode will stop the previous mode.
==== SPIKE Prime Ultrasonic Distance Sensor
-.spike distance sensor, https://www.bricklink.com/v2/catalog/catalogitem.page?P=37316c01&name=Electric%20Sensor,%20Distance%20-%20Spike%20Prime&category=%5BElectric%5D#T=C&C=11[Image from Bricklink]
-image::images/spike-distance.png[spike distance sensor, width="60%"]
+.Spike distance sensor, https://www.bricklink.com/v2/catalog/catalogitem.page?P=37316c01&name=Electric%20Sensor,%20Distance%20-%20Spike%20Prime&category=%5BElectric%5D#T=C&C=11[Image from Bricklink]
+image::images/spike-distance.png[Spike distance sensor, width="60%"]
-This is a distance sensor and it does implement a `Distance` property that will give the distance in millimeter. A `ContinuousMeasurement` mode is also available on this one.
+This distance sensor implements a `Distance` property that gives the distance in millimetres. A `ContinuousMeasurement` mode is also available on this one.
-[csharp]
+[source,csharp]
----
brick.WaitForSensorToConnect(SensorPort.PortA);
var distance = (UltrasonicDistanceSensor)brick.GetSensor(SensorPort.PortA);
diff --git a/documentation/asciidoc/accessories/build-hat/preparing-build-hat.adoc b/documentation/asciidoc/accessories/build-hat/preparing-build-hat.adoc
index f535b5971e..0e19d8bdac 100644
--- a/documentation/asciidoc/accessories/build-hat/preparing-build-hat.adoc
+++ b/documentation/asciidoc/accessories/build-hat/preparing-build-hat.adoc
@@ -1,10 +1,10 @@
-== Preparing your Build HAT
+== Prepare your Build HAT
NOTE: Before starting to work with your Raspberry Pi Build HAT you should xref:../computers/getting-started.adoc#setting-up-your-raspberry-pi[set up] your Raspberry Pi, xref:../computers/getting-started.adoc#installing-the-operating-system[install] the latest version of the operating system using https://www.raspberrypi.com/downloads/[Raspberry Pi Imager].
Attach 9mm spacers to the bottom of the board. Seat the Raspberry Pi Build HAT onto your Raspberry Pi. Make sure you put it on the right way up. Unlike other HATs, all the components are on the bottom, leaving room for a breadboard or LEGO® elements on top.
-image::images/fitting-build-hat.gif[width="80%"]
+video::images/fitting-build-hat.webm[width="80%"]
=== Access the GPIO Pins
@@ -27,21 +27,21 @@ The following pins are used by the Build HAT itself and you should not connect a
|===
-=== Setting up your Raspberry Pi
+=== Set up your Raspberry Pi
-Once the Raspberry Pi has booted, open the Raspberry Pi Configuration tool by clicking on the Raspberry Menu button and then selecting “Preferences” and then “Raspberry Pi Configuration”.
+Once the Raspberry Pi has booted, open the Raspberry Pi Configuration tool by clicking on the Raspberry Menu button and then selecting "Preferences" and then "Raspberry Pi Configuration".
-Click on the “interfaces” tab and adjust the Serial settings as shown below:
+Click on the "interfaces" tab and adjust the Serial settings as shown below:
image::images/setting-up.png[width="50%"]
-==== Using your Raspberry Pi Headless
+==== Use your Raspberry Pi headless
-If you are running your Raspberry Pi headless and using `raspi-config`, select “Interface Options” from the first menu.
+If you are running your Raspberry Pi headless and using `raspi-config`, select "Interface Options" from the first menu.
image::images/raspi-config-1.png[width="70%"]
-Then “P6 Serial Port”.
+Then "P6 Serial Port".
image::images/raspi-config-2.png[width="70%"]
@@ -59,20 +59,20 @@ image::images/raspi-config-5.png[width="70%"]
You will need to reboot at this point if you have made any changes.
-=== Powering the Build HAT
+=== Power the Build HAT
-Connect an external power supply — the https://raspberrypi.com/products/build-hat-power-supply[official Raspberry Pi Build HAT power supply] is recommended — however any reliable +8V±10% power supply capable of supplying 48W via a DC 5521 centre positive barrel connector (5.5mm × 2.1mm × 11mm) will power the Build HAT. You don’t need to connect an additional USB power supply to the Raspberry Pi as well, unless you are using a Raspberry Pi 400.
+Connect an external power supply — the https://raspberrypi.com/products/build-hat-power-supply[official Raspberry Pi Build HAT power supply] is recommended — however any reliable +8V±10% power supply capable of supplying 48W via a DC 5521 centre positive barrel connector (5.5mm × 2.1mm × 11mm) will power the Build HAT. You don't need to connect an additional USB power supply to the Raspberry Pi unless you are using a Keyboard-series device.
[NOTE]
====
-The Build HAT can not power the Raspberry Pi 400 as it does not support being powered via the GPIO headers.
+The Build HAT cannot power Keyboard-series devices, since they do not support power supply over the GPIO headers.
====
-image::images/powering-build-hat.gif[width="80%"]
+video::images/powering-build-hat.webm[width="80%"]
[NOTE]
====
-The LEGO® Technic™ motors are very powerful; so to drive them you’ll need an external 8V power supply. If you want to read from motor encoders and the SPIKE™ force sensor, you can power your Raspberry Pi and Build HAT the usual way, via your Raspberry Pi’s USB power socket. The SPIKE™ colour and distance sensors, like the motors, require an https://raspberrypi.com/products/build-hat-power-supply[external power supply].
+The LEGO® Technic™ motors are very powerful; so to drive them you'll need an external 8V power supply. If you want to read from motor encoders and the SPIKE™ force sensor, you can power your Raspberry Pi and Build HAT the usual way, via your Raspberry Pi's USB power socket. The SPIKE™ colour and distance sensors, like the motors, require an https://raspberrypi.com/products/build-hat-power-supply[external power supply].
====
You have the choice to use Build HAT with Python or .NET.
diff --git a/documentation/asciidoc/accessories/build-hat/py-installing-software.adoc b/documentation/asciidoc/accessories/build-hat/py-installing-software.adoc
index 34c012a17d..b9a93f8be5 100644
--- a/documentation/asciidoc/accessories/build-hat/py-installing-software.adoc
+++ b/documentation/asciidoc/accessories/build-hat/py-installing-software.adoc
@@ -1,12 +1,19 @@
-== Using the Build HAT from Python
+== Use the Build HAT from Python
-=== Installing the Python Library
+=== Install the Build HAT Python Library
-Install the Build HAT Python library. Open a Terminal window and type,
+To install the Build HAT Python library, open a terminal window and run the following command:
-[source]
+[source,console]
----
$ sudo apt install python3-build-hat
----
+Raspberry Pi OS versions prior to _Bookworm_ do not provide the library through `apt`. Instead, run the following command to install the library using `pip`:
+
+[source,console]
+----
+$ sudo pip3 install buildhat
+----
+
For more information about the Build HAT Python Library see https://buildhat.readthedocs.io/[ReadTheDocs].
diff --git a/documentation/asciidoc/accessories/build-hat/py-motors.adoc b/documentation/asciidoc/accessories/build-hat/py-motors.adoc
index 7c2a1ab3b5..7cf498f67b 100644
--- a/documentation/asciidoc/accessories/build-hat/py-motors.adoc
+++ b/documentation/asciidoc/accessories/build-hat/py-motors.adoc
@@ -1,19 +1,19 @@
-=== Using Motors from Python
+=== Use Motors from Python
There are xref:build-hat.adoc#device-compatibility[a number of motors] that work with the Build HAT.
-==== Connecting a Motor
+==== Connect a Motor
-Connect a motor to port A on the Build HAT. The LPF2 connectors need to be inserted the correct way up. If the connector doesn’t slide in easily, rotate by 180 degrees and try again.
+Connect a motor to port A on the Build HAT. The LPF2 connectors need to be inserted the correct way up. If the connector doesn't slide in easily, rotate by 180 degrees and try again.
-image::images/connect-motor.gif[width="80%"]
+video::images/connect-motor.webm[width="80%"]
-==== Working with Motors
+==== Work with Motors
Start the https://thonny.org/[Thonny IDE]. Add the program code below:
-[source,python,linenums]
+[source,python]
----
from buildhat import Motor
@@ -22,24 +22,24 @@ motor_a = Motor('A')
motor_a.run_for_seconds(5)
----
-Run the program by clicking the play/run button. If this is the first time you’re running a Build HAT program since the Raspberry Pi has booted, there will be a few seconds pause while the firmware is copied across to the board. You should see the red LED extinguish and the green LED illuminate. Subsequent executions of a Python program will not require this pause.
+Run the program by clicking the play/run button. If this is the first time you're running a Build HAT program since the Raspberry Pi has booted, there will be a pause of a few seconds while the firmware is copied across to the board. You should see the red LED extinguish and the green LED illuminate. Subsequent executions of a Python program will not require this pause.
-image::images/blinking-light.gif[width="80%"]
+video::images/blinking-light.webm[width="80%"]
Your motor should turn clockwise for 5 seconds.
-image::images/turning-motor.gif[width="80%"]
+video::images/turning-motor.webm[width="80%"]
Change the final line of your program and re-run.
-[source,python,linenums, start=5]
+[source,python]
----
motor_a.run_for_seconds(5, speed=50)
----
The motor should now turn faster. Make another change:
-[source,python,linenums, start=5]
+[source,python]
----
motor_a.run_for_seconds(5, speed=-50)
----
diff --git a/documentation/asciidoc/accessories/build-hat/py-sensors.adoc b/documentation/asciidoc/accessories/build-hat/py-sensors.adoc
index 889a251ce4..15571eae8e 100644
--- a/documentation/asciidoc/accessories/build-hat/py-sensors.adoc
+++ b/documentation/asciidoc/accessories/build-hat/py-sensors.adoc
@@ -1,16 +1,16 @@
-=== Using Sensors from Python
+=== Use Sensors from Python
There is a xref:build-hat.adoc#device-compatibility[large range of sensors] that work with the Build HAT.
-==== Working with Sensors
+==== Work with Sensors
Connect a Colour sensor to port B on the Build HAT, and a Force sensor to port C.
-NOTE: If you’re not intending to drive a motor, then you don’t need an external power supply and you can use a standard USB power supply for your Raspberry Pi.
+NOTE: If you're not intending to drive a motor, then you don't need an external power supply and you can use a standard USB power supply for your Raspberry Pi.
Create another new program:
-[source,python,linenums]
+[source,python]
----
from signal import pause
from buildhat import ForceSensor, ColorSensor
@@ -30,4 +30,4 @@ button.when_released = handle_released
pause()
----
-Run it and hold a coloured object (LEGO® elements are ideal) in front of the colour sensor and press the Force sensor plunger. The sensor’s LED should switch on and the name of the closest colour should be displayed in the thonny REPL.
+Run it and hold a coloured object (LEGO® elements are ideal) in front of the colour sensor and press the Force sensor plunger. The sensor's LED should switch on and the name of the closest colour should be displayed in the Thonny REPL.
diff --git a/documentation/asciidoc/accessories/bumper.adoc b/documentation/asciidoc/accessories/bumper.adoc
new file mode 100644
index 0000000000..01e8de0fbe
--- /dev/null
+++ b/documentation/asciidoc/accessories/bumper.adoc
@@ -0,0 +1 @@
+include::bumper/about.adoc[]
diff --git a/documentation/asciidoc/accessories/bumper/about.adoc b/documentation/asciidoc/accessories/bumper/about.adoc
new file mode 100644
index 0000000000..ee9f120523
--- /dev/null
+++ b/documentation/asciidoc/accessories/bumper/about.adoc
@@ -0,0 +1,31 @@
+== About
+
+.The Raspberry Pi Bumper for Raspberry Pi 5
+image::images/bumper.jpg[width="80%"]
+
+The Raspberry Pi Bumper for Raspberry Pi 5 is a snap-on silicone cover that protects
+the bottom and edges of the board. When attached, the mounting holes of the Raspberry Pi remain accessible through the bumper.
+
+The Bumper is only compatible with Raspberry Pi 5.
+
+== Assembly instructions
+
+.Assembling the bumper
+image::images/assembly.png[width="80%"]
+
+To attach the Raspberry Pi Bumper to your Raspberry Pi:
+
+. Turn off your Raspberry Pi and disconnect the power cable.
+. Remove the SD card from the SD card slot of your Raspberry Pi.
+. Align the bumper with the board.
+. Press the board gently but firmly into the bumper, taking care to avoid contact between the bumper and any of the board's components.
+. Insert your SD card back into the SD card slot of your Raspberry Pi.
+. Reconnect your Raspberry Pi to power.
+
+To remove the Raspberry Pi Bumper from your Raspberry Pi:
+
+. Turn off your Raspberry Pi and disconnect the power cable.
+. Remove the SD card from the SD card slot of your Raspberry Pi.
+. Gently but firmly peel the bumper away from the board, taking care to avoid contact between the bumper and any of the board's components.
+. Insert your SD card back into the SD card slot of your Raspberry Pi.
+. Reconnect your Raspberry Pi to power.
diff --git a/documentation/asciidoc/accessories/bumper/images/assembly.png b/documentation/asciidoc/accessories/bumper/images/assembly.png
new file mode 100644
index 0000000000..bdcfb03289
Binary files /dev/null and b/documentation/asciidoc/accessories/bumper/images/assembly.png differ
diff --git a/documentation/asciidoc/accessories/bumper/images/bumper.jpg b/documentation/asciidoc/accessories/bumper/images/bumper.jpg
new file mode 100644
index 0000000000..14682676a2
Binary files /dev/null and b/documentation/asciidoc/accessories/bumper/images/bumper.jpg differ
diff --git a/documentation/asciidoc/accessories/camera.adoc b/documentation/asciidoc/accessories/camera.adoc
index facfc68da9..f5076f9fa0 100644
--- a/documentation/asciidoc/accessories/camera.adoc
+++ b/documentation/asciidoc/accessories/camera.adoc
@@ -6,4 +6,4 @@ include::camera/lens.adoc[]
include::camera/synchronous_cameras.adoc[]
-include::camera/external_trigger.adoc[]
\ No newline at end of file
+include::camera/external_trigger.adoc[]
diff --git a/documentation/asciidoc/accessories/camera/camera_hardware.adoc b/documentation/asciidoc/accessories/camera/camera_hardware.adoc
index 6ef75c4c82..3b8dafbd56 100644
--- a/documentation/asciidoc/accessories/camera/camera_hardware.adoc
+++ b/documentation/asciidoc/accessories/camera/camera_hardware.adoc
@@ -11,55 +11,62 @@ image::images/cm3.jpg[Camera Module 3 normal and wide angle]
.Camera Module 3 NoIR (left) and Camera Module 3 NoIR Wide (right)
image::images/cm3_noir.jpg[Camera Module 3 NoIR normal and wide angle]
-
-Additionally a 12-megapixel https://www.raspberrypi.com/products/raspberry-pi-high-quality-camera/[High Quality Camera] with CS- or M12-mount variants for use with external lenses was https://www.raspberrypi.com/news/new-product-raspberry-pi-high-quality-camera-on-sale-now-at-50/[released in 2020] and https://www.raspberrypi.com/news/new-autofocus-camera-modules/[2023] respectively. There is no infrared version of the HQ Camera, however the xref:camera.adoc#filter-removal[IR Filter can be removed] if required.
+Additionally, 12-megapixel https://www.raspberrypi.com/products/raspberry-pi-high-quality-camera/[High Quality Camera] variants with CS- or M12-mounts for use with external lenses were https://www.raspberrypi.com/news/new-product-raspberry-pi-high-quality-camera-on-sale-now-at-50/[released in 2020] and https://www.raspberrypi.com/news/new-autofocus-camera-modules/[2023] respectively. There is no infrared version of the HQ Camera; however, the xref:camera.adoc#filter-removal[IR Filter can be removed] if required.
.HQ Camera, M12-mount (left) and C/CS-mount (right)
image::images/hq.jpg[M12- and C/CS-mount versions of the HQ Camera]
-Finally, there is the Global Shutter camera, which was http://raspberrypi.com/news/new-raspberry-pi-global-shutter-camera[released in 2023]. There is no infrared version of the GS Camera, however the IR Filter can be removed if required.
+The Raspberry Pi AI Camera uses the Sony IMX500 imaging sensor to provide low-latency and high-performance AI capabilities to any camera application. Tight integration with xref:../computers/camera_software.adoc[Raspberry Pi's camera software stack] allows users to deploy their own neural network models with minimal effort.
+
+image::images/ai-camera-hero.png[The Raspberry Pi AI Camera]
+
+Finally, there is the Global Shutter camera, which was http://raspberrypi.com/news/new-raspberry-pi-global-shutter-camera[released in 2023]. There is no infrared version of the GS Camera; however, the xref:camera.adoc#filter-removal[IR Filter can be removed] if required.
.Global Shutter Camera
image::images/gs-camera.jpg[GS Camera]
-NOTE: Raspberry Pi Camera Modules are compatible with all Raspberry Pi computers with CSI connectors - that is, all models except Raspberry Pi 400 and the 2016 launch version of Zero.
+NOTE: Raspberry Pi Camera Modules are compatible with all Raspberry Pi computers with CSI connectors.
=== Rolling or Global shutter?
-Most digital cameras — and our Camera Modules — use a rolling shutter: they scan the image they’re capturing line-by-line, then output the results. You may have noticed that this can cause distortion effects in some settings; if you’ve ever photographed rotating propeller blades, you’ve probably spotted the image shimmering rather than looking like an object that is rotating. The propeller blades have had enough time to change position in the tiny moment that the camera has taken to swipe across and observe the scene.
+Most digital cameras, including our Camera Modules, use a **rolling shutter**: they scan the image they're capturing line-by-line, then output the results. You may have noticed that this can cause distortion effects in some settings; if you've ever photographed rotating propeller blades, you've probably spotted the image shimmering rather than looking like an object that is rotating. The propeller blades have had enough time to change position in the tiny moment that the camera has taken to swipe across and observe the scene.
-A global shutter — and our Global Shutter Camera Module — doesn’t do this. It captures the light from every pixel in the scene at once, so your photograph of propeller blades will not suffer from the same distortion.
+A **global shutter**, like the one on our Global Shutter Camera Module, doesn't do this. It captures the light from every pixel in the scene at once, so your photograph of propeller blades will not suffer from the same distortion.
Why is this useful? Fast-moving objects, like those propeller blades, are now easy to capture; we can also synchronise several cameras to take a photo at precisely the same moment in time. There are plenty of benefits here, like minimising distortion when capturing stereo images. (The human brain is confused if any movement that appears in the left eye has not appeared in the right eye yet.) The Raspberry Pi Global Shutter Camera can also operate with shorter exposure times - down to 30µs, given enough light - than a rolling shutter camera, which makes it useful for high-speed photography.
-NOTE: The Global Shutter Camera’s image sensor has a 6.3mm diagonal active sensing area, which is similar in size to Raspberry Pi’s HQ Camera. However, the pixels are larger and can collect more light. Large pixel size and low pixel count are valuable in machine-vision applications; the more pixels a sensor produces, the harder it is to process the image in real time. To get around this, many applications downsize and crop images. This is unnecessary with the Global Shutter Camera and the appropriate lens magnification, where the lower resolution and large pixel size mean an image can be captured natively.
+NOTE: The Global Shutter Camera's image sensor has a 6.3mm diagonal active sensing area, which is similar in size to Raspberry Pi's HQ Camera. However, the pixels are larger and can collect more light. Large pixel size and low pixel count are valuable in machine-vision applications; the more pixels a sensor produces, the harder it is to process the image in real time. To get around this, many applications downsize and crop images. This is unnecessary with the Global Shutter Camera and the appropriate lens magnification, where the lower resolution and large pixel size mean an image can be captured natively.
-== Installing a Raspberry Pi camera
+== Install a Raspberry Pi camera
WARNING: Cameras are sensitive to static. Earth yourself prior to handling the PCB. A sink tap or similar should suffice if you don't have an earthing strap.
-=== Connecting the Camera
+=== Connect the Camera
+
+Before connecting a camera, shut down your Raspberry Pi and disconnect it from power.
-The flex cable inserts into the connector labelled CAMERA on the Raspberry Pi, which is located between the Ethernet and HDMI ports. The cable must be inserted with the silver contacts facing the HDMI port. To open the connector, pull the tabs on the top of the connector upwards, then towards the Ethernet port. The flex cable should be inserted firmly into the connector, with care taken not to bend the flex at too acute an angle. To close the connector, push the top part of the connector towards the HDMI port and down, while holding the flex cable in place.
+The flex cable inserts into the connector labelled CAMERA on the Raspberry Pi, which is located between the Ethernet and HDMI ports. The cable must be inserted with the silver contacts facing the HDMI port. To open the connector, pull the tabs on the top of the connector upwards, then towards the Ethernet port. The flex cable should be inserted firmly into the connector, with care taken not to bend the flex at too acute an angle. To close the connector, push the top part of the connector down and away from the Ethernet port while holding the flex cable in place.
-We have created a video to illustrate the process of connecting the camera. Although the video shows the original camera on the original Raspberry Pi 1, the principle is the same for all camera boards:
+The following video shows how to connect the original camera on the original Raspberry Pi 1:
-video::GImeVqHQzsE[youtube]
+video::GImeVqHQzsE[youtube,width=80%,height=400px]
-Depending on the model, the camera may come with a small piece of translucent blue plastic film covering the lens. This is only present to protect the lens while it is being mailed to you, and needs to be removed by gently peeling it off.
+All Raspberry Pi boards with a camera connector use the same installation method, though the Raspberry Pi 5 and all Raspberry Pi Zero models require a https://www.raspberrypi.com/products/camera-cable/[different camera cable].
+
+Some cameras may come with a small piece of translucent blue plastic film covering the lens. This is only present to protect the lens during shipping. To remove it, gently peel it off.
-NOTE: There is additional documentation available around fitting the recommended https://datasheets.raspberrypi.com/hq-camera/cs-mount-lens-guide.pdf[6mm] and https://datasheets.raspberrypi.com/hq-camera/c-mount-lens-guide.pdf[16mm] lens to the HQ Camera.
+NOTE: Additional documentation is available about fitting the recommended https://datasheets.raspberrypi.com/hq-camera/cs-mount-lens-guide.pdf[6mm] and https://datasheets.raspberrypi.com/hq-camera/c-mount-lens-guide.pdf[16mm] lenses to the HQ Camera.
-=== Preparing the Software
+=== Prepare the Software
-Before proceeding, we recommend ensuring that your kernel, GPU firmware and applications are all up to date. Please follow the instructions on xref:../computers/os.adoc#using-apt[keeping your operating system up to date].
+Before proceeding, we recommend ensuring that your kernel, GPU firmware and applications are all up to date. Please follow the instructions on xref:../computers/os.adoc#update-software[keeping your operating system up to date].
-Then, please follow the relevant setup instructions for the xref:../computers/camera_software.adoc#getting-started[libcamera] software stack, and the https://datasheets.raspberrypi.com/camera/picamera2-manual.pdf[Picamera2 Python library].
+Then, please follow the relevant setup instructions for xref:../computers/camera_software.adoc#rpicam-apps[`rpicam-apps`], and the https://datasheets.raspberrypi.com/camera/picamera2-manual.pdf[Picamera2 Python library].
== Hardware Specification
|===
-| | Camera Module v1 | Camera Module v2 | Camera Module 3 | Camera Module 3 Wide | HQ Camera | GS Camera
+| | Camera Module v1 | Camera Module v2 | Camera Module 3 | Camera Module 3 Wide | HQ Camera | AI Camera | GS Camera
| Net price
| $25
@@ -67,6 +74,7 @@ Then, please follow the relevant setup instructions for the xref:../computers/ca
| $25
| $35
| $50
+| $70
| $50
| Size
@@ -74,8 +82,9 @@ Then, please follow the relevant setup instructions for the xref:../computers/ca
| Around 25 × 24 × 9 mm
| Around 25 × 24 × 11.5 mm
| Around 25 × 24 × 12.4 mm
-| 38 x 38 x 18.4mm (excluding lens)
-| 38 x 38 x 19.8mm (29.5mm with adaptor and dust cap)
+| 38 × 38 × 18.4mm (excluding lens)
+| 25 × 24 × 11.9mm
+| 38 × 38 × 19.8mm (29.5mm with adaptor and dust cap)
| Weight
| 3g
@@ -83,15 +92,17 @@ Then, please follow the relevant setup instructions for the xref:../computers/ca
| 4g
| 4g
| 30.4g
+| 6g
| 34g (41g with adaptor and dust cap)
| Still resolution
-| 5 Megapixels
-| 8 Megapixels
-| 11.9 Megapixels
-| 11.9 Megapixels
-| 12.3 Megapixels
-| 1.58 Megapixels
+| 5 megapixels
+| 8 megapixels
+| 11.9 megapixels
+| 11.9 megapixels
+| 12.3 megapixels
+| 12.3 megapixels
+| 1.58 megapixels
| Video modes
| 1080p30, 720p60 and 640 × 480p60/90
@@ -99,7 +110,8 @@ Then, please follow the relevant setup instructions for the xref:../computers/ca
| 2304 × 1296p56, 2304 × 1296p30 HDR, 1536 × 864p120
| 2304 × 1296p56, 2304 × 1296p30 HDR, 1536 × 864p120
| 2028 × 1080p50, 2028 × 1520p40 and 1332 × 990p120
-| 1456 x 1088p60
+| 2028 × 1520p30, 4056 × 3040p10
+| 1456 × 1088p60
| Sensor
| OmniVision OV5647
@@ -107,31 +119,35 @@ Then, please follow the relevant setup instructions for the xref:../computers/ca
| Sony IMX708
| Sony IMX708
| Sony IMX477
+| Sony IMX500
| Sony IMX296
| Sensor resolution
| 2592 × 1944 pixels
| 3280 × 2464 pixels
-| 4608 x 2592 pixels
-| 4608 x 2592 pixels
-| 4056 x 3040 pixels
-| 1456 x 1088 pixels
+| 4608 × 2592 pixels
+| 4608 × 2592 pixels
+| 4056 × 3040 pixels
+| 4056 × 3040 pixels
+| 1456 × 1088 pixels
| Sensor image area
| 3.76 × 2.74 mm
-| 3.68 x 2.76 mm (4.6 mm diagonal)
-| 6.45 x 3.63mm (7.4mm diagonal)
-| 6.45 x 3.63mm (7.4mm diagonal)
-| 6.287mm x 4.712 mm (7.9mm diagonal)
+| 3.68 × 2.76 mm (4.6 mm diagonal)
+| 6.45 × 3.63mm (7.4mm diagonal)
+| 6.45 × 3.63mm (7.4mm diagonal)
+| 6.287mm × 4.712 mm (7.9mm diagonal)
+| 6.287mm × 4.712 mm (7.9mm diagonal)
| 6.3mm diagonal
| Pixel size
| 1.4 µm × 1.4 µm
-| 1.12 µm x 1.12 µm
-| 1.4 µm x 1.4 µm
-| 1.4 µm x 1.4 µm
-| 1.55 µm x 1.55 µm
-| 3.45 µm x 3.45 µm
+| 1.12 µm × 1.12 µm
+| 1.4 µm × 1.4 µm
+| 1.4 µm × 1.4 µm
+| 1.55 µm × 1.55 µm
+| 1.55 µm × 1.55 µm
+| 3.45 µm × 3.45 µm
| Optical size
| 1/4"
@@ -139,6 +155,7 @@ Then, please follow the relevant setup instructions for the xref:../computers/ca
| 1/2.43"
| 1/2.43"
| 1/2.3"
+| 1/2.3"
| 1/2.9"
| Focus
@@ -148,6 +165,7 @@ Then, please follow the relevant setup instructions for the xref:../computers/ca
| Motorized
| Adjustable
| Adjustable
+| Adjustable
| Depth of field
| Approx 1 m to ∞
@@ -155,6 +173,7 @@ Then, please follow the relevant setup instructions for the xref:../computers/ca
| Approx 10 cm to ∞
| Approx 5 cm to ∞
| N/A
+| Approx 20 cm to ∞
| N/A
| Focal length
@@ -163,6 +182,7 @@ Then, please follow the relevant setup instructions for the xref:../computers/ca
| 4.74 mm
| 2.75 mm
| Depends on lens
+| 4.74 mm
| Depends on lens
| Horizontal Field of View (FoV)
@@ -171,6 +191,7 @@ Then, please follow the relevant setup instructions for the xref:../computers/ca
| 66 degrees
| 102 degrees
| Depends on lens
+| 66 ±3 degrees
| Depends on lens
| Vertical Field of View (FoV)
@@ -179,6 +200,7 @@ Then, please follow the relevant setup instructions for the xref:../computers/ca
| 41 degrees
| 67 degrees
| Depends on lens
+| 52.3 ±3 degrees
| Depends on lens
| Focal ratio (F-Stop)
@@ -187,14 +209,16 @@ Then, please follow the relevant setup instructions for the xref:../computers/ca
| F1.8
| F2.2
| Depends on lens
+| F1.79
| Depends on lens
-| Maximum exposure times (seconds)
-| 0.97
+| Maximum exposure time (seconds)
+| 3.28
| 11.76
| 112
| 112
| 670.74
+| 112
| 15.5
| Lens Mount
@@ -203,6 +227,7 @@ Then, please follow the relevant setup instructions for the xref:../computers/ca
| N/A
| N/A
| C/CS- or M12-mount
+| N/A
| C/CS
| NoIR version available?
@@ -212,6 +237,7 @@ Then, please follow the relevant setup instructions for the xref:../computers/ca
| Yes
| No
| No
+| No
|===
NOTE: There is https://github.com/raspberrypi/libcamera/issues/43[some evidence] to suggest that the Camera Module 3 may emit RFI at a harmonic of the CSI clock rate. This RFI is in a range to interfere with GPS L1 frequencies (1575 MHz). Please see the https://github.com/raspberrypi/libcamera/issues/43[thread on Github] for details and proposed workarounds.
@@ -223,6 +249,7 @@ Available mechanical drawings;
* Camera Module 2 https://datasheets.raspberrypi.com/camera/camera-module-2-mechanical-drawing.pdf[PDF]
* Camera Module 3 https://datasheets.raspberrypi.com/camera/camera-module-3-standard-mechanical-drawing.pdf[PDF]
* Camera Module 3 Wide https://datasheets.raspberrypi.com/camera/camera-module-3-wide-mechanical-drawing.pdf[PDF]
+* Camera Module 3 https://datasheets.raspberrypi.com/camera/camera-module-3-step.zip[STEP files]
* HQ Camera Module (CS-mount version) https://datasheets.raspberrypi.com/hq-camera/hq-camera-cs-mechanical-drawing.pdf[PDF]
** The CS-mount https://datasheets.raspberrypi.com/hq-camera/hq-camera-cs-lensmount-drawing.pdf[PDF]
* HQ Camera Module (M12-mount version) https://datasheets.raspberrypi.com/hq-camera/hq-camera-m12-mechanical-drawing.pdf[PDF]
diff --git a/documentation/asciidoc/accessories/camera/external_trigger.adoc b/documentation/asciidoc/accessories/camera/external_trigger.adoc
index 9078780027..642412d54d 100644
--- a/documentation/asciidoc/accessories/camera/external_trigger.adoc
+++ b/documentation/asciidoc/accessories/camera/external_trigger.adoc
@@ -21,20 +21,9 @@ We can use a Raspberry Pi Pico to provide the trigger. Connect any Pico GPIO pin
image::images/pico_wiring.jpg[alt="Image showing Raspberry Pi Pico wiring",width="50%"]
-==== Boot up the Raspberry Pi with the camera connected.
+==== Raspberry Pi Pico MicroPython Code
-Enable external triggering through superuser mode:
-
-[,bash]
-----
-sudo su
-echo 1 > /sys/module/imx296/parameters/trigger_mode
-exit
-----
-
-==== Raspberry Pi Pico Micropython Code
-
-[,python]
+[source,python]
----
from machine import Pin, PWM
@@ -51,20 +40,41 @@ pwm.freq(framerate)
pwm.duty_u16(int((1 - (shutter - 14) / frame_length) * 65535))
----
-The low pulsewidth is equal to the shutter time, and the frequency of the PWM equals the framerate.
+The low pulse width is equal to the shutter time, and the frequency of the PWM equals the framerate.
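The duty-cycle arithmetic can be checked in ordinary desktop Python (a sketch: the 14 µs offset is the constant from the MicroPython snippet, and the example shutter and framerate values are arbitrary):

```python
# Desktop sketch of the Pico's PWM duty calculation (times in microseconds).
def xvs_duty(shutter_us, framerate):
    frame_length = 1_000_000 / framerate  # frame period in microseconds
    # XVS is held low for the shutter time, so the high fraction of
    # each period is 1 - (shutter - 14) / frame_length.
    return int((1 - (shutter_us - 14) / frame_length) * 65535)

# A 3 ms shutter at 30 fps keeps the line high for most of each frame.
duty = xvs_duty(3000, 30)
print(duty)
```

A longer shutter gives a longer low pulse and therefore a smaller duty value; a higher framerate shortens the frame period and has the opposite effect.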
+
+NOTE: In this example, Pin 28 connects to the XTR touchpoint on the GS camera board.
+
+=== Camera driver configuration
+
+This step is only necessary if you have more than one camera with XTR wired in parallel.
+
+Edit `/boot/firmware/config.txt`. Change `camera_auto_detect=1` to `camera_auto_detect=0`.
+
+Append this line:
+[source]
+----
+dtoverlay=imx296,always-on
+----
+When using the CAM0 port on a Raspberry Pi 5, CM4 or CM5, append `,cam0` to that line without a space. If both cameras are on the same Raspberry Pi you will need two dtoverlay lines, only one of them ending with `,cam0`.
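For example, with two GS cameras attached to the same Raspberry Pi 5 (one on each camera port — the port assignment here is illustrative), the appended lines would read:
[source]
----
dtoverlay=imx296,always-on
dtoverlay=imx296,always-on,cam0
----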
+
+If the external trigger will not be started right away, you also need to increase the libcamera timeout xref:camera.adoc#libcamera-configuration[as above].
+
+=== Starting the camera
-NOTE: In this example Pin 28 is used to connect to the XTR touchpoint on the GS camera board.
+Enable external triggering:
-=== Operation
+[source,console]
+----
+$ echo 1 | sudo tee /sys/module/imx296/parameters/trigger_mode
+----
-Run the code on the Pico, and set the camera running:
+Run the code on the Pico, then set the camera running:
-[,bash]
+[source,console]
----
-rpicam-hello -t 0 --qt-preview --shutter 3000
+$ rpicam-hello -t 0 --qt-preview --shutter 3000
----
-A frame should now be generated every time that the Pico pulses the pin. Variable framerate is acceptable, and can be controlled by simply
-varying the duration between pulses. No options need to be passed to rpicam-apps to enable external trigger.
+Every time the Pico pulses the pin, the camera should capture a frame. However, if `--gain` and `--awbgains` are not set, some frames will be dropped while the AGC and AWB algorithms settle.
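For example, to fix the exposure so that no frames are dropped (the gain and white-balance values shown are arbitrary and will depend on your scene):
[source,console]
----
$ rpicam-hello -t 0 --qt-preview --shutter 3000 --gain 1.0 --awbgains 1.5,1.5
----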
-NOTE: When running libcamera apps, you will need to specify a fixed shutter duration (the value does not matter). This will ensure the AGC does not try adjusting camera's shutter speed, which is controlled by the external trigger pulse.
\ No newline at end of file
+NOTE: When running `rpicam-apps`, always specify a fixed shutter duration, to ensure the AGC does not try to adjust the camera's shutter speed. The value is not important, since it is actually controlled by the external trigger pulse.
diff --git a/documentation/asciidoc/accessories/camera/filters.adoc b/documentation/asciidoc/accessories/camera/filters.adoc
index 676532f8f2..32ac70e027 100644
--- a/documentation/asciidoc/accessories/camera/filters.adoc
+++ b/documentation/asciidoc/accessories/camera/filters.adoc
@@ -30,48 +30,41 @@ The HQ and GS Cameras use a Hoya CM500 infrared filter. Its transmission charact
image::images/hoyacm500.png[CM500 Transmission Graph,width="65%"]
-== Filter Removal
+== IR Filter
-NOTE: This procedure applies to both the HQ and GS cameras.
+Both the High Quality Camera and Global Shutter Camera contain an IR filter to reduce the camera's sensitivity to infrared light and help outdoor photos look more natural. However, you may remove the filter to:
-WARNING: *This procedure cannot be reversed:* the adhesive that attaches the filter will not survive being lifted and replaced, and while the IR filter is about 1.1mm thick, it may crack when it is removed. *Removing it will void the warranty on the product*. Nevertheless, removing the filter will be desirable to some users.
+* Enhance colours in certain types of photography, such as images of plants, water, and the sky
+* Provide night vision in a location that is illuminated with infrared light
-image:images/FILTER_ON_small.jpg[width="65%"]
-
-Both the High Quality Camera and Global Shutter Camera contain an IR filter, which is used to reduce the camera's sensitivity to infrared light. This ensures that outdoor photos look more natural. However, some nature photography can be enhanced with the removal of this filter; the colours of sky, plants, and water can be affected by its removal. The camera can also be used without the filter for night vision in a location that is illuminated with infrared light.
+=== Filter Removal
-WARNING: Before proceeding read through all of the steps and decide whether you are willing to void your warranty. *Do not proceed* unless you are sure that you are willing to void your warranty.
+WARNING: *This procedure cannot be reversed:* the adhesive that attaches the filter will not survive being lifted and replaced, and while the IR filter is about 1.1mm thick, it may crack when it is removed. *Removing it will void the warranty on the product*.
-To remove the filter:
+You can remove the filter from both the HQ and GS cameras. The HQ camera is shown in the demonstration below.
-* Work in a clean and dust-free environment, as the sensor will be exposed to the air.
+image:images/FILTER_ON_small.jpg[width="65%"]
-* Unscrew the two 1.5 mm hex lock keys on the underside of the main circuit board. Be careful not to let the washers roll away. There is a gasket of slightly sticky material between the housing and PCB which will require some force to separate.
+NOTE: Make sure to work in a clean and dust-free environment, as the sensor will be exposed to the air.
+. Unscrew the two 1.5 mm hex lock keys on the underside of the main circuit board. Be careful not to let the washers roll away.
++
image:images/SCREW_REMOVED_small.jpg[width="65%"]
-
-* Lift up the board and place it down on a very clean surface. Make sure the sensor does not touch the surface.
-
+. There is a gasket of slightly sticky material between the housing and PCB which will require some force to separate. You can weaken the adhesive with a little isopropyl alcohol and/or gentle heat (~20-30°C).
+. Once the adhesive is loose, lift up the board and place it down on a very clean surface. Make sure the sensor does not touch the surface.
++
image:images/FLATLAY_small.jpg[width="65%"]
-
-* You may try some ways to weaken the adhesive, such as a little isopropyl alcohol and/or heat (~20-30 C).
-
+. Face the lens upwards and place the mount on a flat surface.
++
image:images/SOLVENT_small.jpg[width="65%"]
-
-* Turn the lens mount around so that it is "looking" upwards and place it on a table.
-
-* Using a pen top or similar soft plastic item, push down on the filter only at the very edges where the glass attaches to the aluminium - to minimise the risk of breaking the filter. The glue will break and the filter will detach from the lens mount.
-
+. To minimise the risk of breaking the filter, use a pen top or similar soft plastic item to push down on the filter only at the very edges where the glass attaches to the aluminium. The glue will break and the filter will detach from the lens mount.
++
image:images/REMOVE_FILTER_small.jpg[width="65%"]
-
-* Given that changing lenses will expose the sensor, at this point you could affix a clear filter (for example, OHP plastic) to minimize the chance of dust entering the sensor cavity.
-
-* Replace the main housing over the circuit board. Be sure to realign the housing with the gasket, which remains on the circuit board.
-
-* The nylon washer prevents damage to the circuit board; apply this washer first. Next, fit the steel washer, which prevents damage to the nylon washer.
-
-* Screw down the two hex lock keys. As long as the washers have been fitted in the correct order, they do not need to be screwed very tightly.
-
+. Given that changing lenses will expose the sensor, at this point you could affix a clear filter (for example, OHP plastic) to minimize the chance of dust entering the sensor cavity.
+. Replace the main housing over the circuit board. Be sure to realign the housing with the gasket, which remains on the circuit board.
+. Apply the nylon washer first to prevent damage to the circuit board.
+. Next, fit the steel washer, which prevents damage to the nylon washer. Screw down the two hex lock keys. As long as the washers have been fitted in the correct order, they do not need to be screwed very tightly.
++
image:images/FILTER_OFF_small.jpg[width="65%"]
NOTE: It is likely to be difficult or impossible to glue the filter back in place and return the device to functioning as a normal optical camera.
diff --git a/documentation/asciidoc/accessories/camera/images/FILTER_OFF.jpg b/documentation/asciidoc/accessories/camera/images/FILTER_OFF.jpg
index c9f4778b98..918eb217f2 100644
Binary files a/documentation/asciidoc/accessories/camera/images/FILTER_OFF.jpg and b/documentation/asciidoc/accessories/camera/images/FILTER_OFF.jpg differ
diff --git a/documentation/asciidoc/accessories/camera/images/FILTER_ON.jpg b/documentation/asciidoc/accessories/camera/images/FILTER_ON.jpg
index 3c728b056f..47abc24c71 100644
Binary files a/documentation/asciidoc/accessories/camera/images/FILTER_ON.jpg and b/documentation/asciidoc/accessories/camera/images/FILTER_ON.jpg differ
diff --git a/documentation/asciidoc/accessories/camera/images/FLATLAY.jpg b/documentation/asciidoc/accessories/camera/images/FLATLAY.jpg
index ef6512792c..6fad88e33f 100644
Binary files a/documentation/asciidoc/accessories/camera/images/FLATLAY.jpg and b/documentation/asciidoc/accessories/camera/images/FLATLAY.jpg differ
diff --git a/documentation/asciidoc/accessories/camera/images/REMOVE_FILTER.jpg b/documentation/asciidoc/accessories/camera/images/REMOVE_FILTER.jpg
index bf3d39b58c..845fda2980 100644
Binary files a/documentation/asciidoc/accessories/camera/images/REMOVE_FILTER.jpg and b/documentation/asciidoc/accessories/camera/images/REMOVE_FILTER.jpg differ
diff --git a/documentation/asciidoc/accessories/camera/images/SCREW_REMOVED.jpg b/documentation/asciidoc/accessories/camera/images/SCREW_REMOVED.jpg
index 6eae6044c5..02cb97774a 100644
Binary files a/documentation/asciidoc/accessories/camera/images/SCREW_REMOVED.jpg and b/documentation/asciidoc/accessories/camera/images/SCREW_REMOVED.jpg differ
diff --git a/documentation/asciidoc/accessories/camera/images/SOLVENT.jpg b/documentation/asciidoc/accessories/camera/images/SOLVENT.jpg
index d84a386f2b..e7c4e965ae 100644
Binary files a/documentation/asciidoc/accessories/camera/images/SOLVENT.jpg and b/documentation/asciidoc/accessories/camera/images/SOLVENT.jpg differ
diff --git a/documentation/asciidoc/accessories/camera/images/ai-camera-hero.png b/documentation/asciidoc/accessories/camera/images/ai-camera-hero.png
new file mode 100644
index 0000000000..a0186287cb
Binary files /dev/null and b/documentation/asciidoc/accessories/camera/images/ai-camera-hero.png differ
diff --git a/documentation/asciidoc/accessories/camera/images/m12-lens.jpg b/documentation/asciidoc/accessories/camera/images/m12-lens.jpg
index dae4e2599c..875f0297a2 100644
Binary files a/documentation/asciidoc/accessories/camera/images/m12-lens.jpg and b/documentation/asciidoc/accessories/camera/images/m12-lens.jpg differ
diff --git a/documentation/asciidoc/accessories/camera/images/resistor.jpg b/documentation/asciidoc/accessories/camera/images/resistor.jpg
index 21a5451bc7..7d9fc1077e 100644
Binary files a/documentation/asciidoc/accessories/camera/images/resistor.jpg and b/documentation/asciidoc/accessories/camera/images/resistor.jpg differ
diff --git a/documentation/asciidoc/accessories/camera/images/rpi_hq_cam_clear_filter.jpg b/documentation/asciidoc/accessories/camera/images/rpi_hq_cam_clear_filter.jpg
index 00daf6b070..dc401f9adc 100644
Binary files a/documentation/asciidoc/accessories/camera/images/rpi_hq_cam_clear_filter.jpg and b/documentation/asciidoc/accessories/camera/images/rpi_hq_cam_clear_filter.jpg differ
diff --git a/documentation/asciidoc/accessories/camera/images/rpi_hq_cam_gasket.jpg b/documentation/asciidoc/accessories/camera/images/rpi_hq_cam_gasket.jpg
index b4cd564a4e..572ca31674 100644
Binary files a/documentation/asciidoc/accessories/camera/images/rpi_hq_cam_gasket.jpg and b/documentation/asciidoc/accessories/camera/images/rpi_hq_cam_gasket.jpg differ
diff --git a/documentation/asciidoc/accessories/camera/images/rpi_hq_cam_ir_filter.jpg b/documentation/asciidoc/accessories/camera/images/rpi_hq_cam_ir_filter.jpg
index 8b786511b4..ee09e5b4cd 100644
Binary files a/documentation/asciidoc/accessories/camera/images/rpi_hq_cam_ir_filter.jpg and b/documentation/asciidoc/accessories/camera/images/rpi_hq_cam_ir_filter.jpg differ
diff --git a/documentation/asciidoc/accessories/camera/images/rpi_hq_cam_sensor.jpg b/documentation/asciidoc/accessories/camera/images/rpi_hq_cam_sensor.jpg
index d9792c2f33..40bb170bbe 100644
Binary files a/documentation/asciidoc/accessories/camera/images/rpi_hq_cam_sensor.jpg and b/documentation/asciidoc/accessories/camera/images/rpi_hq_cam_sensor.jpg differ
diff --git a/documentation/asciidoc/accessories/camera/lens.adoc b/documentation/asciidoc/accessories/camera/lens.adoc
index d1b30a18ba..ad461444fe 100644
--- a/documentation/asciidoc/accessories/camera/lens.adoc
+++ b/documentation/asciidoc/accessories/camera/lens.adoc
@@ -14,14 +14,11 @@ We recommend two lenses, a 6mm wide angle lens and a 16mm telephoto lens. These
2+| Resolution | 10MP | 3MP
2+| Image format | 1" | 1/2"
-2+| Aperture | F1.4 to 1.6 | F1.2
+2+| Aperture | F1.4 to F16 | F1.2
2+| Mount | C | CS
-.4+| Field Angle
-| 1" | 44.6°× 33.6°
-.4+| 63°
-| 2/3" | 30.0°× 23.2°
-| 1/1.8" | 24.7°× 18.6°
-| 1/2" | 21.8°× 16.4°
+.2+| Field of View H°×V° (D°)
+| HQ | 22.2°×16.7° (27.8°) | 55°×45° (71°)
+| GS | 17.8°×13.4° (22.3°) | 45°×34° (56°)
2+| Back focal length | 17.53mm | 7.53mm
2+| M.O.D. | 0.2m | 0.2m
2+| Dimensions | φ39.00×50.00mm | φ30×34mm
@@ -41,5 +38,5 @@ We recommend three lenses manufactured by https://www.gaojiaoptotech.com/[Gaojia
2+| Image format | 1/1.7" | 1/2" | 1/2.3"
2+| Aperture | F1.8 | F2.4 | F2.5
2+| Mount 3+| M12
-2+| Field of View (D/H/V) | 72.64°/57.12°/42.44° | 18.3°/14.7°/11.1° | 184.6°/140°/102.6°
+2+| HQ Field of View H°×V° (D°) | 49°×36° (62°) | 14.4°×10.9° (17.9°) | 140°×102.6° (184.6°)
|===
diff --git a/documentation/asciidoc/accessories/camera/synchronous_cameras.adoc b/documentation/asciidoc/accessories/camera/synchronous_cameras.adoc
index c37b7f4d29..9561864ffb 100644
--- a/documentation/asciidoc/accessories/camera/synchronous_cameras.adoc
+++ b/documentation/asciidoc/accessories/camera/synchronous_cameras.adoc
@@ -1,101 +1,108 @@
== Synchronous Captures
-Both the HQ Camera and the Global Shutter Camera, have support for synchronous captures.
-Making use of the XVS pin (Vertical Sync) allows one camera to pulse when a frame capture is initiated.
-The other camera can then listen for this sync pulse, and capture a frame at the same time as the other camera.
+The High Quality (HQ) Camera supports synchronous captures.
+One camera (the "source") can be configured to generate a pulse on its XVS (Vertical Sync) pin when a frame capture is initiated.
+Other ("sink") cameras can listen for this pulse, and capture a frame at the same time as the source camera.
-=== Using the HQ Camera
+This method is largely superseded by xref:../computers/camera_software.adoc#software-camera-synchronisation[software camera synchronisation], which can operate over long distances without additional wires and has sub-millisecond accuracy. However, when cameras are physically close together, wired synchronisation remains a simple option.
-For correct operation, both cameras require a 1.65V pull up voltage on the XVS line, which is created by a potential divider through the 3.3V and GND pins on the Raspberry Pi.
+NOTE: Global Shutter (GS) Cameras can also be operated in a synchronous mode. However, the source camera will record one extra frame. Instead, for GS Cameras we recommend using an xref:camera.adoc#external-trigger-on-the-gs-camera[external trigger source]. You cannot synchronise a GS Camera and an HQ Camera.
-image::images/synchronous_camera_wiring.jpg[alt="Image showing potential divider setup",width="50%"]
+=== Connecting the cameras
-Create a potential divider from two 10kΩ resistors to 3.3V and ground (to make 1.65V with an effective source impedence of 5kΩ). This can be connected to either Raspberry Pi.
+Solder a wire to the XVS test point of each camera, and connect them together.
-Solder the GND and XVS test points of each HQ Camera board to each other.
+Solder a wire to the GND test point of each camera, and connect them together.
-Connect the XVS wires to the 1.65V potential divider pull-up.
+*For GS Cameras only,* you will also need to connect the XHS (Horizontal Sync) test point of each camera together. On any GS Camera that you wish to act as a sink, bridge the two halves of the MAS pad with solder.
-==== Boot up both Raspberry Pis
+NOTE: An earlier version of this document recommended an external pull-up for XVS. This is no longer recommended. Instead, ensure you have the latest version of Raspberry Pi OS and set the `always-on` property for all connected cameras.
-The file `/sys/module/imx477/parameters/trigger_mode` determines which board outputs pulses, or waits to receive pulses (source and sink).
-This parameter can only be altered in superuser mode.
+=== Driver configuration
-On the sink, run:
-[,bash]
+You will need to configure the camera drivers to keep their 1.8V power supplies on when not streaming, and optionally to select the source and sink roles.
+
+==== For the HQ Camera
+
+Edit `/boot/firmware/config.txt`. Change `camera_auto_detect=1` to `camera_auto_detect=0`.
+
+Append this line for a source camera:
+[source]
----
-sudo su
-echo 2 > /sys/module/imx477/parameters/trigger_mode
-exit
+dtoverlay=imx477,always-on,sync-source
----
-On the source, run:
-[,bash]
+Or for a sink:
+[source]
----
-sudo su
-echo 1 > /sys/module/imx477/parameters/trigger_mode
-exit
+dtoverlay=imx477,always-on,sync-sink
----
-Start the sink running:
-[,bash]
+When using the CAM0 port on a Raspberry Pi 5, CM4 or CM5, append `,cam0` to that line without a space. If two cameras are on the same Raspberry Pi you will need two dtoverlay lines, only one of them ending with `,cam0`.
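For example, with a source and a sink camera both attached to the same Raspberry Pi 5 (the sink on the CAM0 port — which camera takes which role is your choice), the appended lines would read:
[source]
----
dtoverlay=imx477,always-on,sync-source
dtoverlay=imx477,always-on,sync-sink,cam0
----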
+
+Alternatively, if you wish to swap the cameras' roles at runtime (and they are not both connected to the same Raspberry Pi), omit `,sync-source` or `,sync-sink` above. Instead you can set a module parameter before starting each camera:
+
+For the Raspberry Pi with the source camera:
+[source,console]
----
-rpicam-vid --frames 300 --qt-preview -o sink.h264
+$ echo 1 | sudo tee /sys/module/imx477/parameters/trigger_mode
----
-Start the source running
-[,bash]
+For the Raspberry Pi with the sink camera:
+[source,console]
----
-rpicam-vid --frames 300 --qt-preview -o source.h264
+$ echo 2 | sudo tee /sys/module/imx477/parameters/trigger_mode
----
+You will need to do this every time the system is booted.
-Frames should be synchronous. Use `--frames` to ensure the same number of frames are captured, and that the recordings are exactly the same length.
-Running the sink first ensures that no frames are missed.
-
-NOTE: The potential divider is needed to pull up the XVS pin to high whilst the source is in an idle state. This ensures that no frames are created or lost upon startup. The source whilst initialising goes from LOW to HIGH which can trigger a false frame.
+==== For the GS Camera
-=== Using the GS Camera
+Edit `/boot/firmware/config.txt`. Change `camera_auto_detect=1` to `camera_auto_detect=0`.
-NOTE: The Global Shutter (GS) camera can also be operated in a synchronous mode. However, the source camera will record one extra frame. A much better alternative method to ensure that both cameras capture the same amount of frames is to use the xref:camera.adoc#external-trigger-on-the-gs-camera[external trigger method].
+For either a source or a sink, append this line:
+[source]
+----
+dtoverlay=imx296,always-on
+----
+When using the CAM0 port on a Raspberry Pi 5, CM4 or CM5, append `,cam0` to that line without a space. If two cameras are on the same Raspberry Pi you will need two dtoverlay lines, only one of them ending with `,cam0`.
-To operate as source and sink together, the Global Shutter Cameras also require connection of the XHS (horizontal sync) pins together. However, these do not need connection to a pullup resistor.
+On the GS Camera, the sink role is enabled by the MAS pin and cannot be configured by software (`trigger_mode` and `sync-sink` relate to the xref:camera.adoc#external-trigger-on-the-gs-camera[external trigger method], and should _not_ be set for this method).
-The wiring setup is identical to the xref:camera.adoc#using-the-hq-camera[HQ Camera method], except that you will also need to connect the XHS pins together.
+=== Libcamera configuration
-Create a potential divider from two 10kΩ resistors to 3.3V and ground (to make 1.65V with an effective source impedence of 5kΩ). This can be connected to either Raspberry Pi.
+If the cameras are not all started within 1 second, the `rpicam` applications can time out. To prevent this, you must edit a configuration file on every Raspberry Pi that has a sink camera.
-Solder 2 wires to the XVS test points on each board and connect both of these wires together to the 1.65V potential divider.
+On Raspberry Pi 5 or CM5:
+[source,console]
+----
+$ cp /usr/share/libcamera/pipeline/rpi/pisp/example.yaml timeout.yaml
+----
-Solder the GND of each Camera board to each other. Also solder 2 wires to the XHS test points on each board and connect these. No pullup is needed for XHS pin.
+On other Raspberry Pi models:
+[source,console]
+----
+$ cp /usr/share/libcamera/pipeline/rpi/vc4/rpi_apps.yaml timeout.yaml
+----
-On the boards that you wish to act as sinks, solder the two halves of the MAS pad together. This tells the sensor to act as a sink, and will wait for a signal to capture a frame.
+Now edit the copy. In both cases, delete the `#` (comment) from the `"camera_timeout_value_ms":` line, and change the number to `60000` (60 seconds).
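After editing, the relevant line of `timeout.yaml` should look something like this (a sketch — keep the indentation and any trailing punctuation consistent with the surrounding entries in your copy of the file):
[source]
----
"camera_timeout_value_ms": 60000,
----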
-==== Boot up both Raspberry Pis
+=== Starting the cameras
-Start the sink running:
-[,bash]
-----
-rpicam-vid --frames 300 -o sync.h264
-----
-Allow a delay before you start the source running (see note below). Needs to be roughly > 2 seconds.
+Run the following commands to start the sink:
-Start the source running:
-[,bash]
+[source,console]
----
-rpicam-vid --frames 299 -o sync.h264
+$ export LIBCAMERA_RPI_CONFIG_FILE=timeout.yaml
+$ rpicam-vid --frames 300 --qt-preview -o sink.h264
----
-[NOTE]
-=====
-Due to the limitations of the IMX296 sensor, we are unable to get the sink to record exactly the same amount of frames as the source.
-**The source will record one extra frame before the sink starts recording.** This will need to be accounted for later in the application.
-Because of this, you need to specify that the sink records one less frame in the `--frames` option.
-
-FFmpeg has the ability to resync these two videos. By dropping the first frame from the source, we then get two recordings of the same frame
- length and with the same starting point.
+Wait a few seconds, then run the following command to start the source:
-[,bash]
+[source,console]
----
-ffmpeg -i source.h264 -vf select="gte(n\, 1)" source.h264
+$ rpicam-vid --frames 300 --qt-preview -o source.h264
----
-=====
+Frames should be synchronised. Use `--frames` to ensure the same number of frames are captured, and that the recordings are exactly the same length.
+Running the sink first ensures that no frames are missed.
+
+NOTE: When using the GS camera in synchronous mode, the sink will not record exactly the same number of frames as the source. **The source records one extra frame before the sink starts recording**. Because of this, you need to specify that the sink records one less frame with the `--frames` option.
diff --git a/documentation/asciidoc/accessories/display/display_intro.adoc b/documentation/asciidoc/accessories/display/display_intro.adoc
index 59fa6463b6..d61ea0398b 100644
--- a/documentation/asciidoc/accessories/display/display_intro.adoc
+++ b/documentation/asciidoc/accessories/display/display_intro.adoc
@@ -1,17 +1,17 @@
== Raspberry Pi Touch Display
-The https://www.raspberrypi.com/products/raspberry-pi-touch-display/[Raspberry Pi Touch Display] is an LCD display that connects to the Raspberry Pi using the DSI connector. While the panel is connected, you can use both it, and the the normal HDMI display output at the same time.
+The https://www.raspberrypi.com/products/raspberry-pi-touch-display/[Raspberry Pi Touch Display] is an LCD display that connects to a Raspberry Pi using a DSI connector and GPIO connector.
.The Raspberry Pi 7-inch Touch Display
image::images/display.png[The Raspberry Pi 7-inch Touch Display, width="70%"]
-The Touch Display will function with all models of Raspberry Pi. Although the earliest Raspberry Pi models, which lack appropriate mounting holes, require additional mounting hardware to fit the stand-offs on the display PCB.
+The Touch Display is compatible with all models of Raspberry Pi, except the Zero series and Keyboard series, which lack a DSI connector. The earliest Raspberry Pi models lack appropriate mounting holes, requiring additional mounting hardware to fit the stand-offs on the display PCB.
The display has the following key features:
-* 800×480 RGB LCD display
+* 800×480px RGB LCD display
* 24-bit colour
-* Industrial quality: 140-degree viewing angle horizontal, 130-degree viewing angle vertical
+* Industrial quality: 140 degree viewing angle horizontal, 120 degree viewing angle vertical
* 10-point multi-touch touchscreen
* PWM backlight control and power control over I2C interface
* Metal-framed back with mounting points for Raspberry Pi display conversion board and Raspberry Pi
@@ -29,115 +29,137 @@ The display has the following key features:
* Outer dimensions: 192.96 × 110.76mm
* Viewable area: 154.08 × 85.92mm
-[NOTE]
-====
-If you are using Raspberry Pi OS Bullseye or earlier, you can install an on-screen keyboard by typing `sudo apt install matchbox-keyboard` in a terminal. Additionally you can enable right-click emulation by adding the following section to the `/etc/X11/xorg.conf` file.
-[source]
-----
-Section "InputClass"
- Identifier "calibration"
- Driver "evdev"
- MatchProduct "FT5406 memory based driver"
-
- Option "EmulateThirdButton" "1"
- Option "EmulateThirdButtonTimeout" "750"
- Option "EmulateThirdButtonMoveThreshold" "30"
-EndSection
-----
-
-These features are not available when running Raspberry Pi OS Bookworm.
-====
+=== Mount the Touch Display
-=== Mounting the Touch Display
-
-You can mount a Raspberry Pi to the back of the Touch Display using its stand-offs and then connect the appropriate cables between each device, depending on your use case. You can also mount the Touch Display in a separate chassis if you have one available. The connections remain the same, though you may need longer cables depending on the chassis you use.
+You can mount a Raspberry Pi to the back of the Touch Display using its stand-offs and then connect the appropriate cables. You can also mount the Touch Display in a separate chassis if you have one available. The connections remain the same, though you may need longer cables depending on the chassis.
.A Raspberry Pi connected to the Touch Display
image::images/GPIO_power-500x333.jpg[Image of Raspberry Pi connected to the Touch Display, width="70%"]
Connect one end of the Flat Flexible Cable (FFC) to the `RPI-DISPLAY` port on the Touch Display PCB. The silver or gold contacts should face away from the display. Then connect the other end of the FFC to the `DISPLAY` port on the Raspberry Pi. The contacts on this end should face inward, towards the Raspberry Pi.
-If the FFC isn't fully inserted, or it's not positioned correctly, you will experience issues with the display. You should always double-check this connection when troubleshooting, especially if you don't see anything on your display, or the display is showing a single colour.
+If the FFC is not fully inserted or positioned correctly, you will experience issues with the display. You should always double check this connection when troubleshooting, especially if you don't see anything on your display, or the display shows only a single colour.
NOTE: A https://datasheets.raspberrypi.com/display/7-inch-display-mechanical-drawing.pdf[mechanical drawing] of the Touch Display is available for download.
-=== Powering the Touch Display
+=== Power the Touch Display
-We recommend using the Raspberry Pi's GPIO to provide power to the Touch Display. However, if you want to power the display directly, you can use a separate micro USB power supply to provide power.
+We recommend using the Raspberry Pi's GPIO to provide power to the Touch Display. Alternatively, you can power the display directly with a separate micro USB power supply.
-==== Using the Raspberry Pi
+==== Power from a Raspberry Pi
-To power the Touch Display using a Raspberry Pi, you need to connect two jumper wires between the 5V and GND pins on xref:../computers/raspberry-pi.adoc#gpio-and-the-40-pin-header[Raspberry Pi's GPIO] and the 5V and GND pins on the display, as shown in the following illustration.
+To power the Touch Display using a Raspberry Pi, you need to connect two jumper wires between the 5V and `GND` pins on xref:../computers/raspberry-pi.adoc#gpio[Raspberry Pi's GPIO] and the 5V and `GND` pins on the display, as shown in the following illustration.
-.The location of the display's 5V and GND pins
+.The location of the display's 5V and `GND` pins
image::images/display_plugs.png[Illustration of display pins, width="40%"]
-Before you begin, make sure the Raspberry Pi is powered off and not connected to any power source. Connect one end of the black jumper wire to pin six (GND) on the Raspberry Pi and one end of the red jumper wire to pin two (5V). If pin six isn't available, you can use any other open GND pin to connect the black wire. If pin two isn't available, you can use any other 5V pin to connect the red wire, such as pin four.
+Before you begin, make sure the Raspberry Pi is powered off and not connected to any power source. Connect one end of the black jumper wire to pin six (`GND`) on the Raspberry Pi and one end of the red jumper wire to pin four (5V). If pin six isn't available, you can use any other open `GND` pin to connect the black wire. If pin four isn't available, you can use any other 5V pin to connect the red wire, such as pin two.
.The location of the Raspberry Pi headers
image::images/pi_plugs.png[Illustration of Raspberry Pi headers, width="40%"]
-Next, connect the other end of the black wire to the GND pin on the display and the other end of the red wire to the 5V pin on the display. Once all the connections are made, you should see the Touch Display turn on the next time you turn on your Raspberry Pi.
+Next, connect the other end of the black wire to the `GND` pin on the display and the other end of the red wire to the 5V pin on the display. Once all the connections are made, you should see the Touch Display turn on the next time you turn on your Raspberry Pi.
-The other three pins on the Touch Display are used to connect the display to an original Raspberry Pi 1 Model A or B. Refer to our documentation on xref:display.adoc#legacy-support[legacy support] for more information.
+Use the other three pins on the Touch Display to connect the display to an original Raspberry Pi 1 Model A or B. Refer to our documentation on xref:display.adoc#legacy-support[legacy support] for more information.
-NOTE: An original Raspberry Pi is easily identified compared to other models; it is the only model with a 26-pin rather than 40-pin GPIO header connector.
+NOTE: To identify an original Raspberry Pi, check the GPIO header connector. Only the original model has a 26-pin GPIO header connector; subsequent models have 40 pins.
-==== Using a micro USB supply
+==== Power from a micro USB supply
If you don't want to use a Raspberry Pi to provide power to the Touch Display, you can use a micro USB power supply instead. We recommend using the https://www.raspberrypi.com/products/micro-usb-power-supply/[Raspberry Pi 12.5W power supply] to make sure the display runs as intended.
Do not connect the GPIO pins on your Raspberry Pi to the display if you choose to use micro USB for power. The only connection between the two boards should be the Flat Flexible Cable.
-WARNING: If you use a micro USB cable to power the display it must be mounted inside a chassis that blocks access to the display's PCB while it's in use.
+WARNING: When using a micro USB cable to power the display, mount it inside a chassis that blocks access to the display's PCB during usage.
+
+=== Use an on-screen keyboard
+
+Raspberry Pi OS _Bookworm_ and later include the Squeekboard on-screen keyboard by default. When a touch display is attached, the on-screen keyboard should automatically appear when text entry is possible, and hide when it is not.
-=== Changing the screen orientation
+For applications which do not support text entry detection, use the keyboard icon at the right end of the taskbar to manually show and hide the keyboard.
-If you want to physically rotate the display, or mount it in a specific position, you can use software to adjust the orientation of the screen to better match your setup.
+You can also permanently show or hide the on-screen keyboard in the Display tab of Raspberry Pi Configuration or the `Display` section of `raspi-config`.
-To set the screen orientation from the desktop environment, select **Screen Configuration** from the **Preferences** menu. Right-click on the DSI-1 display rectangle in the layout editor, select **Orientation**, then pick the best option to fit your needs. You can also ensure that the touch overlay is assigned to the correct display with the **Touchscreen** option.
+TIP: In Raspberry Pi OS releases prior to _Bookworm_, use `matchbox-keyboard` instead. If you use the Wayfire desktop compositor, use `wvkbd` instead.
+
+=== Change screen orientation
+
+If you want to physically rotate the display, or mount it in a specific position, select **Screen Configuration** from the **Preferences** menu. Right-click on the touch display rectangle (likely DSI-1) in the layout editor, select **Orientation**, then pick the best option to fit your needs.
image::images/display-rotation.png[Screenshot of orientation options in screen configuration, width="80%"]
-If only using the console and not a desktop environment, you can edit the kernel's `/boot/firmware/cmdline.txt` file to pass the required orientation to the system.
+==== Rotate screen without a desktop
+
+To set the screen orientation on a device that lacks a desktop environment, edit the `/boot/firmware/cmdline.txt` configuration file to pass an orientation to the system. Append the following entry to the single line of `cmdline.txt`; do not add a line break:
+
+[source,ini]
+----
+video=DSI-1:800x480@60,rotate=<rotation-value>
+----
-To rotate the console text, add `video=DSI-1:800x480@60,rotate=90` to the `cmdline.txt` configuration file. Make sure everything is on the same line; do not add any carriage returns. Possible rotation values are 0, 90, 180 and 270.
+Replace the `<rotation-value>` placeholder with one of the following values, which correspond to the degree of rotation relative to the default orientation of your display:
-NOTE: It is not possible to rotate the DSI display separately from the HDMI display using the command line. If you have both attached they need to be set to the same rotation value.
+* `0`
+* `90`
+* `180`
+* `270`
-Rotation of the touchscreen area is independent of the orientation of the display itself. To change this you need to manually add a `dtoverlay` instruction in the xref:../computers/config_txt.adoc[`/boot/firmware/config.txt`] file,
+For example, a rotation value of `90` rotates the display 90 degrees to the right. `180` rotates the display 180 degrees, or upside-down.
+NOTE: It is not possible to rotate the DSI display separately from the HDMI display with `cmdline.txt`. When you use DSI and HDMI simultaneously, they share the same rotation value.
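+
+For example, a complete `cmdline.txt` might look like the following after appending a 90-degree rotation. Everything before the `video=` entry is illustrative; your existing parameters will differ and must be preserved:
+
```ini
console=tty1 root=PARTUUID=xxxxxxxx-02 rootfstype=ext4 fsck.repair=yes rootwait video=DSI-1:800x480@60,rotate=90
```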
+
+==== Rotate touch input
+
+WARNING: Rotating touch input via device tree can cause conflicts with your input library. Whenever possible, configure touch event rotation in your input library or desktop.
+
+Rotation of touch input is independent of the orientation of the display itself. To change this you need to manually add a `dtoverlay` instruction in xref:../computers/config_txt.adoc[`/boot/firmware/config.txt`]. Add the following line at the end of `config.txt`:
+
+[source,ini]
----
dtoverlay=vc4-kms-dsi-7inch,invx,invy
----
-and disable the autodetection of the display by removing or commenting out
+Then, disable automatic display detection by removing the following line from `config.txt`, if it exists:
+[source,ini]
----
display_auto_detect=1
----
-The options for the vc4-kms-dsi-7inch overlay are:
+==== Touch Display device tree option reference
+
+The `vc4-kms-dsi-7inch` overlay supports the following options:
|===
| DT parameter | Action
-| sizex
+| `sizex`
| Sets X resolution (default 800)
-| sizey
+| `sizey`
| Sets Y resolution (default 480)
-| invx
+| `invx`
| Invert X coordinates
-| invy
+| `invy`
| Invert Y coordinates
-| swapxy
+| `swapxy`
| Swap X and Y coordinates
-| disable_touch
+| `disable_touch`
| Disables the touch overlay totally
|===
+
+To specify these options, add them, separated by commas, to your `dtoverlay` line in `/boot/firmware/config.txt`. Boolean values default to true when present, but you can set them to false using the suffix `=0`. Integer values require a value, e.g. `sizey=240`. For instance, to set the X resolution to 400 pixels and invert both X and Y coordinates, use the following line:
+
+[source,ini]
+----
+dtoverlay=vc4-kms-dsi-7inch,sizex=400,invx,invy
+----
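+
+Booleans can also be disabled explicitly. For instance, the following illustrative line forces `swapxy` off with the `=0` suffix and halves the Y resolution:
+
```ini
dtoverlay=vc4-kms-dsi-7inch,swapxy=0,sizey=240
```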
+
+=== Installation on Compute Module-based devices
+
+All Raspberry Pi SBCs auto-detect the official Touch Displays, because the circuitry connected to the DSI connector on the Raspberry Pi board is fixed; this auto-detection ensures that the correct Device Tree entries are passed to the kernel. However, Compute Modules are intended for industrial applications where the integrator can use any and all GPIOs and interfaces for whatever purposes they require. Auto-detection is therefore not feasible, and is disabled on Compute Module devices. As a result, the Device Tree fragments required to set up the display must be loaded via some other mechanism: a `dtoverlay` entry in `config.txt` as described above, a custom base Device Tree file, or, if present, a HAT EEPROM.
\ No newline at end of file
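+
+For instance, on a Compute Module wired to the standard 7-inch Touch Display, the minimal `config.txt` entry is the overlay line itself, optionally extended with the touch options described above:
+
```ini
dtoverlay=vc4-kms-dsi-7inch
```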
diff --git a/documentation/asciidoc/accessories/display/images/display-rotation.png b/documentation/asciidoc/accessories/display/images/display-rotation.png
index 81143b3adc..86eb3a10ba 100755
Binary files a/documentation/asciidoc/accessories/display/images/display-rotation.png and b/documentation/asciidoc/accessories/display/images/display-rotation.png differ
diff --git a/documentation/asciidoc/accessories/display/images/display.png b/documentation/asciidoc/accessories/display/images/display.png
index 2e07d6ea55..dd7ae33612 100644
Binary files a/documentation/asciidoc/accessories/display/images/display.png and b/documentation/asciidoc/accessories/display/images/display.png differ
diff --git a/documentation/asciidoc/accessories/display/images/pi_plugs.png b/documentation/asciidoc/accessories/display/images/pi_plugs.png
index db5ec3f104..44f607d74d 100644
Binary files a/documentation/asciidoc/accessories/display/images/pi_plugs.png and b/documentation/asciidoc/accessories/display/images/pi_plugs.png differ
diff --git a/documentation/asciidoc/accessories/display/legacy.adoc b/documentation/asciidoc/accessories/display/legacy.adoc
index 9d17002b47..eab11d275d 100644
--- a/documentation/asciidoc/accessories/display/legacy.adoc
+++ b/documentation/asciidoc/accessories/display/legacy.adoc
@@ -1,16 +1,14 @@
== Legacy Support
-WARNING: These instructions are for the original Raspberry Pi, Model A, and B, boards only. You can identify an original board as it is the only model with a 26-pin GPIO header. All other models have the now-standard 40-pin connector.
+WARNING: These instructions are for the original Raspberry Pi, Model A, and B, boards only. To identify an original Raspberry Pi, check the GPIO header connector. Only the original model has a 26-pin GPIO header connector; subsequent models have 40 pins.
-The DSI connector on both the Raspberry Pi 1 Model A and B boards does not have the I2C connections required to talk to the touchscreen controller and DSI controller. You can work around this by using the additional set of jumper cables provided with the display kit to wire up the I2C bus on the GPIO pins to the display controller board.
-
-Using the jumper cables, connect SCL/SDA on the GPIO header to the horizontal pins marked SCL/SDA on the display board. We also recommend that you power the Model A/B via the GPIO pins using the jumper cables.
+The DSI connector on both the Raspberry Pi 1 Model A and B boards does not have the I2C connections required to talk to the touchscreen controller and DSI controller. To work around this, use the additional set of jumper cables provided with the display kit. Connect SCL/SDA on the GPIO header to the horizontal pins marked SCL/SDA on the display board. Power the Model A/B via the GPIO pins using the jumper cables.
DSI display autodetection is disabled by default on these boards. To enable detection, add the following line to the xref:../computers/config_txt.adoc#what-is-config-txt[`/boot/firmware/config.txt`] file:
-[source]
+[source,ini]
----
ignore_lcd=0
----
-Power the setup via the `PWR IN` micro-USB connector on the display board. Do not power the setup via the Raspberry Pi's micro-USB port: the input polyfuse's maximum current rating will be exceeded as the display consumes approximately 400mA.
+Power the setup via the `PWR IN` micro-USB connector on the display board. Do not power the setup via the Raspberry Pi's micro-USB port. This will exceed the input polyfuse's maximum current rating, since the display consumes approximately 400mA.
diff --git a/documentation/asciidoc/accessories/keyboard-and-mouse/connecting-things.adoc b/documentation/asciidoc/accessories/keyboard-and-mouse/connecting-things.adoc
index 8078b92ac4..a23011f5c3 100644
--- a/documentation/asciidoc/accessories/keyboard-and-mouse/connecting-things.adoc
+++ b/documentation/asciidoc/accessories/keyboard-and-mouse/connecting-things.adoc
@@ -1,6 +1,6 @@
== Connecting it all Together
-This is the configuration we recommend for using your Raspberry Pi, official keyboard and hub, and official mouse together. The hub on the keyboard ensures easy access to USB drives, and the mouse’s cable is tidy, while being long enough to allow you to use the mouse left- or right-handed.
+This is the configuration we recommend for using your Raspberry Pi, official keyboard and hub, and official mouse together. The hub on the keyboard ensures easy access to USB drives, and the mouse's cable is tidy, while being long enough to allow you to use the mouse left- or right-handed.
image::images/everything.png[width="80%"]
diff --git a/documentation/asciidoc/accessories/keyboard-and-mouse/getting-started-keyboard.adoc b/documentation/asciidoc/accessories/keyboard-and-mouse/getting-started-keyboard.adoc
index fc690f669d..3649738079 100644
--- a/documentation/asciidoc/accessories/keyboard-and-mouse/getting-started-keyboard.adoc
+++ b/documentation/asciidoc/accessories/keyboard-and-mouse/getting-started-keyboard.adoc
@@ -2,7 +2,7 @@
Our official keyboard includes three host USB ports for connecting external devices, such as USB mice, USB drives, and other USB- controlled devices.
-The product’s micro USB port is for connection to the Raspberry Pi. Via the USB hub built into the keyboard, the Raspberry Pi controls, and provides power to, the three USB Type A ports.
+The product's micro USB port is for connection to the Raspberry Pi. Via the USB hub built into the keyboard, the Raspberry Pi controls, and provides power to, the three USB Type A ports.
image::images/back-of-keyboard.png[width="80%"]
diff --git a/documentation/asciidoc/accessories/m2-hat-plus.adoc b/documentation/asciidoc/accessories/m2-hat-plus.adoc
new file mode 100644
index 0000000000..b9501e9370
--- /dev/null
+++ b/documentation/asciidoc/accessories/m2-hat-plus.adoc
@@ -0,0 +1 @@
+include::m2-hat-plus/about.adoc[]
diff --git a/documentation/asciidoc/accessories/m2-hat-plus/about.adoc b/documentation/asciidoc/accessories/m2-hat-plus/about.adoc
new file mode 100644
index 0000000000..a3b033a28d
--- /dev/null
+++ b/documentation/asciidoc/accessories/m2-hat-plus/about.adoc
@@ -0,0 +1,141 @@
+[[m2-hat-plus]]
+== About
+
+.The Raspberry Pi M.2 HAT+
+image::images/m2-hat-plus.jpg[width="80%"]
+
+The Raspberry Pi M.2 HAT+ M Key enables you to connect M.2 peripherals such as NVMe drives and other PCIe accessories to Raspberry Pi 5's PCIe interface.
+
+The M.2 HAT+ adapter board converts between the PCIe connector on Raspberry Pi 5 and a single M.2 M key edge connector. You can connect any device that uses the 2230 or 2242 form factors. The M.2 HAT+ can supply up to 3A of power.
+
+The M.2 HAT+ uses Raspberry Pi's https://datasheets.raspberrypi.com/hat/hat-plus-specification.pdf[HAT+ specification], which allows Raspberry Pi OS to automatically detect the HAT+ and any connected devices.
+
+The included threaded spacers provide ample room to fit the Raspberry Pi Active Cooler beneath an M.2 HAT+.
+
+The M.2 HAT+ is _only_ compatible with the https://www.raspberrypi.com/products/raspberry-pi-5-case/[Raspberry Pi Case for Raspberry Pi 5] _if you remove the lid and the included fan_.
+
+== Features
+
+* Single-lane PCIe 2.0 interface (500 MB/s peak transfer rate)
+* Supports devices that use the M.2 M key edge connector
+* Supports devices with the 2230 or 2242 form factor
+* Supplies up to 3A to connected M.2 devices
+* Power and activity LEDs
+* Conforms to the https://datasheets.raspberrypi.com/hat/hat-plus-specification.pdf[Raspberry Pi HAT+ specification]
+* Includes:
+** ribbon cable
+** 16mm GPIO stacking header
+** 4 threaded spacers
+** 8 screws
+** 1 knurled double-flanged drive attachment screw to secure and support the M.2 peripheral
+
+[[m2-hat-plus-installation]]
+== Install
+
+To use the Raspberry Pi M.2 HAT+, you will need:
+
+* a Raspberry Pi 5
+
+Each M.2 HAT+ comes with a ribbon cable, GPIO stacking header, and mounting hardware. Complete the following instructions to install your M.2 HAT+:
+
+. First, ensure that your Raspberry Pi runs the latest software. Run the following command to update:
++
+[source,console]
+----
+$ sudo apt update && sudo apt full-upgrade
+----
+
+. Next, xref:../computers/raspberry-pi.adoc#update-the-bootloader-configuration[ensure that your Raspberry Pi firmware is up-to-date]. Run the following command to see what firmware you're running:
++
+[source,console]
+----
+$ sudo rpi-eeprom-update
+----
++
+If you see December 6, 2023 or a later date, proceed to the next step. If you see a date earlier than December 6, 2023, run the following command to open the Raspberry Pi Configuration CLI:
++
+[source,console]
+----
+$ sudo raspi-config
+----
++
+Under `Advanced Options` > `Bootloader Version`, choose `Latest`. Then, exit `raspi-config` with `Finish` or the *Escape* key.
++
+Run the following command to update your firmware to the latest version:
++
+[source,console]
+----
+$ sudo rpi-eeprom-update -a
+----
++
+Then, reboot with `sudo reboot`.
+
+. Disconnect the Raspberry Pi from power before beginning installation.
+
+
+. The M.2 HAT+ is compatible with the Raspberry Pi 5 Active Cooler. If you have an Active Cooler, install it before installing the M.2 HAT+.
++
+--
+image::images/m2-hat-plus-installation-01.png[width="60%"]
+--
+. Install the spacers using four of the provided screws. Firmly press the GPIO stacking header on top of the Raspberry Pi GPIO pins; orientation does not matter as long as all pins fit into place. Disconnect the ribbon cable from the M.2 HAT+, and insert the other end into the PCIe port of your Raspberry Pi. Lift the ribbon cable holder from both sides, then insert the cable with the copper contact points facing inward, towards the USB ports. With the ribbon cable fully and evenly inserted into the PCIe port, push the cable holder down from both sides to secure the ribbon cable firmly in place.
++
+--
+image::images/m2-hat-plus-installation-02.png[width="60%"]
+--
+. Set the M.2 HAT+ on top of the spacers, and use the four remaining screws to secure it in place.
++
+--
+image::images/m2-hat-plus-installation-03.png[width="60%"]
+--
+. Insert the ribbon cable into the slot on the M.2 HAT+. Lift the ribbon cable holder from both sides, then insert the cable with the copper contact points facing up. With the ribbon cable fully and evenly inserted into the port, push the cable holder down from both sides to secure the ribbon cable firmly in place.
++
+--
+image::images/m2-hat-plus-installation-04.png[width="60%"]
+--
+. Remove the drive attachment screw by turning the screw counter-clockwise. Insert your M.2 SSD into the M.2 key edge connector, sliding the drive into the slot at a slight upward angle. Do not force the drive into the slot: it should slide in gently.
++
+--
+image::images/m2-hat-plus-installation-05.png[width="60%"]
+--
+. Push the notch on the drive attachment screw into the slot at the end of your M.2 drive. Push the drive flat against the M.2 HAT+, and insert the SSD attachment screw by turning the screw clockwise until the SSD feels secure. Do not over-tighten the screw.
++
+--
+image::images/m2-hat-plus-installation-06.png[width="60%"]
+--
+. Congratulations, you have successfully installed the M.2 HAT+. Connect your Raspberry Pi to power; Raspberry Pi OS will automatically detect the M.2 HAT+. If you use Raspberry Pi Desktop, you should see an icon representing the drive on your desktop. If you don't use a desktop, you can find the drive at `/dev/nvme0n1`. To make your drive automatically available for file access, consider xref:../computers/configuration.adoc#automatically-mount-a-storage-device[configuring automatic mounting].
++
+--
+image::images/m2-hat-plus-installation-07.png[width="60%"]
+--
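+
+Without a desktop, you can confirm that the OS has detected the drive by listing the NVMe block device; the size and model columns should match your SSD:
+
```console
$ lsblk -o NAME,SIZE,MODEL /dev/nvme0n1
```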
+
+WARNING: Always disconnect your Raspberry Pi from power before connecting or disconnecting a device from the M.2 slot.
+
+== Boot from NVMe
+
+To boot from an NVMe drive attached to the M.2 HAT+, complete the following steps:
+
+. xref:../computers/getting-started.adoc#raspberry-pi-imager[Format your NVMe drive using Raspberry Pi Imager]. You can do this from your Raspberry Pi if you already have an SD card with a Raspberry Pi OS image.
+. Boot your Raspberry Pi into Raspberry Pi OS using an SD card or USB drive to alter the boot order in the persistent on-board EEPROM configuration.
+. In a terminal on your Raspberry Pi, run `sudo raspi-config` to open the Raspberry Pi Configuration CLI.
+. Under `Advanced Options` > `Boot Order`, choose `NVMe/USB boot`. Then, exit `raspi-config` with `Finish` or the *Escape* key.
+. Reboot your Raspberry Pi with `sudo reboot`.
+
+For more information, see xref:../computers/raspberry-pi.adoc#nvme-ssd-boot[NVMe boot].
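+
+As an alternative to the interactive `raspi-config` steps above, you can edit the bootloader configuration directly. The `BOOT_ORDER` value `0xf416` tries NVMe first, then SD card, then USB; this value is taken from the Raspberry Pi bootloader documentation, so verify it against the documentation for your firmware version:
+
```console
$ sudo rpi-eeprom-config --edit
```
+
+In the editor that opens, set `BOOT_ORDER=0xf416`, save, and reboot for the change to take effect.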
+
+== Enable PCIe Gen 3
+
+WARNING: The Raspberry Pi 5 is not certified for Gen 3.0 speeds. PCIe Gen 3.0 connections may be unstable.
+
+To enable PCIe Gen 3 speeds, follow the instructions at xref:../computers/raspberry-pi.adoc#pcie-gen-3-0[enable PCIe Gen 3.0].
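+
+In practice, this amounts to adding one line to `/boot/firmware/config.txt` and rebooting; see the linked page for details and caveats:
+
```ini
dtparam=pciex1_gen=3
```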
+
+== Schematics
+
+.Schematics for the Raspberry Pi M.2 HAT+
+image::images/m2-hat-plus-schematics.png[width="80%"]
+
+Schematics are also available as a https://datasheets.raspberrypi.com/m2-hat-plus/raspberry-pi-m2-hat-plus-schematics.pdf[PDF].
+
+== Product brief
+
+For more information about the M.2 HAT+, including mechanical specifications and operating environment limitations, see the https://datasheets.raspberrypi.com/m2-hat-plus/raspberry-pi-m2-hat-plus-product-brief.pdf[product brief].
diff --git a/documentation/asciidoc/accessories/m2-hat-plus/images/m2-hat-plus-installation-01.png b/documentation/asciidoc/accessories/m2-hat-plus/images/m2-hat-plus-installation-01.png
new file mode 100644
index 0000000000..89eda454cd
Binary files /dev/null and b/documentation/asciidoc/accessories/m2-hat-plus/images/m2-hat-plus-installation-01.png differ
diff --git a/documentation/asciidoc/accessories/m2-hat-plus/images/m2-hat-plus-installation-02.png b/documentation/asciidoc/accessories/m2-hat-plus/images/m2-hat-plus-installation-02.png
new file mode 100644
index 0000000000..b11d07a459
Binary files /dev/null and b/documentation/asciidoc/accessories/m2-hat-plus/images/m2-hat-plus-installation-02.png differ
diff --git a/documentation/asciidoc/accessories/m2-hat-plus/images/m2-hat-plus-installation-03.png b/documentation/asciidoc/accessories/m2-hat-plus/images/m2-hat-plus-installation-03.png
new file mode 100644
index 0000000000..c11a504ee0
Binary files /dev/null and b/documentation/asciidoc/accessories/m2-hat-plus/images/m2-hat-plus-installation-03.png differ
diff --git a/documentation/asciidoc/accessories/m2-hat-plus/images/m2-hat-plus-installation-04.png b/documentation/asciidoc/accessories/m2-hat-plus/images/m2-hat-plus-installation-04.png
new file mode 100644
index 0000000000..ae6e321dce
Binary files /dev/null and b/documentation/asciidoc/accessories/m2-hat-plus/images/m2-hat-plus-installation-04.png differ
diff --git a/documentation/asciidoc/accessories/m2-hat-plus/images/m2-hat-plus-installation-05.png b/documentation/asciidoc/accessories/m2-hat-plus/images/m2-hat-plus-installation-05.png
new file mode 100644
index 0000000000..0a93df849d
Binary files /dev/null and b/documentation/asciidoc/accessories/m2-hat-plus/images/m2-hat-plus-installation-05.png differ
diff --git a/documentation/asciidoc/accessories/m2-hat-plus/images/m2-hat-plus-installation-06.png b/documentation/asciidoc/accessories/m2-hat-plus/images/m2-hat-plus-installation-06.png
new file mode 100644
index 0000000000..209ec6cbcd
Binary files /dev/null and b/documentation/asciidoc/accessories/m2-hat-plus/images/m2-hat-plus-installation-06.png differ
diff --git a/documentation/asciidoc/accessories/m2-hat-plus/images/m2-hat-plus-installation-07.png b/documentation/asciidoc/accessories/m2-hat-plus/images/m2-hat-plus-installation-07.png
new file mode 100644
index 0000000000..238b75df86
Binary files /dev/null and b/documentation/asciidoc/accessories/m2-hat-plus/images/m2-hat-plus-installation-07.png differ
diff --git a/documentation/asciidoc/accessories/m2-hat-plus/images/m2-hat-plus-schematics.png b/documentation/asciidoc/accessories/m2-hat-plus/images/m2-hat-plus-schematics.png
new file mode 100644
index 0000000000..5d0688fbd5
Binary files /dev/null and b/documentation/asciidoc/accessories/m2-hat-plus/images/m2-hat-plus-schematics.png differ
diff --git a/documentation/asciidoc/accessories/m2-hat-plus/images/m2-hat-plus.jpg b/documentation/asciidoc/accessories/m2-hat-plus/images/m2-hat-plus.jpg
new file mode 100644
index 0000000000..30a05a30b2
Binary files /dev/null and b/documentation/asciidoc/accessories/m2-hat-plus/images/m2-hat-plus.jpg differ
diff --git a/documentation/asciidoc/accessories/monitor.adoc b/documentation/asciidoc/accessories/monitor.adoc
new file mode 100644
index 0000000000..b100eb439b
--- /dev/null
+++ b/documentation/asciidoc/accessories/monitor.adoc
@@ -0,0 +1 @@
+include::monitor/monitor_intro.adoc[]
diff --git a/documentation/asciidoc/accessories/monitor/images/drill-hole-template.pdf b/documentation/asciidoc/accessories/monitor/images/drill-hole-template.pdf
new file mode 100644
index 0000000000..1d77318e3c
Binary files /dev/null and b/documentation/asciidoc/accessories/monitor/images/drill-hole-template.pdf differ
diff --git a/documentation/asciidoc/accessories/monitor/images/drill-hole-template.png b/documentation/asciidoc/accessories/monitor/images/drill-hole-template.png
new file mode 100644
index 0000000000..a1553774a8
Binary files /dev/null and b/documentation/asciidoc/accessories/monitor/images/drill-hole-template.png differ
diff --git a/documentation/asciidoc/accessories/monitor/images/mechanical-drawing.pdf b/documentation/asciidoc/accessories/monitor/images/mechanical-drawing.pdf
new file mode 100644
index 0000000000..d74544841f
Binary files /dev/null and b/documentation/asciidoc/accessories/monitor/images/mechanical-drawing.pdf differ
diff --git a/documentation/asciidoc/accessories/monitor/images/mechanical-drawing.png b/documentation/asciidoc/accessories/monitor/images/mechanical-drawing.png
new file mode 100644
index 0000000000..41faee30b5
Binary files /dev/null and b/documentation/asciidoc/accessories/monitor/images/mechanical-drawing.png differ
diff --git a/documentation/asciidoc/accessories/monitor/images/monitor-hero.png b/documentation/asciidoc/accessories/monitor/images/monitor-hero.png
new file mode 100644
index 0000000000..dbaa5f56dc
Binary files /dev/null and b/documentation/asciidoc/accessories/monitor/images/monitor-hero.png differ
diff --git a/documentation/asciidoc/accessories/monitor/images/no-hdmi.png b/documentation/asciidoc/accessories/monitor/images/no-hdmi.png
new file mode 100644
index 0000000000..408ad418ba
Binary files /dev/null and b/documentation/asciidoc/accessories/monitor/images/no-hdmi.png differ
diff --git a/documentation/asciidoc/accessories/monitor/images/no-valid-hdmi-signal-standby.png b/documentation/asciidoc/accessories/monitor/images/no-valid-hdmi-signal-standby.png
new file mode 100644
index 0000000000..2c03121189
Binary files /dev/null and b/documentation/asciidoc/accessories/monitor/images/no-valid-hdmi-signal-standby.png differ
diff --git a/documentation/asciidoc/accessories/monitor/images/not-supported-resolution.png b/documentation/asciidoc/accessories/monitor/images/not-supported-resolution.png
new file mode 100644
index 0000000000..5334217389
Binary files /dev/null and b/documentation/asciidoc/accessories/monitor/images/not-supported-resolution.png differ
diff --git a/documentation/asciidoc/accessories/monitor/images/power-saving-mode.png b/documentation/asciidoc/accessories/monitor/images/power-saving-mode.png
new file mode 100644
index 0000000000..106694ee18
Binary files /dev/null and b/documentation/asciidoc/accessories/monitor/images/power-saving-mode.png differ
diff --git a/documentation/asciidoc/accessories/monitor/monitor_intro.adoc b/documentation/asciidoc/accessories/monitor/monitor_intro.adoc
new file mode 100644
index 0000000000..ae747671ac
--- /dev/null
+++ b/documentation/asciidoc/accessories/monitor/monitor_intro.adoc
@@ -0,0 +1,119 @@
+== Raspberry Pi Monitor
+
+The https://www.raspberrypi.com/products/raspberry-pi-monitor/[Raspberry Pi Monitor] is a 15.6" 1920 × 1080 IPS LCD display that connects to a computer using an HDMI cable. The Monitor also requires a USB-C power source. For full brightness and volume range, this must be a USB-PD source capable of supplying at least 1.5A of current.
+
+.The Raspberry Pi Monitor
+image::images/monitor-hero.png[The Raspberry Pi Monitor, width="100%"]
+
+The Monitor is compatible with all models of Raspberry Pi that support HDMI output.
+
+=== Controls
+
+The back of the Monitor includes the following controls:
+
+* a button that enters and exits Standby mode (indicated by the ⏻ (power) symbol)
+* buttons that increase and decrease display brightness (indicated by the 🔆 (sun) symbol)
+* buttons that increase and decrease speaker volume (indicated by the 🔈 (speaker) symbol)
+
+=== On-screen display messages
+
+The on-screen display (OSD) may show the following messages:
+
+[cols="1a,6"]
+|===
+| Message | Description
+
+| image::images/no-hdmi.png[No HDMI signal detected]
+| No HDMI signal detected.
+
+| image::images/no-valid-hdmi-signal-standby.png[Standby mode]
+| The Monitor will soon enter Standby mode to conserve power.
+
+| image::images/not-supported-resolution.png[Unsupported resolution]
+| The output display resolution of the connected device is not supported.
+
+| image::images/power-saving-mode.png[Power saving mode]
+| The Monitor is operating in Power Saving mode, with reduced brightness and volume, because it is not connected to a power supply capable of delivering at least 1.5A of current.
+|===
+
+Additionally, the OSD shows information about display brightness changes using the 🔆 (sun) symbol, and speaker volume level changes using the 🔈 (speaker) symbol. Both brightness and volume use a scale that ranges from 0 to 100.
+
+TIP: If you attempt to exit Standby mode when the display cannot detect an HDMI signal, the red LED beneath the Standby button will briefly light, but the display will remain in Standby mode.
+
+=== Position the Monitor
+
+Use the following approaches to position the Monitor:
+
+* Angle the Monitor on the integrated stand.
+* Mount the Monitor on an arm or stand using the four VESA mount holes on the back of the red rear plastic housing.
++
+IMPORTANT: Use spacers to ensure adequate space for display and power cable egress.
+* Flip the integrated stand fully upwards, towards the top of the Monitor. Use the drill hole template to create two mounting points spaced 55mm apart. Hang the Monitor using the slots on the back of the integrated stand.
++
+.Drill hole template
+image::images/drill-hole-template.png[Drill hole template, width="40%"]
+
+=== Power the Monitor
+
+The Raspberry Pi Monitor draws power from a 5V https://en.wikipedia.org/wiki/USB_hardware#USB_Power_Delivery[USB Power Delivery] (USB-PD) power source. Many USB-C power supplies, including the official power supplies for the Raspberry Pi 4 and Raspberry Pi 5, support this standard.
+
+When using a power source that provides at least 1.5A of current over USB-PD, the Monitor operates in **Full Power mode**. In Full Power mode, you can use the full range (0-100%) of display brightness and speaker volume.
+
+When using a power source that does _not_ supply at least 1.5A of current over USB-PD (including all USB-A power sources), the Monitor operates in **Power Saving mode**. Power Saving mode limits the maximum display brightness and the maximum speaker volume to ensure reliable operation. In Power Saving mode, you can use a limited range (0-50%) of display brightness and a limited range (0-60%) of speaker volume. When powered from a Raspberry Pi, the Monitor operates in Power Saving mode, since Raspberry Pi devices cannot provide 1.5A of current over a USB-A connection.
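As a rough back-of-the-envelope check (illustrative only, not from the official specification), the 1.5A requirement at 5V gives the Monitor a 7.5W power budget, which sits comfortably above the peak consumption figure listed in the specification section below:

```python
# Rough power-budget check for Full Power mode, using figures from this page.
VOLTAGE_V = 5.0      # USB-PD supply voltage
MIN_CURRENT_A = 1.5  # minimum current required for Full Power mode
PEAK_DRAW_W = 6.5    # stated peak power consumption during use

budget_w = VOLTAGE_V * MIN_CURRENT_A
headroom_w = budget_w - PEAK_DRAW_W
print(f"budget: {budget_w} W, headroom: {headroom_w:.1f} W")
```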
+
+To switch from Power Saving mode to Full Power mode, press and hold the *increase brightness* button for 3 seconds.
+
+[TIP]
+====
+If the Monitor flashes on and off, your USB power supply cannot provide enough current to power the Monitor. This can happen if you power the Monitor from a Raspberry Pi 5 or Raspberry Pi 500 that is itself powered by a 5V/3A power supply. Try the following fixes to stop the Monitor from flashing on and off:
+
+* reduce the display brightness and volume (you may have to connect your Monitor to another power supply to access the settings)
+* switch to a different power source or cable
+
+====
+
+=== Specification
+
+Diagonal: 15.6"
+
+Resolution: 1920 × 1080
+
+Type: IPS LCD
+
+Colour gamut: 45%
+
+Contrast: 800:1
+
+Brightness: 250cd/m^2^
+
+Screen coating: Anti-glare 3H hardness
+
+Display area: 344 × 193mm
+
+Dimensions: 237 × 360 × 20mm
+
+Weight: 850g
+
+Supported resolutions:
+
+* 1920 × 1080p @ 50/60Hz
+* 1280 × 720p @ 50/60Hz
+* 720 × 576p @ 50/60Hz
+* 720 × 480p @ 50/60Hz
+* 640 × 480p @ 50/60Hz
+
+Input: HDMI 1.4; supports DDC-CI
+
+Power input: USB-C; requires 1.5A over USB-PD at 5V for full brightness and volume range
+
+Power consumption: 4.5-6.5W during use; < 0.1W at idle
+
+Speakers: 2 × 1.2W (stereo)
+
+Ports: 3.5mm audio jack
+
+=== Mechanical drawing
+
+.Mechanical Drawing
+image::images/mechanical-drawing.png[Mechanical drawing, width="80%"]
diff --git a/documentation/asciidoc/accessories/sd-cards.adoc b/documentation/asciidoc/accessories/sd-cards.adoc
new file mode 100644
index 0000000000..ffdb0161ae
--- /dev/null
+++ b/documentation/asciidoc/accessories/sd-cards.adoc
@@ -0,0 +1 @@
+include::sd-cards/about.adoc[]
diff --git a/documentation/asciidoc/accessories/sd-cards/about.adoc b/documentation/asciidoc/accessories/sd-cards/about.adoc
new file mode 100644
index 0000000000..1d8f41170c
--- /dev/null
+++ b/documentation/asciidoc/accessories/sd-cards/about.adoc
@@ -0,0 +1,37 @@
+== About
+
+.A Raspberry Pi SD Card inserted into a Raspberry Pi 5
+image::images/sd-hero.jpg[width="80%"]
+
+SD card quality is a critical factor in determining the overall user experience for a Raspberry Pi. Slow bus speeds and lack of command queueing can reduce the performance of even the most powerful Raspberry Pi models.
+
+Raspberry Pi's official microSD cards support DDR50 and SDR104 bus speeds. Additionally, Raspberry Pi SD cards support the command queueing (CQ) extension, which permits some pipelining of random read operations, ensuring optimal performance.
+
+You can even buy Raspberry Pi SD cards pre-programmed with the latest version of Raspberry Pi OS.
+
+Raspberry Pi SD cards are available in the following sizes:
+
+* 32GB
+* 64GB
+* 128GB
+
+== Specifications
+
+.A 128GB Raspberry Pi SD Card
+image::images/sd-cards.png[width="80%"]
+
+Raspberry Pi SD cards conform to version 6.1 of the SD specification.
+
+Raspberry Pi SD cards use the microSDHC/microSDXC form factor.
+
+Raspberry Pi SD cards have the following Speed Class ratings: C10, U3, V30, A2.
+
+The following table describes random 4KB read and write performance of Raspberry Pi SD cards, by Raspberry Pi model:
+
+|===
+| Raspberry Pi Model | Interface | Read Speed | Write Speed
+
+| 4 | DDR50 | 3,200 IOPS | 1,200 IOPS
+| 5 | SDR104 | 5,000 IOPS | 2,000 IOPS
+|===
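To put the IOPS figures in the table above into context, you can convert them to approximate throughput (IOPS × block size). This is a rough illustration, not an official specification:

```python
# Convert 4 KB random-I/O IOPS figures into approximate MB/s throughput.
BLOCK_BYTES = 4 * 1024  # 4 KB blocks, as used in the table above

def iops_to_mb_per_s(iops: int) -> float:
    """Approximate throughput in decimal megabytes per second."""
    return iops * BLOCK_BYTES / 1_000_000

# Figures from the table: (model, read IOPS, write IOPS)
for model, read, write in [("Pi 4", 3200, 1200), ("Pi 5", 5000, 2000)]:
    print(f"{model}: ~{iops_to_mb_per_s(read):.1f} MB/s read, "
          f"~{iops_to_mb_per_s(write):.1f} MB/s write")
```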
+
diff --git a/documentation/asciidoc/accessories/sd-cards/images/sd-cards.png b/documentation/asciidoc/accessories/sd-cards/images/sd-cards.png
new file mode 100644
index 0000000000..9651ba9594
Binary files /dev/null and b/documentation/asciidoc/accessories/sd-cards/images/sd-cards.png differ
diff --git a/documentation/asciidoc/accessories/sd-cards/images/sd-hero.jpg b/documentation/asciidoc/accessories/sd-cards/images/sd-hero.jpg
new file mode 100644
index 0000000000..7597450399
Binary files /dev/null and b/documentation/asciidoc/accessories/sd-cards/images/sd-hero.jpg differ
diff --git a/documentation/asciidoc/accessories/sense-hat.adoc b/documentation/asciidoc/accessories/sense-hat.adoc
index 1f02946767..c0db67f2bb 100644
--- a/documentation/asciidoc/accessories/sense-hat.adoc
+++ b/documentation/asciidoc/accessories/sense-hat.adoc
@@ -1,6 +1,5 @@
-
include::sense-hat/intro.adoc[]
-include::sense-hat/software.adoc[]
-
include::sense-hat/hardware.adoc[]
+
+include::sense-hat/software.adoc[]
diff --git a/documentation/asciidoc/accessories/sense-hat/hardware.adoc b/documentation/asciidoc/accessories/sense-hat/hardware.adoc
index b0ef39b129..735ce713aa 100644
--- a/documentation/asciidoc/accessories/sense-hat/hardware.adoc
+++ b/documentation/asciidoc/accessories/sense-hat/hardware.adoc
@@ -1,4 +1,4 @@
-== Sense HAT hardware
+== Features
The Sense HAT has an 8×8 RGB LED matrix and a five-button joystick, and includes the following sensors:
@@ -10,149 +10,16 @@ The Sense HAT has an 8×8 RGB LED matrix and a five-button joystick, and include
* Humidity
* Colour and brightness
-Schematics and mechanical drawings for the Sense HAT are available for download.
+Schematics and mechanical drawings for the Sense HAT and the Sense HAT V2 are available for download.
-* https://datasheets.raspberrypi.com/sense-hat/sense-hat-schematics.pdf[Sense HAT schematics].
+* https://datasheets.raspberrypi.com/sense-hat/sense-hat-schematics.pdf[Sense HAT V1 schematics].
+* https://datasheets.raspberrypi.com/sense-hat/sense-hat-v2-schematics.pdf[Sense HAT V2 schematics].
* https://datasheets.raspberrypi.com/sense-hat/sense-hat-mechanical-drawing.pdf[Sense HAT mechanical drawings].
=== LED matrix
-The LED matrix is an RGB565 https://www.kernel.org/doc/Documentation/fb/framebuffer.txt[framebuffer] with the id "RPi-Sense FB". The appropriate device node can be written to as a standard file or mmap-ed. The included 'snake' example shows how to access the framebuffer.
+The LED matrix is an RGB565 https://www.kernel.org/doc/Documentation/fb/framebuffer.txt[framebuffer] with the id `RPi-Sense FB`. The appropriate device node can be written to as a standard file or mmap-ed. The included snake example shows how to access the framebuffer.
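For example, a pixel value can be packed into RGB565 before being written to the framebuffer device node. The sketch below is illustrative: the framebuffer path is an assumption, and in practice you should locate the device whose name is `RPi-Sense FB` (for example by reading `/sys/class/graphics/fb*/name`):

```python
import struct

def rgb888_to_rgb565(r: int, g: int, b: int) -> int:
    """Pack 8-bit-per-channel RGB into the 16-bit RGB565 format."""
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

def set_pixel(fb_path: str, x: int, y: int, r: int, g: int, b: int) -> None:
    """Write one pixel to the Sense HAT framebuffer (assumed path).

    The matrix is 8x8 and each pixel occupies 2 bytes in RGB565.
    """
    offset = (y * 8 + x) * 2
    with open(fb_path, "r+b") as fb:
        fb.seek(offset)
        fb.write(struct.pack("<H", rgb888_to_rgb565(r, g, b)))
```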
=== Joystick
-The joystick comes up as an input event device named "Raspberry Pi Sense HAT Joystick", mapped to the arrow keys and `Enter`. It should be supported by any library which is capable of handling inputs, or directly through the https://www.kernel.org/doc/Documentation/input/input.txt[evdev interface]. Suitable libraries include SDL, http://www.pygame.org/docs/[pygame] and https://python-evdev.readthedocs.org/en/latest/[python-evdev]. The included 'snake' example shows how to access the joystick directly.
-
-== Hardware calibration
-
-Install the necessary software and run the calibration program as follows:
-
-[,bash]
-----
-$ sudo apt update
-$ sudo apt install octave -y
-$ cd
-$ cp /usr/share/librtimulib-utils/RTEllipsoidFit ./ -a
-$ cd RTEllipsoidFit
-$ RTIMULibCal
-----
-
-You will then see this menu:
-
-----
-Options are:
-
- m - calibrate magnetometer with min/max
- e - calibrate magnetometer with ellipsoid (do min/max first)
- a - calibrate accelerometers
- x - exit
-
-Enter option:
-----
-
-Press lowercase `m`. The following message will then show. Press any key to start.
-
-----
- Magnetometer min/max calibration
- --------------------------------
- Waggle the IMU chip around, ensuring that all six axes
- (+x, -x, +y, -y and +z, -z) go through their extrema.
- When all extrema have been achieved, enter 's' to save, 'r' to reset
- or 'x' to abort and discard the data.
-
- Press any key to start...
-----
-
-After it starts, you will see something similar to this scrolling up the screen:
-
-----
- Min x: 51.60 min y: 69.39 min z: 65.91
- Max x: 53.15 max y: 70.97 max z: 67.97
-----
-
-Focus on the two lines at the very bottom of the screen, as these are the most recently posted measurements from the program.
-
-Now, pick up the Raspberry Pi and Sense HAT and move it around in every possible way you can think of. It helps if you unplug all non-essential cables to avoid clutter.
-
-Try and get a complete circle in each of the pitch, roll and yaw axes. Take care not to accidentally eject the SD card while doing this. Spend a few minutes moving the Sense HAT, and stop when you find that the numbers are not changing anymore.
-
-Now press lowercase `s` then lowercase `x` to exit the program. If you run the `ls` command now, you'll see a new `RTIMULib.ini` file has been created.
-
-In addition to those steps, you can also do the ellipsoid fit by performing the steps above, but pressing `e` instead of `m`.
-
-When you're done, copy the resulting `RTIMULib.ini` to /etc/ and remove the local copy in `~/.config/sense_hat/`:
-
-[,bash]
-----
-$ rm ~/.config/sense_hat/RTIMULib.ini
-$ sudo cp RTIMULib.ini /etc
-----
-
-== Reading and writing EEPROM data
-
-Enable I2C0 and I2C1 by adding the following line to the xref:../computers/config_txt.adoc#what-is-config-txt[`/boot/firmware/config.txt`] file:
-
-----
- dtparam=i2c_vc=on
- dtparam=i2c_arm=on
-----
-
-Enter the following command to reboot:
-
-[,bash]
-----
- sudo systemctl reboot
-----
-
-Download and build the flash tool:
-
-[,bash]
-----
-$ git clone https://github.com/raspberrypi/hats.git
-$ cd hats/eepromutils
-$ make
-----
-
-NOTE: These steps may not work on Raspberry Pi 2 Model B Rev 1.0 and Raspberry Pi 3 Model B boards. The firmware will take control of I2C0, causing the ID pins to be configured as inputs.
-
-=== Reading
-
-EEPROM data can be read with the following command:
-
-[,bash]
-----
-$ sudo ./eepflash.sh -f=sense_read.eep -t=24c32 -r
-----
-
-=== Writing
-
-Download EEPROM settings and build the `.eep` binary:
-
-[,bash]
-----
-$ wget https://github.com/raspberrypi/rpi-sense/raw/master/eeprom/eeprom_settings.txt -O sense_eeprom.txt
- ./eepmake sense_eeprom.txt sense.eep /boot/firmware/overlays/rpi-sense-overlay.dtb
-----
-
-Disable write protection:
-
-[,bash]
-----
-$ i2cset -y -f 1 0x46 0xf3 1
-----
-
-Write the EEPROM data:
-
-[,bash]
-----
-$ sudo ./eepflash.sh -f=sense.eep -t=24c32 -w
-----
-
-Re-enable write protection:
-
-[,bash]
-----
- i2cset -y -f 1 0x46 0xf3 0
-----
-
-WARNING: This operation will not damage your Raspberry Pi or Sense Hat, but if an error occurs, the HAT may no longer be automatically detected. The steps above are provided for debugging purposes only.
+The joystick comes up as an input event device named `Raspberry Pi Sense HAT Joystick`, mapped to the arrow keys and **Enter**. It should be supported by any library which is capable of handling inputs, or directly through the https://www.kernel.org/doc/Documentation/input/input.txt[evdev interface]. Suitable libraries include SDL, http://www.pygame.org/docs/[pygame] and https://python-evdev.readthedocs.org/en/latest/[python-evdev]. The included `snake` example shows how to access the joystick directly.
diff --git a/documentation/asciidoc/accessories/sense-hat/images/Sense-HAT.jpg b/documentation/asciidoc/accessories/sense-hat/images/Sense-HAT.jpg
index ef74aa37a1..e1eebd815d 100644
Binary files a/documentation/asciidoc/accessories/sense-hat/images/Sense-HAT.jpg and b/documentation/asciidoc/accessories/sense-hat/images/Sense-HAT.jpg differ
diff --git a/documentation/asciidoc/accessories/sense-hat/intro.adoc b/documentation/asciidoc/accessories/sense-hat/intro.adoc
index ebf2c15c51..01f8a2425a 100644
--- a/documentation/asciidoc/accessories/sense-hat/intro.adoc
+++ b/documentation/asciidoc/accessories/sense-hat/intro.adoc
@@ -1,9 +1,9 @@
-== Introducing the Sense HAT
+== About
-The https://www.raspberrypi.com/products/sense-hat/[Raspberry Pi Sense HAT] is an add-on board that gives your Raspberry Pi an array of sensing capabilities. The on-board sensors allow you to monitor pressure, humidity, temperature, colour, orientation, and movement. The bright 8×8 RGB LED matrix allows you to visualise data from the sensors, and the five-button joystick lets users interact with your projects.
+The https://www.raspberrypi.com/products/sense-hat/[Raspberry Pi Sense HAT] is an add-on board that gives your Raspberry Pi an array of sensing capabilities. The on-board sensors allow you to monitor pressure, humidity, temperature, colour, orientation, and movement. The 8×8 RGB LED matrix allows you to visualise data from the sensors. The five-button joystick lets users interact with your projects.
image::images/Sense-HAT.jpg[width="70%"]
-The Sense HAT was originally developed for use on the International Space Station, as part of the educational https://astro-pi.org/[Astro Pi] programme run by the https://raspberrypi.org[Raspberry Pi Foundation] in partnership with the https://www.esa.int/[European Space Agency]. It is well suited to many projects that require position, motion, orientation, or environmental sensing. The Sense HAT is powered by the Raspberry Pi computer to which it is connected.
+The Sense HAT was originally developed for use on the International Space Station as part of the educational https://astro-pi.org/[Astro Pi] programme run by the https://raspberrypi.org[Raspberry Pi Foundation] in partnership with the https://www.esa.int/[European Space Agency]. It can help with any project that requires position, motion, orientation, or environmental sensing.
-An officially supported xref:sense-hat.adoc#using-the-sense-hat-with-python[Python library] provides access to all of the on-board sensors, the LED matrix, and the joystick. The Sense HAT is compatible with any Raspberry Pi computer with a 40-pin GPIO header.
+An officially supported xref:sense-hat.adoc#use-the-sense-hat-with-python[Python library] provides access to the on-board sensors, LED matrix, and joystick. The Sense HAT is compatible with any Raspberry Pi device with a 40-pin GPIO header.
diff --git a/documentation/asciidoc/accessories/sense-hat/software.adoc b/documentation/asciidoc/accessories/sense-hat/software.adoc
index 32294cdae0..33261939a2 100644
--- a/documentation/asciidoc/accessories/sense-hat/software.adoc
+++ b/documentation/asciidoc/accessories/sense-hat/software.adoc
@@ -1,46 +1,191 @@
-== Installation
+== Install
-In order to work correctly, the Sense HAT requires an up-to-date kernel, I2C to be enabled, and a few libraries to get started.
+In order to work correctly, the Sense HAT requires:
-Ensure your APT package list is up-to-date:
+* an up-to-date kernel
+* https://en.wikipedia.org/wiki/I%C2%B2C[I2C] enabled on your Raspberry Pi
+* a few dependencies
+
+Complete the following steps to get your Raspberry Pi device ready to connect to the Sense HAT:
+
+. First, ensure that your Raspberry Pi runs the latest software. Run the following command to update:
++
+[source,console]
+----
+$ sudo apt update && sudo apt full-upgrade
+----
+
+. Next, install the `sense-hat` package, which will ensure the kernel is up to date, enable I2C, and install the necessary dependencies:
++
+[source,console]
+----
+$ sudo apt install sense-hat
+----
+
+. Finally, reboot your Raspberry Pi to enable I2C and load the new kernel, if it changed:
++
+[source,console]
+----
+$ sudo reboot
+----
+
+== Calibrate
+
+Install the necessary software and run the calibration program as follows:
+
+[source,console]
+----
+$ sudo apt update
+$ sudo apt install octave -y
+$ cd
+$ cp /usr/share/librtimulib-utils/RTEllipsoidFit ./ -a
+$ cd RTEllipsoidFit
+$ RTIMULibCal
+----
+
+The calibration program displays the following menu:
-[,bash]
----
- sudo apt update
+Options are:
+
+ m - calibrate magnetometer with min/max
+ e - calibrate magnetometer with ellipsoid (do min/max first)
+ a - calibrate accelerometers
+ x - exit
+
+Enter option:
+----
+
+Press lowercase `m`. The following message will then show. Press any key to start.
+
+----
+Magnetometer min/max calibration
+-------------------------------
+Waggle the IMU chip around, ensuring that all six axes
+(+x, -x, +y, -y and +z, -z) go through their extrema.
+When all extrema have been achieved, enter 's' to save, 'r' to reset
+or 'x' to abort and discard the data.
+
+Press any key to start...
----
-Next, install the sense-hat package, which will ensure the kernel is up to date, enable I2C, and install the necessary libraries and programs:
+After it starts, you should see output similar to the following scrolling up the screen:
-[,bash]
----
- sudo apt install sense-hat
+Min x: 51.60 min y: 69.39 min z: 65.91
+Max x: 53.15 max y: 70.97 max z: 67.97
----
-Finally, a reboot may be required if I2C was disabled or the kernel was not up-to-date prior to the install:
+Focus on the two lines at the very bottom of the screen, as these are the most recently posted measurements from the program.
+
+Now, pick up the Raspberry Pi and Sense HAT and move them around in every possible way you can think of. It helps if you unplug all non-essential cables to avoid clutter.
+
+Try and get a complete circle in each of the pitch, roll and yaw axes. Take care not to accidentally eject the SD card while doing this. Spend a few minutes moving the Sense HAT, and stop when you find that the numbers are not changing any more.
-[,bash]
+Now press lowercase `s` then lowercase `x` to exit the program. If you run the `ls` command now, you'll see a new `RTIMULib.ini` file has been created.
+
+In addition to those steps, you can also do the ellipsoid fit by performing the steps above, but pressing `e` instead of `m`.
+
+When you're done, copy the resulting `RTIMULib.ini` to `/etc/` and remove the local copy in `~/.config/sense_hat/`:
+
+[source,console]
----
- sudo reboot
+$ rm ~/.config/sense_hat/RTIMULib.ini
+$ sudo cp RTIMULib.ini /etc
----
== Getting started
After installation, example code can be found under `/usr/src/sense-hat/examples`.
-[.booklink, booktype="free", link=https://github.com/raspberrypipress/released-pdfs/raw/main/experiment-with-the-sense-hat.pdf, image=image::images/experiment-with-the-sense-hat.png[]]
-=== Further reading
-You can find more information on how to use the Sense HAT in the Raspberry Pi Press book https://github.com/raspberrypipress/released-pdfs/raw/main/experiment-with-the-sense-hat.pdf[Experiment with the Sense HAT]. Written by The Raspberry Pi Foundation's Education Team, it is part of the MagPi Essentials series published by Raspberry Pi Press. The book covers the background of the Astro Pi project, and walks you through how to make use of all the Sense HAT features using the xref:sense-hat.adoc#using-the-sense-hat-with-python[Python library].
-
-=== Using the Sense HAT with Python
+=== Use the Sense HAT with Python
`sense-hat` is the officially supported library for the Sense HAT; it provides access to all of the on-board sensors and the LED matrix.
Complete documentation for the library can be found at https://sense-hat.readthedocs.io/en/latest/[sense-hat.readthedocs.io].
-=== Using the Sense HAT with {cpp}
+=== Use the Sense HAT with C++
https://github.com/RPi-Distro/RTIMULib[RTIMULib] is a {cpp} and Python library that makes it easy to use 9-dof and 10-dof IMUs with embedded Linux systems. A pre-calibrated settings file is provided in `/etc/RTIMULib.ini`, which is also copied and used by `sense-hat`. The included examples look for `RTIMULib.ini` in the current working directory, so you may wish to copy the file there to get more accurate data.
The RTIMULibDrive11 example comes pre-compiled to help ensure everything works as intended. It can be launched by running `RTIMULibDrive11` and closed by pressing `Ctrl C`.
NOTE: The C/{cpp} examples can be compiled by running `make` in the appropriate directory.
+
+== Troubleshooting
+
+=== Read and write EEPROM data
+
+These steps are provided for debugging purposes only.
+
+NOTE: On Raspberry Pi 2 Model B Rev 1.0 and Raspberry Pi 3 Model B boards, these steps may not work. The firmware will take control of I2C0, causing the ID pins to be configured as inputs.
+
+Before you can read and write EEPROM data to and from the Sense HAT, you must complete the following steps:
+
+. Enable I2C0 and I2C1 by adding the following line to the xref:../computers/config_txt.adoc#what-is-config-txt[`/boot/firmware/config.txt`] file:
++
+[source,ini]
+----
+dtparam=i2c_vc=on
+dtparam=i2c_arm=on
+----
+
+. Run the following command to reboot:
++
+[source,console]
+----
+$ sudo reboot
+----
+
+. Download and build the flash tool:
++
+[source,console]
+----
+$ git clone https://github.com/raspberrypi/hats.git
+$ cd hats/eepromutils
+$ make
+----
+
+==== Read
+
+To read EEPROM data, run the following command:
+
+[source,console]
+----
+$ sudo ./eepflash.sh -f=sense_read.eep -t=24c32 -r
+----
+
+==== Write
+
+NOTE: This operation will not damage your Raspberry Pi or Sense HAT, but if an error occurs, your Raspberry Pi may fail to automatically detect the HAT.
+
+. First, download EEPROM settings and build the `.eep` binary:
++
+[source,console]
+----
+$ wget https://github.com/raspberrypi/rpi-sense/raw/master/eeprom/eeprom_settings.txt -O sense_eeprom.txt
+$ ./eepmake sense_eeprom.txt sense.eep /boot/firmware/overlays/rpi-sense-overlay.dtb
+----
+
+. Next, disable write protection:
++
+[source,console]
+----
+$ i2cset -y -f 1 0x46 0xf3 1
+----
+
+. Write the EEPROM data:
++
+[source,console]
+----
+$ sudo ./eepflash.sh -f=sense.eep -t=24c32 -w
+----
+
+. Finally, re-enable write protection:
++
+[source,console]
+----
+$ i2cset -y -f 1 0x46 0xf3 0
+----
+
diff --git a/documentation/asciidoc/accessories/ssd-kit.adoc b/documentation/asciidoc/accessories/ssd-kit.adoc
new file mode 100644
index 0000000000..2533220b5e
--- /dev/null
+++ b/documentation/asciidoc/accessories/ssd-kit.adoc
@@ -0,0 +1 @@
+include::ssd-kit/about.adoc[]
diff --git a/documentation/asciidoc/accessories/ssd-kit/about.adoc b/documentation/asciidoc/accessories/ssd-kit/about.adoc
new file mode 100644
index 0000000000..390aef6d3f
--- /dev/null
+++ b/documentation/asciidoc/accessories/ssd-kit/about.adoc
@@ -0,0 +1,13 @@
+== About
+
+.A 512GB Raspberry Pi SSD Kit
+image::images/ssd-kit.png[width="80%"]
+
+The Raspberry Pi SSD Kit bundles a xref:../accessories/m2-hat-plus.adoc[Raspberry Pi M.2 HAT+] with a xref:../accessories/ssds.adoc[Raspberry Pi SSD].
+
+The Raspberry Pi SSD Kit includes a 16mm stacking header, spacers, and screws to enable fitting on a Raspberry Pi 5 alongside a Raspberry Pi Active Cooler.
+
+== Install
+
+To install the Raspberry Pi SSD Kit, follow the xref:../accessories/m2-hat-plus.adoc#m2-hat-plus-installation[installation instructions for the Raspberry Pi M.2 HAT+].
diff --git a/documentation/asciidoc/accessories/ssd-kit/images/ssd-kit.png b/documentation/asciidoc/accessories/ssd-kit/images/ssd-kit.png
new file mode 100644
index 0000000000..9381c5ca12
Binary files /dev/null and b/documentation/asciidoc/accessories/ssd-kit/images/ssd-kit.png differ
diff --git a/documentation/asciidoc/accessories/ssds.adoc b/documentation/asciidoc/accessories/ssds.adoc
new file mode 100644
index 0000000000..3934f0db66
--- /dev/null
+++ b/documentation/asciidoc/accessories/ssds.adoc
@@ -0,0 +1 @@
+include::ssds/about.adoc[]
diff --git a/documentation/asciidoc/accessories/ssds/about.adoc b/documentation/asciidoc/accessories/ssds/about.adoc
new file mode 100644
index 0000000000..abccf00e9e
--- /dev/null
+++ b/documentation/asciidoc/accessories/ssds/about.adoc
@@ -0,0 +1,32 @@
+== About
+
+.A 512GB Raspberry Pi SSD
+image::images/ssd.png[width="80%"]
+
+SSD quality is a critical factor in determining the overall user experience for a Raspberry Pi.
+Raspberry Pi provides official SSDs that are tested to ensure compatibility with Raspberry Pi models and peripherals.
+
+Raspberry Pi SSDs are available in the following sizes:
+
+* 256GB
+* 512GB
+
+To use an SSD with your Raspberry Pi, you need a Raspberry Pi 5-compatible M.2 adapter, such as the xref:../accessories/m2-hat-plus.adoc[Raspberry Pi M.2 HAT+].
+
+== Specifications
+
+Raspberry Pi SSDs are PCIe Gen 3-compliant.
+
+Raspberry Pi SSDs use the NVMe 1.4 register interface and command set.
+
+Raspberry Pi SSDs use the M.2 2230 form factor.
+
+The following table describes random 4KB read and write performance of Raspberry Pi SSDs, by drive size:
+
+[cols="1,2,2"]
+|===
+| Size | Read Speed | Write Speed
+
+| 256GB | 40,000 IOPS | 70,000 IOPS
+| 512GB | 50,000 IOPS | 90,000 IOPS
+|===
diff --git a/documentation/asciidoc/accessories/ssds/images/ssd.png b/documentation/asciidoc/accessories/ssds/images/ssd.png
new file mode 100644
index 0000000000..25bbdc3a7f
Binary files /dev/null and b/documentation/asciidoc/accessories/ssds/images/ssd.png differ
diff --git a/documentation/asciidoc/accessories/touch-display-2.adoc b/documentation/asciidoc/accessories/touch-display-2.adoc
new file mode 100644
index 0000000000..982c35d56a
--- /dev/null
+++ b/documentation/asciidoc/accessories/touch-display-2.adoc
@@ -0,0 +1 @@
+include::touch-display-2/about.adoc[]
diff --git a/documentation/asciidoc/accessories/touch-display-2/about.adoc b/documentation/asciidoc/accessories/touch-display-2/about.adoc
new file mode 100644
index 0000000000..ed4014991b
--- /dev/null
+++ b/documentation/asciidoc/accessories/touch-display-2/about.adoc
@@ -0,0 +1,136 @@
+== About
+
+The https://www.raspberrypi.com/products/touch-display-2/[Raspberry Pi Touch Display 2] is a portrait orientation touchscreen LCD display designed for interactive projects like tablets, entertainment systems, and information dashboards.
+
+.The Raspberry Pi Touch Display 2
+image::images/touch-display-2-hero.jpg[width="80%"]
+
+The Touch Display 2 connects to a Raspberry Pi using a DSI connector and GPIO connector. Raspberry Pi OS provides touchscreen drivers with support for five-finger multitouch and an on-screen keyboard, providing full functionality without the need to connect a keyboard or mouse.
+
+== Specifications
+
+* 1280×720px resolution, 24-bit RGB display
+* 155×88mm active area
+* 7" diagonal
+* powered directly by the host Raspberry Pi, requiring no separate power supply
+* supports up to five points of simultaneous multi-touch
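From the resolution and active-area figures above you can estimate the panel's pixel density. This quick calculation is illustrative only, not an official specification:

```python
# Estimate the Touch Display 2's pixel density from the figures above.
MM_PER_INCH = 25.4

width_px, height_px = 1280, 720    # panel resolution
width_mm, height_mm = 155.0, 88.0  # active area

ppi_h = width_px / (width_mm / MM_PER_INCH)
ppi_v = height_px / (height_mm / MM_PER_INCH)
print(f"~{ppi_h:.0f} PPI horizontal, ~{ppi_v:.0f} PPI vertical")
```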
+
+The Touch Display 2 is compatible with all models of Raspberry Pi from the Raspberry Pi 1 Model B+ onwards, except the Zero series and the Keyboard series, which lack a DSI connector.
+
+The Touch Display 2 box contains the following parts (shown left to right, top to bottom in the image below):
+
+* Touch Display 2
+* eight M2.5 screws
+* 15-way to 15-way FFC
+* 22-way to 15-way FFC for Raspberry Pi 5
+* GPIO connector cable
+
+.Parts included in the Touch Display 2 box
+image::images/touch-display-2-whats-in-the-booooox.jpg["Parts included in the Touch Display 2 box", width="80%"]
+
+== Install
+
+.A Raspberry Pi 5 connected and mounted to the Touch Display 2
+image::images/touch-display-2-installation-diagram.png["A Raspberry Pi 5 connected and mounted to the Touch Display 2", width="80%"]
+
+To connect a Touch Display 2 to a Raspberry Pi, use a Flat Flexible Cable (FFC) and a GPIO connector. The FFC you need depends on your Raspberry Pi model:
+
+* for Raspberry Pi 5, use the included 22-way to 15-way FFC
+* for any other Raspberry Pi model, use the included 15-way to 15-way FFC
+
+Once you have determined the correct FFC for your Raspberry Pi model, complete the following steps to connect your Touch Display 2 to your Raspberry Pi:
+
+. Disconnect your Raspberry Pi from power.
+. Lift the retaining clips on either side of the FFC connector on the Touch Display 2.
+. Insert one 15-way end of your FFC into the Touch Display 2 FFC connector, with the metal contacts facing upwards, away from the Touch Display 2.
++
+TIP: If you use the 22-way to 15-way FFC, the 22-way end is the _smaller_ end of the cable. Insert the _larger_ end of the cable into the Touch Display 2.
+. While holding the FFC firmly in place, simultaneously push both retaining clips down on the FFC connector of the Touch Display 2.
+. Lift the retaining clips on either side of the DSI connector of your Raspberry Pi. This port should be marked with some variation of the term `DISPLAY` or `DISP`. If your Raspberry Pi has multiple DSI connectors, prefer the port labelled `1`.
+. Insert the other end of your FFC into the Raspberry Pi DSI connector, with the metal contacts facing towards the Ethernet and USB-A ports.
+. While holding the FFC firmly in place, simultaneously push both retaining clips down on the DSI connector of the Raspberry Pi.
+. Plug the GPIO connector cable into the port marked `J1` on the Touch Display 2.
+. Connect the other (three-pin) end of the GPIO connector cable to pins 2, 4, and 6 of the xref:../computers/raspberry-pi.adoc#gpio[Raspberry Pi's GPIO]. Connect the red cable (5V power) to pin 2, and the black cable (ground) to pin 6. Viewed from above, with the Ethernet and USB-A ports facing down, these pins are located at the top right of the board, with pin 2 in the top right-most position.
++
+.The GPIO connection to the Touch Display 2
+image::images/touch-display-2-gpio-connection.png[The GPIO connection to the Touch Display 2, width="40%"]
++
+TIP: If pin 6 isn't available, you can use any other open `GND` pin to connect the black wire. If pin 2 isn't available, you can use any other 5V pin to connect the red wire, such as pin 4.
+. Optionally, use the included M2.5 screws to mount your Raspberry Pi to the back of the Touch Display 2.
+.. Align the four corner stand-offs of your Raspberry Pi with the four mount points that surround the FFC connector and `J1` port on the back of the Touch Display 2, taking special care not to pinch the FFC.
+.. Insert the screws into the four corner stand-offs and tighten until your Raspberry Pi is secure.
+. Reconnect your Raspberry Pi to power. It may take up to one minute to initialise the Touch Display 2 connection and begin displaying to the screen.
+
+=== Use an on-screen keyboard
+
+Raspberry Pi OS _Bookworm_ and later include the Squeekboard on-screen keyboard by default. When a touch display is attached, the on-screen keyboard automatically appears when text entry is possible, and hides when it is not.
+
+For applications which do not support text entry detection, use the keyboard icon at the right end of the taskbar to manually show and hide the keyboard.
+
+You can also permanently show or hide the on-screen keyboard in the Display tab of Raspberry Pi Configuration or the `Display` section of `raspi-config`.
+
+TIP: In Raspberry Pi OS releases prior to _Bookworm_, use `matchbox-keyboard` instead. If you use the Wayfire desktop compositor, use `wvkbd` instead.
+
+=== Change screen orientation
+
+If you want to physically rotate the display, or mount it in a specific position, select **Screen Configuration** from the **Preferences** menu. Right-click on the touch display rectangle (likely DSI-1) in the layout editor, select **Orientation**, then pick the best option to fit your needs.
+
+==== Rotate screen without a desktop
+
+To set the screen orientation on a device that lacks a desktop environment, edit the `/boot/firmware/cmdline.txt` configuration file to pass an orientation to the system. Add the following entry to the end of `cmdline.txt`:
+
+[source,ini]
+----
+video=DSI-1:720x1280@60,rotate=<rotation_value>
+----
+
+Replace the `<rotation_value>` placeholder with one of the following values, which correspond to the degrees of rotation relative to the default orientation of your display:
+
+* `0`
+* `90`
+* `180`
+* `270`
+
+For example, a rotation value of `90` rotates the display 90 degrees to the right. `180` rotates the display 180 degrees, or upside-down.
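Putting this together, a complete entry rotating the display a quarter-turn clockwise would read as follows (remember that `cmdline.txt` must remain a single line; append the entry after the existing parameters):

[source,ini]
----
video=DSI-1:720x1280@60,rotate=90
----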
+
+NOTE: It is not possible to rotate the DSI display separately from the HDMI display with `cmdline.txt`. When you use DSI and HDMI simultaneously, they share the same rotation value.
+
+==== Touch Display 2 device tree option reference
+
+The `vc4-kms-dsi-ili9881-7inch` overlay supports the following options:
+
+|===
+| DT parameter | Action
+
+| `sizex`
+| Sets X resolution (default 720)
+
+| `sizey`
+| Sets Y resolution (default 1280)
+
+| `invx`
+| Invert X coordinates
+
+| `invy`
+| Invert Y coordinates
+
+| `swapxy`
+| Swap X and Y coordinates
+
+| `disable_touch`
+| Disables touch input entirely
+|===
+
+To specify these options, add them, separated by commas, to your `dtoverlay` line in `/boot/firmware/config.txt`. Boolean values default to true when present, but you can set them to false using the suffix `=0`. Integer values require a value, e.g. `sizey=240`. For instance, to set the X resolution to 400 pixels and invert both X and Y coordinates, use the following line:
+
+[source,ini]
+----
+dtoverlay=vc4-kms-dsi-ili9881-7inch,sizex=400,invx,invy
+----
+
+=== Install on Compute Module-based devices
+
+All Raspberry Pi SBCs auto-detect the official Touch Displays, because the circuitry connected to the DSI connector on the Raspberry Pi board is fixed; this auto-detection ensures that the correct Device Tree entries are passed to the kernel. However, Compute Modules are intended for industrial applications where the integrator can use any and all GPIOs and interfaces for whatever purposes they require. Auto-detection is therefore not feasible, and is disabled on Compute Module devices. Instead, the Device Tree fragments required to set up the display must be loaded via some other mechanism: a `dtoverlay` entry in `config.txt`, a custom base Device Tree file, or, if present, a HAT EEPROM.
+
+Creating a custom base Device Tree file is beyond the scope of this documentation; however, it is simple to add an appropriate Device Tree entry via `config.txt`. See xref:../computers/compute-module.adoc#attaching-the-touch-display-2-lcd-panel[this page] for configuration details.
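For example, a minimal `config.txt` entry for this panel uses the overlay described above (a sketch; your integration may also need the parameters listed in the option reference):

[source,ini]
----
dtoverlay=vc4-kms-dsi-ili9881-7inch
----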
+
diff --git a/documentation/asciidoc/accessories/touch-display-2/images/touch-display-2-gpio-connection.png b/documentation/asciidoc/accessories/touch-display-2/images/touch-display-2-gpio-connection.png
new file mode 100644
index 0000000000..41e59bc42c
Binary files /dev/null and b/documentation/asciidoc/accessories/touch-display-2/images/touch-display-2-gpio-connection.png differ
diff --git a/documentation/asciidoc/accessories/touch-display-2/images/touch-display-2-hero.jpg b/documentation/asciidoc/accessories/touch-display-2/images/touch-display-2-hero.jpg
new file mode 100644
index 0000000000..45779c6e24
Binary files /dev/null and b/documentation/asciidoc/accessories/touch-display-2/images/touch-display-2-hero.jpg differ
diff --git a/documentation/asciidoc/accessories/touch-display-2/images/touch-display-2-installation-diagram.png b/documentation/asciidoc/accessories/touch-display-2/images/touch-display-2-installation-diagram.png
new file mode 100644
index 0000000000..f3167f5e69
Binary files /dev/null and b/documentation/asciidoc/accessories/touch-display-2/images/touch-display-2-installation-diagram.png differ
diff --git a/documentation/asciidoc/accessories/touch-display-2/images/touch-display-2-whats-in-the-booooox.jpg b/documentation/asciidoc/accessories/touch-display-2/images/touch-display-2-whats-in-the-booooox.jpg
new file mode 100644
index 0000000000..e28fd789c4
Binary files /dev/null and b/documentation/asciidoc/accessories/touch-display-2/images/touch-display-2-whats-in-the-booooox.jpg differ
diff --git a/documentation/asciidoc/accessories/tv-hat.adoc b/documentation/asciidoc/accessories/tv-hat.adoc
index b3724aff54..be04ece4cb 100644
--- a/documentation/asciidoc/accessories/tv-hat.adoc
+++ b/documentation/asciidoc/accessories/tv-hat.adoc
@@ -1 +1 @@
-include::tv-hat/about-tv-hat.adoc[]
\ No newline at end of file
+include::tv-hat/about-tv-hat.adoc[]
diff --git a/documentation/asciidoc/accessories/tv-hat/about-tv-hat.adoc b/documentation/asciidoc/accessories/tv-hat/about-tv-hat.adoc
index f642cb1e7e..e1cb7efa60 100644
--- a/documentation/asciidoc/accessories/tv-hat/about-tv-hat.adoc
+++ b/documentation/asciidoc/accessories/tv-hat/about-tv-hat.adoc
@@ -1,4 +1,5 @@
-== About the TV HAT
+[[tv-hat]]
+== About
.The Raspberry Pi TV HAT
image::images/tv-hat.jpg[width="80%"]
@@ -23,7 +24,8 @@ Digital Video Broadcasting – Terrestrial (DVB-T) is the DVB European-based con
.DTT system implemented or adopted (Source: DVB/EBU/BNE DTT Deployment Database, March 2023)
image::images/dvbt-map.png[width="80%"]
-== Setup Instructions
+[[tv-hat-installation]]
+== Install
Follow our xref:../computers/getting-started.adoc[getting started] documentation and set up the Raspberry Pi with the newest version of Raspberry Pi OS.
@@ -33,7 +35,7 @@ The software we recommend to decode the streams (known as multiplexes, or muxes
Boot your Raspberry Pi and then go ahead open a terminal window, and run the following two commands to install the `tvheadend` software:
-[source, bash]
+[source,console]
----
$ sudo apt update
$ sudo apt install tvheadend
@@ -55,7 +57,7 @@ NOTE: Your local transmitter can be found using the https://www.freeview.co.uk/h
When you click *Save & Next*, the software will start scanning for the selected mux, and will show a progress bar. After about two minutes, you should see something like:
-[source, bash]
+[source,console]
----
Found muxes: 8
Found services: 172
diff --git a/documentation/asciidoc/accessories/usb-3-hub.adoc b/documentation/asciidoc/accessories/usb-3-hub.adoc
new file mode 100644
index 0000000000..44c1bec1ad
--- /dev/null
+++ b/documentation/asciidoc/accessories/usb-3-hub.adoc
@@ -0,0 +1 @@
+include::usb-3-hub/about.adoc[]
diff --git a/documentation/asciidoc/accessories/usb-3-hub/about.adoc b/documentation/asciidoc/accessories/usb-3-hub/about.adoc
new file mode 100644
index 0000000000..c67d1f7708
--- /dev/null
+++ b/documentation/asciidoc/accessories/usb-3-hub/about.adoc
@@ -0,0 +1,17 @@
+== About
+
+The https://www.raspberrypi.com/products/usb-3-hub/[Raspberry Pi USB 3 Hub] provides extra connectivity for your devices, extending one USB-A port into four. An optional external USB-C power input supports high-power peripherals; without external power, the hub can still drive low-power peripherals such as most mice and keyboards.
+
+.The Raspberry Pi USB 3.0 Hub
+image::images/usb-3-hub-hero.png[width="80%"]
+
+== Specification
+
+* 1× upstream USB 3.0 Type-A male connector on 8cm captive cable
+* 4× downstream USB 3.0 Type-A ports
+* Data transfer speeds up to 5Gbps
+* Power transfer up to 900 mA (4.5 W); optional external USB-C power input provides up to 5V @ 3A for high-power downstream peripherals
+* Compatible with USB 3.0 and USB 2.0 Type-A host ports
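The quoted power figures are easy to cross-check with a quick back-of-envelope calculation (my own arithmetic, not from the product datasheet):

[source,python]
----
# Cross-check the quoted power figures: P = V * I.
bus_powered_w = 5.0 * 0.9  # host port supplies up to 900 mA at 5 V
external_w = 5.0 * 3.0     # optional USB-C input supplies up to 3 A at 5 V

print(bus_powered_w)  # 4.5  (matches the quoted 4.5 W)
print(external_w)     # 15.0 (available with external power)
----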
+
+.Physical specification
+image::images/usb-3-hub-physical-specification.png[]
diff --git a/documentation/asciidoc/accessories/usb-3-hub/images/usb-3-hub-hero.png b/documentation/asciidoc/accessories/usb-3-hub/images/usb-3-hub-hero.png
new file mode 100644
index 0000000000..7f3bc2b9a3
Binary files /dev/null and b/documentation/asciidoc/accessories/usb-3-hub/images/usb-3-hub-hero.png differ
diff --git a/documentation/asciidoc/accessories/usb-3-hub/images/usb-3-hub-physical-specification.png b/documentation/asciidoc/accessories/usb-3-hub/images/usb-3-hub-physical-specification.png
new file mode 100644
index 0000000000..7b469d14c2
Binary files /dev/null and b/documentation/asciidoc/accessories/usb-3-hub/images/usb-3-hub-physical-specification.png differ
diff --git a/documentation/asciidoc/computers/ai.adoc b/documentation/asciidoc/computers/ai.adoc
new file mode 100644
index 0000000000..af8f6182db
--- /dev/null
+++ b/documentation/asciidoc/computers/ai.adoc
@@ -0,0 +1,2 @@
+include::ai/getting-started.adoc[]
+
diff --git a/documentation/asciidoc/computers/ai/getting-started.adoc b/documentation/asciidoc/computers/ai/getting-started.adoc
new file mode 100644
index 0000000000..3a9b7263c0
--- /dev/null
+++ b/documentation/asciidoc/computers/ai/getting-started.adoc
@@ -0,0 +1,219 @@
+== Getting Started
+
+This guide describes how to set up a Hailo NPU with your Raspberry Pi 5, enabling you to run `rpicam-apps` camera demos using an AI neural network accelerator.
+
+=== Prerequisites
+
+For this guide, you will need the following:
+
+* a Raspberry Pi 5
+* one of the following NPUs:
+** a xref:../accessories/ai-kit.adoc[Raspberry Pi AI Kit], which includes:
+*** an M.2 HAT+
+*** a pre-installed Hailo-8L AI module
+** a xref:../accessories/ai-hat-plus.adoc[Raspberry Pi AI HAT+]
+* a 64-bit Raspberry Pi OS Bookworm install
+* any official Raspberry Pi camera (e.g. Camera Module 3 or High Quality Camera)
+
+=== Hardware setup
+
+. Attach the camera to your Raspberry Pi 5 board following the instructions at xref:../accessories/camera.adoc#install-a-raspberry-pi-camera[Install a Raspberry Pi Camera]. You can skip reconnecting your Raspberry Pi to power, because you'll need to disconnect your Raspberry Pi from power for the next step.
+
+. Depending on your NPU, follow the installation instructions for the xref:../accessories/ai-kit.adoc#ai-kit-installation[AI Kit] or xref:../accessories/ai-hat-plus.adoc#ai-hat-plus-installation[AI HAT+], to get your hardware connected to your Raspberry Pi 5.
+
+. Follow the instructions to xref:raspberry-pi.adoc#pcie-gen-3-0[enable PCIe Gen 3.0]. This step is optional, but _highly recommended_ to achieve the best performance with your NPU.
+
+. Install the dependencies required to use the NPU. Run the following command from a terminal window:
++
+[source,console]
+----
+$ sudo apt install hailo-all
+----
++
+This installs the following dependencies:
++
+* Hailo kernel device driver and firmware
+* HailoRT middleware software
+* Hailo Tappas core post-processing libraries
+* The `rpicam-apps` Hailo post-processing software demo stages
+
+. Finally, reboot your Raspberry Pi with `sudo reboot` for these settings to take effect.
+
+. To ensure everything is running correctly, run the following command:
++
+[source,console]
+----
+$ hailortcli fw-control identify
+----
++
+If you see output similar to the following, you've successfully installed the NPU and its software dependencies:
++
+----
+Executing on device: 0000:01:00.0
+Identifying board
+Control Protocol Version: 2
+Firmware Version: 4.17.0 (release,app,extended context switch buffer)
+Logger Version: 0
+Board Name: Hailo-8
+Device Architecture: HAILO8L
+Serial Number: HLDDLBB234500054
+Part Number: HM21LB1C2LAE
+Product Name: HAILO-8L AI ACC M.2 B+M KEY MODULE EXT TMP
+----
++
+NOTE: AI HAT+ devices may show `<N/A>` for `Serial Number`, `Part Number` and `Product Name`. This is expected, and does not impact functionality.
++
+Additionally, you can run `dmesg | grep -i hailo` to check the kernel logs, which should yield output similar to the following:
++
+----
+[ 3.049657] hailo: Init module. driver version 4.17.0
+[ 3.051983] hailo 0000:01:00.0: Probing on: 1e60:2864...
+[ 3.051989] hailo 0000:01:00.0: Probing: Allocate memory for device extension, 11600
+[ 3.052006] hailo 0000:01:00.0: enabling device (0000 -> 0002)
+[ 3.052011] hailo 0000:01:00.0: Probing: Device enabled
+[ 3.052028] hailo 0000:01:00.0: Probing: mapped bar 0 - 000000000d8baaf1 16384
+[ 3.052034] hailo 0000:01:00.0: Probing: mapped bar 2 - 000000009eeaa33c 4096
+[ 3.052039] hailo 0000:01:00.0: Probing: mapped bar 4 - 00000000b9b3d17d 16384
+[ 3.052044] hailo 0000:01:00.0: Probing: Force setting max_desc_page_size to 4096 (recommended value is 16384)
+[ 3.052052] hailo 0000:01:00.0: Probing: Enabled 64 bit dma
+[ 3.052055] hailo 0000:01:00.0: Probing: Using userspace allocated vdma buffers
+[ 3.052059] hailo 0000:01:00.0: Disabling ASPM L0s
+[ 3.052070] hailo 0000:01:00.0: Successfully disabled ASPM L0s
+[ 3.221043] hailo 0000:01:00.0: Firmware was loaded successfully
+[ 3.231845] hailo 0000:01:00.0: Probing: Added board 1e60-2864, /dev/hailo0
+----
+
+. To ensure the camera is operating correctly, run the following command:
++
+[source,console]
+----
+$ rpicam-hello -t 10s
+----
++
+This starts the camera and shows a preview window for ten seconds. Once you have verified everything is installed correctly, it's time to run some demos.
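If you want to capture just the firmware version from the `hailortcli` output shown earlier, standard shell tools suffice. This sketch hard-codes the sample line; on a real device, you would pipe `hailortcli fw-control identify` into the same `sed` command:

[source,shell]
----
# Extract the dotted version number from the "Firmware Version" line.
sample='Firmware Version: 4.17.0 (release,app,extended context switch buffer)'
printf '%s\n' "$sample" | sed -n 's/^Firmware Version: \([0-9.]*\).*/\1/p'
# prints: 4.17.0
----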
+
+=== Demos
+
+The `rpicam-apps` suite of camera applications implements a xref:camera_software.adoc#post-processing-with-rpicam-apps[post-processing framework]. This section contains a few demo post-processing stages that highlight some of the capabilities of the NPU.
+
+The following demos use xref:camera_software.adoc#rpicam-hello[`rpicam-hello`], which by default displays a preview window. However, you can use other `rpicam-apps` instead, including xref:camera_software.adoc#rpicam-vid[`rpicam-vid`] and xref:camera_software.adoc#rpicam-still[`rpicam-still`]. You may need to add or modify some command line options to make the demo commands compatible with alternative applications.
+
+To begin, run the following command to install the latest `rpicam-apps` software package:
+
+[source,console]
+----
+$ sudo apt update && sudo apt install rpicam-apps
+----
+
+==== Object Detection
+
+This demo displays bounding boxes around objects detected by a neural network. To disable the viewfinder, use the xref:camera_software.adoc#nopreview[`-n`] flag. To return purely textual output describing the objects detected, add the `-v 2` option. Run the following command to try the demo on your Raspberry Pi:
+
+[source,console]
+----
+$ rpicam-hello -t 0 --post-process-file /usr/share/rpi-camera-assets/hailo_yolov6_inference.json
+----
+
+Alternatively, you can try another model with different trade-offs in performance and efficiency.
+
+To run the demo with the Yolov8 model, run the following command:
+
+[source,console]
+----
+$ rpicam-hello -t 0 --post-process-file /usr/share/rpi-camera-assets/hailo_yolov8_inference.json
+----
+
+To run the demo with the YoloX model, run the following command:
+
+[source,console]
+----
+$ rpicam-hello -t 0 --post-process-file /usr/share/rpi-camera-assets/hailo_yolox_inference.json
+----
+
+To run the demo with the Yolov5 Person and Face model, run the following command:
+
+[source,console]
+----
+$ rpicam-hello -t 0 --post-process-file /usr/share/rpi-camera-assets/hailo_yolov5_personface.json
+----
+
+==== Image Segmentation
+
+This demo performs object detection and segments the object by drawing a colour mask on the viewfinder image. Run the following command to try the demo on your Raspberry Pi:
+
+[source,console]
+----
+$ rpicam-hello -t 0 --post-process-file /usr/share/rpi-camera-assets/hailo_yolov5_segmentation.json --framerate 20
+----
+
+==== Pose Estimation
+
+This demo performs 17-point human pose estimation, drawing lines connecting the detected points. Run the following command to try the demo on your Raspberry Pi:
+
+[source,console]
+----
+$ rpicam-hello -t 0 --post-process-file /usr/share/rpi-camera-assets/hailo_yolov8_pose.json
+----
+
+=== Alternative Package Versions
+
+The AI Kit and AI HAT+ do not function if there is a version mismatch between the Hailo software packages and device drivers. In addition, Hailo's neural network tooling may require a particular version for generated model files. If you require a specific version, complete the following steps to install the proper versions of all of the dependencies:
+
+. If you have previously used `apt-mark` to hold any of the relevant packages, you may need to unhold them:
++
+[source,console]
+----
+$ sudo apt-mark unhold hailo-tappas-core hailort hailo-dkms
+----
+
+. Install the required version of the software packages:
+
+[tabs]
+======
+4.19::
+To install version 4.19 of Hailo's neural network tooling, run the following commands:
++
+[source,console]
+----
+$ sudo apt install hailo-tappas-core=3.30.0-1 hailort=4.19.0-3 hailo-dkms=4.19.0-1 python3-hailort=4.19.0-2
+----
++
+[source,console]
+----
+$ sudo apt-mark hold hailo-tappas-core hailort hailo-dkms python3-hailort
+----
+
+4.18::
+To install version 4.18 of Hailo's neural network tooling, run the following commands:
++
+[source,console]
+----
+$ sudo apt install hailo-tappas-core=3.29.1 hailort=4.18.0 hailo-dkms=4.18.0-2
+----
++
+[source,console]
+----
+$ sudo apt-mark hold hailo-tappas-core hailort hailo-dkms
+----
+
+4.17::
+To install version 4.17 of Hailo's neural network tooling, run the following commands:
++
+[source,console]
+----
+$ sudo apt install hailo-tappas-core=3.28.2 hailort=4.17.0 hailo-dkms=4.17.0-1
+----
++
+[source,console]
+----
+$ sudo apt-mark hold hailo-tappas-core hailort hailo-dkms
+----
+======
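Because a version mismatch between these packages stops the NPU from working, it can be worth scripting a quick consistency check. This sketch hard-codes example versions; on a real system, you would populate the variables from `dpkg-query -W -f '${Version}' <package>`:

[source,shell]
----
# Compare the upstream version of two Hailo packages, ignoring the
# Debian revision (everything after the last '-').
hailort_ver='4.19.0-3'   # e.g. from: dpkg-query -W -f '${Version}' hailort
dkms_ver='4.19.0-1'      # e.g. from: dpkg-query -W -f '${Version}' hailo-dkms

if [ "${hailort_ver%-*}" = "${dkms_ver%-*}" ]; then
    echo "hailort and hailo-dkms versions match"
else
    echo "version mismatch: $hailort_ver vs $dkms_ver" >&2
fi
----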
+
+=== Further Resources
+
+Hailo has also created a set of demos that you can run on a Raspberry Pi 5, available in the https://github.com/hailo-ai/hailo-rpi5-examples[hailo-ai/hailo-rpi5-examples GitHub repository].
+
+You can find Hailo's extensive model zoo, which contains a large number of neural networks, in the https://github.com/hailo-ai/hailo_model_zoo/tree/master/docs/public_models/HAILO8L[hailo-ai/hailo_model_zoo GitHub repository].
+
+Check out the https://community.hailo.ai/[Hailo community forums and developer zone] for further discussions on the Hailo hardware and tooling.
diff --git a/documentation/asciidoc/computers/camera/camera_usage.adoc b/documentation/asciidoc/computers/camera/camera_usage.adoc
index 774d74085f..722f37c82b 100644
--- a/documentation/asciidoc/computers/camera/camera_usage.adoc
+++ b/documentation/asciidoc/computers/camera/camera_usage.adoc
@@ -1,13 +1,19 @@
-== Introducing the Raspberry Pi Cameras
+This documentation describes how to use supported camera modules with our software tools. All Raspberry Pi cameras can record high-resolution photographs and full HD 1080p video (or better) with our software tools.
-There are now several official Raspberry Pi camera modules. The original 5-megapixel model was https://www.raspberrypi.com/news/camera-board-available-for-sale/[released] in 2013, it was followed by an 8-megapixel https://www.raspberrypi.com/products/camera-module-v2/[Camera Module 2] which was https://www.raspberrypi.com/news/new-8-megapixel-camera-board-sale-25/[released] in 2016. The latest camera model is the 12-megapixel https://raspberrypi.com/products/camera-module-3/[Camera Module 3] which was https://www.raspberrypi.com/news/new-autofocus-camera-modules/[released] in 2023. The original 5MP device is no longer available from Raspberry Pi.
+Raspberry Pi produces several official camera modules, including:
-Additionally a 12-megapixel https://www.raspberrypi.com/products/raspberry-pi-high-quality-camera/[High Quality Camera] with CS- or M12-mount variants for use with external lenses was https://www.raspberrypi.com/news/new-product-raspberry-pi-high-quality-camera-on-sale-now-at-50/[released in 2020] and https://www.raspberrypi.com/news/new-autofocus-camera-modules/[2023] respectively. There is no infrared version of the HQ Camera.
+* the original 5-megapixel Camera Module 1 (discontinued)
+* the 8-megapixel https://www.raspberrypi.com/products/camera-module-v2/[Camera Module 2], with or without an infrared filter
+* the 12-megapixel https://raspberrypi.com/products/camera-module-3/[Camera Module 3], with both standard and wide lenses, with or without an infrared filter
+* the 12-megapixel https://www.raspberrypi.com/products/raspberry-pi-high-quality-camera/[High Quality Camera] with CS and M12 mount variants for use with external lenses
+* the 1.6-megapixel https://www.raspberrypi.com/products/raspberry-pi-global-shutter-camera/[Global Shutter Camera] for fast motion photography
+* the 12-megapixel https://www.raspberrypi.com/products/ai-camera/[AI Camera] uses the Sony IMX500 imaging sensor to provide low-latency, high-performance AI capabilities to any camera application
-All of these cameras come in visible light and infrared versions, while the Camera Module 3 also comes as a standard or wide FoV model for a total of four different variants.
-
-Further details on the camera modules can be found in the xref:../accessories/camera.adoc#about-the-camera-modules[camera hardware] page.
+For more information about camera hardware, see the xref:../accessories/camera.adoc#about-the-camera-modules[camera hardware documentation].
-All Raspberry Pi cameras are capable of taking high-resolution photographs, along with full HD 1080p video, and can be fully controlled programmatically. This documentation describes how to use the camera in various scenarios, and how to use the various software tools.
+First, xref:../accessories/camera.adoc#install-a-raspberry-pi-camera[install your camera module]. Then, follow the guides in this section to put your camera module to use.
-Once you've xref:../accessories/camera.adoc#installing-a-raspberry-pi-camera[installed your camera module], there are various ways the cameras can be used. The simplest option is to use one of the provided camera applications, such as `rpicam-still` or `rpicam-vid`.
+[WARNING]
+====
+This guide no longer covers the _legacy camera stack_ which was available in Bullseye and earlier Raspberry Pi OS releases. The legacy camera stack, using applications like `raspivid`, `raspistill` and the original `Picamera` (_not_ `Picamera2`) Python library, has been deprecated for many years, and is now unsupported. If you are using the legacy camera stack, it will only have support for the Camera Module 1, Camera Module 2 and the High Quality Camera, and will never support any newer camera modules. Nothing in this document is applicable to the legacy camera stack.
+====
diff --git a/documentation/asciidoc/computers/camera/csi-2-usage.adoc b/documentation/asciidoc/computers/camera/csi-2-usage.adoc
index 269ab1a72e..f3515ae946 100644
--- a/documentation/asciidoc/computers/camera/csi-2-usage.adoc
+++ b/documentation/asciidoc/computers/camera/csi-2-usage.adoc
@@ -1,18 +1,18 @@
-== Camera Serial Interface 2 (CSI2) "Unicam"
+== Unicam
-The SoC's used on the Raspberry Pi range all have two camera interfaces that support either CSI-2 D-PHY 1.1 or CCP2 (Compact Camera Port 2) sources. This interface is known by the codename "Unicam". The first instance of Unicam supports 2 CSI-2 data lanes, whilst the second supports 4. Each lane can run at up to 1Gbit/s (DDR, so the max link frequency is 500MHz).
+Raspberry Pi SoCs all have two camera interfaces that support either CSI-2 D-PHY 1.1 or Compact Camera Port 2 (CCP2) sources. This interface is known by the codename Unicam. The first instance of Unicam supports two CSI-2 data lanes, while the second supports four. Each lane can run at up to 1Gbit/s (DDR, so the max link frequency is 500MHz).
-However, the normal variants of the Raspberry Pi only expose the second instance, and route out _only_ 2 of the data lanes to the camera connector. The Compute Module range route out all lanes from both peripherals.
+Compute Modules and Raspberry Pi 5 route out all lanes from both peripherals. Other models prior to Raspberry Pi 5 only expose the second instance, routing out only two of the data lanes to the camera connector.
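As a rough illustration of what those link rates mean in practice, the following back-of-envelope calculation (my own numbers, ignoring CSI-2 protocol overhead) checks whether a two-lane link can carry 1080p RAW10 video at 60 frames per second:

[source,python]
----
# Raw link capacity of a 2-lane CSI-2 interface at 1 Gbit/s per lane.
lanes = 2
link_gbps = lanes * 1.0

# Bandwidth needed for 1920x1080 RAW10 at 60 fps.
width, height, fps, bits_per_pixel = 1920, 1080, 60, 10
needed_gbps = width * height * fps * bits_per_pixel / 1e9

print(round(needed_gbps, 2), "of", link_gbps, "Gbit/s")  # 1.24 of 2.0 Gbit/s
----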
-=== Software Interfaces
+=== Software interfaces
-The V4L2 software interface is now the only means of communicating with the Unicam peripheral. There used to also be "Firmware" and "MMAL rawcam component" interfaces, but these are no longer supported.
+The V4L2 software interface is the only means of communicating with the Unicam peripheral. There used to also be firmware and MMAL rawcam component interfaces, but these are no longer supported.
==== V4L2
NOTE: The V4L2 interface for Unicam is available only when using `libcamera`.
-There is a fully open source kernel driver available for the Unicam block; this is a kernel module called bcm2835-unicam. This interfaces to V4L2 subdevice drivers for the source to deliver the raw frames. This bcm2835-unicam driver controls the sensor, and configures the CSI-2 receiver so that the peripheral will write the raw frames (after Debayer) to SDRAM for V4L2 to deliver to applications. Except for this ability to unpack the CSI-2 Bayer formats to 16bits/pixel, there is no image processing between the image source (e.g. camera sensor) and bcm2835-unicam placing the image data in SDRAM.
+There is a fully open-source kernel driver available for the Unicam block; this kernel module, called `bcm2835-unicam`, interfaces with V4L2 subdevice drivers to deliver raw frames. This `bcm2835-unicam` driver controls the sensor and configures the Camera Serial Interface 2 (CSI-2) receiver. Peripherals write raw frames (after Debayer) to SDRAM for V4L2 to deliver to applications. There is no image processing between the camera sensor capturing the image and the `bcm2835-unicam` driver placing the image data in SDRAM except for Bayer unpacking to 16bits/pixel.
----
|------------------------|
@@ -33,7 +33,7 @@ ccp2 | |
|-----------------|
----
-Mainline Linux has a range of existing drivers. The Raspberry Pi kernel tree has some additional drivers and device tree overlays to configure them that have all been tested and confirmed to work. They include:
+Mainline Linux contains a range of existing drivers. The Raspberry Pi kernel tree has some additional drivers and Device Tree overlays to configure them:
|===
| Device | Type | Notes
@@ -71,17 +71,17 @@ Mainline Linux has a range of existing drivers. The Raspberry Pi kernel tree has
| Supported by a third party
|===
-As the subdevice driver is also a kernel driver, with a standardised API, 3rd parties are free to write their own for any source of their choosing.
+As the subdevice driver is also a kernel driver with a standardised API, third parties are free to write their own for any source of their choosing.
-=== Developing Third-Party Drivers
+=== Write a third-party driver
This is the recommended approach to interfacing via Unicam.
-When developing a driver for a new device intended to be used with the bcm2835-unicam module, you need the driver and corresponding device tree overlays. Ideally the driver should be submitted to the http://vger.kernel.org/vger-lists.html#linux-media[linux-media] mailing list for code review and merging into mainline, then moved to the https://github.com/raspberrypi/linux[Raspberry Pi kernel tree], but exceptions may be made for the driver to be reviewed and merged directly to the Raspberry Pi kernel.
+When developing a driver for a new device intended to be used with the `bcm2835-unicam` module, you need the driver and corresponding device tree overlays. Ideally, the driver should be submitted to the http://vger.kernel.org/vger-lists.html#linux-media[linux-media] mailing list for code review and merging into mainline, then moved to the https://github.com/raspberrypi/linux[Raspberry Pi kernel tree], but exceptions may be made for the driver to be reviewed and merged directly to the Raspberry Pi kernel.
-Please note that all kernel drivers are licensed under the GPLv2 licence, therefore source code *MUST* be available. Shipping of binary modules only is a violation of the GPLv2 licence under which the Linux kernel is licensed.
+NOTE: All kernel drivers are licensed under the GPLv2 licence, therefore source code must be available. Shipping of binary modules only is a violation of the GPLv2 licence under which the Linux kernel is licensed.
-The bcm2835-unicam has been written to try and accommodate all types of CSI-2 source driver as are currently found in the mainline Linux kernel. Broadly these can be split into camera sensors and bridge chips. Bridge chips allow for conversion between some other format and CSI-2.
+The `bcm2835-unicam` module has been written to accommodate all types of CSI-2 source driver currently found in the mainline Linux kernel. These can be split broadly into camera sensors and bridge chips. Bridge chips allow for conversion between some other format and CSI-2.
==== Camera sensors
@@ -91,42 +91,42 @@ The https://github.com/raspberrypi/linux/blob/rpi-6.1.y/drivers/media/i2c/imx219
Sensors generally support https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/control.html[V4L2 user controls]. Not all these controls need to be implemented in a driver. The IMX219 driver only implements a small subset, listed below, the implementation of which is handled by the `imx219_set_ctrl` function.
-* `V4L2_CID_PIXEL_RATE` / `V4L2_CID_VBLANK` / `V4L2_CID_HBLANK`: allows the application to set the frame rate.
-* `V4L2_CID_EXPOSURE`: sets the exposure time in lines. The application needs to use `V4L2_CID_PIXEL_RATE`, `V4L2_CID_HBLANK`, and the frame width to compute the line time.
-* `V4L2_CID_ANALOGUE_GAIN`: analogue gain in sensor specific units.
-* `V4L2_CID_DIGITAL_GAIN`: optional digital gain in sensor specific units.
-* `V4L2_CID_HFLIP / V4L2_CID_VFLIP`: flips the image either horizontally or vertically. Note that this operation may change the Bayer order of the data in the frame, as is the case on the imx219.
-* `V4L2_CID_TEST_PATTERN` / `V4L2_CID_TEST_PATTERN_*`: Enables output of various test patterns from the sensor. Useful for debugging.
+* `V4L2_CID_PIXEL_RATE` / `V4L2_CID_VBLANK` / `V4L2_CID_HBLANK`: allows the application to set the frame rate
+* `V4L2_CID_EXPOSURE`: sets the exposure time in lines; the application needs to use `V4L2_CID_PIXEL_RATE`, `V4L2_CID_HBLANK`, and the frame width to compute the line time
+* `V4L2_CID_ANALOGUE_GAIN`: analogue gain in sensor specific units
+* `V4L2_CID_DIGITAL_GAIN`: optional digital gain in sensor specific units
+* `V4L2_CID_HFLIP` / `V4L2_CID_VFLIP`: flips the image either horizontally or vertically; this operation may change the Bayer order of the data in the frame, as is the case on the IMX219
+* `V4L2_CID_TEST_PATTERN` / `V4L2_CID_TEST_PATTERN_*`: enables output of various test patterns from the sensor; useful for debugging
In the case of the IMX219, many of these controls map directly onto register writes to the sensor itself.
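As a sketch of how an application combines these controls, the line time follows from the pixel rate and horizontal blanking, and `V4L2_CID_EXPOSURE` is expressed in whole lines. The numbers below are illustrative only, not taken from the IMX219 datasheet:

```python
# Sketch: convert a target exposure time into the value for
# V4L2_CID_EXPOSURE, which is expressed in sensor lines.
def exposure_lines(exposure_us, width, hblank, pixel_rate_hz):
    line_length = width + hblank                      # total pixels per line
    line_time_us = line_length / pixel_rate_hz * 1e6  # duration of one line
    return round(exposure_us / line_time_us)

# Illustrative numbers: a 3280-pixel-wide mode with a 182.4 MHz pixel rate
print(exposure_lines(exposure_us=10000, width=3280, hblank=160,
                     pixel_rate_hz=182_400_000))  # → 530
```

The same arithmetic, run in reverse, recovers the frame rate from the vertical blanking and line time.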
-Further guidance can be found in libcamera's https://git.linuxtv.org/libcamera.git/tree/Documentation/sensor_driver_requirements.rst[sensor driver requirements], and also in chapter 3 of the https://datasheets.raspberrypi.com/camera/raspberry-pi-camera-guide.pdf[Raspberry Pi Camera Tuning Guide].
+Further guidance can be found in the `libcamera` https://git.linuxtv.org/libcamera.git/tree/Documentation/sensor_driver_requirements.rst[sensor driver requirements], and in chapter 3 of the https://datasheets.raspberrypi.com/camera/raspberry-pi-camera-guide.pdf[Raspberry Pi Camera tuning guide].
===== Device Tree
-Device tree is used to select the sensor driver and configure parameters such as number of CSI-2 lanes, continuous clock lane operation, and link frequency (often only one is supported).
+Device Tree is used to select the sensor driver and configure parameters such as number of CSI-2 lanes, continuous clock lane operation, and link frequency (often only one is supported).
-* The IMX219 https://github.com/raspberrypi/linux/blob/rpi-6.1.y/arch/arm/boot/dts/overlays/imx219-overlay.dts[device tree overlay] for the 6.1 kernel
+The IMX219 https://github.com/raspberrypi/linux/blob/rpi-6.1.y/arch/arm/boot/dts/overlays/imx219-overlay.dts[Device Tree overlay] for the 6.1 kernel is available on GitHub.
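Loading this overlay from `config.txt` follows the same pattern as the bridge-chip overlays shown later, for example:

```
dtoverlay=imx219
```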
==== Bridge chips
These are devices that convert an incoming video stream, for example HDMI or composite, into a CSI-2 stream that can be accepted by the Raspberry Pi CSI-2 receiver.
-Handling bridge chips is more complicated, as unlike camera sensors they have to respond to the incoming signal and report that to the application.
+Handling bridge chips is more complicated. Unlike camera sensors, they have to respond to the incoming signal and report that to the application.
-The mechanisms for handling bridge chips can be broadly split into either analogue or digital.
+The mechanisms for handling bridge chips can be split into two categories: analogue and digital.
-When using `ioctls` in the sections below, an `_S_` in the `ioctl` name means it is a set function, whilst `_G_` is a get function and `_ENUM` enumerates a set of permitted values.
+When using `ioctls` in the sections below, an `_S_` in the `ioctl` name means it is a set function, while `_G_` is a get function and `_ENUM` enumerates a set of permitted values.
===== Analogue video sources
-Analogue video sources use the standard `ioctls` for detecting and setting video standards. https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/vidioc-g-std.html[`VIDIOC_G_STD`], https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/vidioc-g-std.html[`VIDIOC_S_STD`], https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/vidioc-enumstd.html[`VIDIOC_ENUMSTD`], and https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/vidioc-querystd.html[`VIDIOC_QUERYSTD`]
+Analogue video sources use the standard `ioctls` for detecting and setting video standards: https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/vidioc-g-std.html[`VIDIOC_G_STD`], https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/vidioc-g-std.html[`VIDIOC_S_STD`], https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/vidioc-enumstd.html[`VIDIOC_ENUMSTD`], and https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/vidioc-querystd.html[`VIDIOC_QUERYSTD`].
-Selecting the wrong standard will generally result in corrupt images. Setting the standard will typically also set the resolution on the V4L2 CAPTURE queue. It can not be set via `VIDIOC_S_FMT`. Generally requesting the detected standard via `VIDIOC_QUERYSTD` and then setting it with `VIDIOC_S_STD` before streaming is a good idea.
+Selecting the wrong standard will generally result in corrupt images. Setting the standard will typically also set the resolution on the V4L2 CAPTURE queue; it cannot be set via `VIDIOC_S_FMT`. Generally, it is a good idea to request the detected standard via `VIDIOC_QUERYSTD` and then set it with `VIDIOC_S_STD` before streaming.
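A sketch of that query-then-set sequence using `v4l2-ctl` (the device node is an assumption and will vary):

```shell
# Ask the bridge which standard it detects (VIDIOC_QUERYSTD) ...
v4l2-ctl -d /dev/video0 --get-detected-standard
# ... then set it (VIDIOC_S_STD) before starting to stream
v4l2-ctl -d /dev/video0 --set-standard=pal
```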
===== Digital video sources
-For digital video sources, such as HDMI, there is an alternate set of calls that allow specifying of all the digital timing parameters (https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/vidioc-g-dv-timings.html[`VIDIOC_G_DV_TIMINGS`], https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/vidioc-g-dv-timings.html[`VIDIOC_S_DV_TIMINGS`], https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/vidioc-enum-dv-timings.html[`VIDIOC_ENUM_DV_TIMINGS`], and https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/vidioc-query-dv-timings.html[`VIDIOC_QUERY_DV_TIMINGS`]).
+For digital video sources, such as HDMI, there is an alternate set of calls that allows all the digital timing parameters to be specified: https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/vidioc-g-dv-timings.html[`VIDIOC_G_DV_TIMINGS`], https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/vidioc-g-dv-timings.html[`VIDIOC_S_DV_TIMINGS`], https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/vidioc-enum-dv-timings.html[`VIDIOC_ENUM_DV_TIMINGS`], and https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/vidioc-query-dv-timings.html[`VIDIOC_QUERY_DV_TIMINGS`].
As with analogue bridges, the timings typically fix the V4L2 CAPTURE queue resolution, and calling `VIDIOC_S_DV_TIMINGS` with the result of `VIDIOC_QUERY_DV_TIMINGS` before streaming should ensure the format is correct.
@@ -134,42 +134,35 @@ Depending on the bridge chip and the driver, it may be possible for changes in t
===== Currently supported devices
-There are 2 bridge chips that are currently supported by the Raspberry Pi Linux kernel, the Analog Devices ADV728x-M for analogue video sources, and the Toshiba TC358743 for HDMI sources.
+There are two bridge chips which are currently supported by the Raspberry Pi Linux kernel: the Analog Devices ADV728x-M for analogue video sources, and the Toshiba TC358743 for HDMI sources.
-_Analog Devices ADV728x(A)-M Analogue video to CSI2 bridge_
+Analog Devices ADV728x(A)-M analogue video to CSI-2 bridge chips convert composite, S-video (Y/C), or component (YPrPb) video into a single-lane CSI-2 interface, and are supported by the https://github.com/raspberrypi/linux/blob/rpi-6.1.y/drivers/media/i2c/adv7180.c[ADV7180 kernel driver].
-These chips convert composite, S-video (Y/C), or component (YPrPb) video into a single lane CSI-2 interface, and are supported by the https://github.com/raspberrypi/linux/blob/rpi-6.1.y/drivers/media/i2c/adv7180.c[ADV7180 kernel driver].
-
-Product details for the various versions of this chip can be found on the Analog Devices website.
-
-https://www.analog.com/en/products/adv7280a.html[ADV7280A], https://www.analog.com/en/products/adv7281a.html[ADV7281A], https://www.analog.com/en/products/adv7282a.html[ADV7282A]
+Product details for the various versions of this chip can be found on the Analog Devices website: https://www.analog.com/en/products/adv7280a.html[ADV7280A], https://www.analog.com/en/products/adv7281a.html[ADV7281A], and https://www.analog.com/en/products/adv7282a.html[ADV7282A].
Because of some missing code in the current core V4L2 implementation, selecting the source fails, so the Raspberry Pi kernel version adds a kernel module parameter called `dbg_input` to the ADV7180 kernel driver which sets the input source every time VIDIOC_S_STD is called. At some point mainstream will fix the underlying issue (a disjoin between the kernel API call s_routing, and the userspace call `VIDIOC_S_INPUT`) and this modification will be removed.
-Please note that receiving interlaced video is not supported, therefore the ADV7281(A)-M version of the chip is of limited use as it doesn't have the necessary I2P deinterlacing block. Also ensure when selecting a device to specify the -M option. Without that you will get a parallel output bus which can not be interfaced to the Raspberry Pi.
+Receiving interlaced video is not supported, so the ADV7281(A)-M version of the chip is of limited use, as it doesn't have the necessary I2P deinterlacing block. When selecting a device, make sure to specify the -M variant; without it you will get a parallel output bus, which cannot be interfaced to the Raspberry Pi.
-There are no known commercially available boards using these chips, but this driver has been tested via the Analog Devices https://www.analog.com/en/design-center/evaluation-hardware-and-software/evaluation-boards-kits/EVAL-ADV7282A-M.html[EVAL-ADV7282-M evaluation board]
+There are no known commercially available boards using these chips, but this driver has been tested via the Analog Devices https://www.analog.com/en/design-center/evaluation-hardware-and-software/evaluation-boards-kits/EVAL-ADV7282A-M.html[EVAL-ADV7282-M evaluation board].
-This driver can be loaded using the `config.txt` dtoverlay `adv7282m` if you are using the `ADV7282-M` chip variant; or `adv728x-m` with a parameter of either `adv7280m=1`, `adv7281m=1`, or `adv7281ma=1` if you are using a different variant. e.g.
+This driver can be loaded using the `config.txt` dtoverlay `adv7282m` if you are using the `ADV7282-M` chip variant; or `adv728x-m` with a parameter of either `adv7280m=1`, `adv7281m=1`, or `adv7281ma=1` if you are using a different variant.
----
dtoverlay=adv728x-m,adv7280m=1
----
-_Toshiba TC358743 HDMI to CSI2 bridge_
+The Toshiba TC358743 is an HDMI to CSI-2 bridge chip, capable of converting video data at up to 1080p60.
-This is a HDMI to CSI-2 bridge chip, capable of converting video data at up to 1080p60.
+Information on this bridge chip can be found on the https://toshiba.semicon-storage.com/ap-en/semiconductor/product/interface-bridge-ics-for-mobile-peripheral-devices/hdmir-interface-bridge-ics/detail.TC358743XBG.html[Toshiba website].
-Information on this bridge chip can be found on the https://toshiba.semicon-storage.com/ap-en/semiconductor/product/interface-bridge-ics-for-mobile-peripheral-devices/hdmir-interface-bridge-ics/detail.TC358743XBG.html[Toshiba Website]
+The TC358743 interfaces HDMI into CSI-2 and I2S outputs. It is supported by the https://github.com/raspberrypi/linux/blob/rpi-6.1.y/drivers/media/i2c/tc358743.c[TC358743 kernel module].
-The TC358743 interfaces HDMI in to CSI-2 and I2S outputs. It is supported by the https://github.com/raspberrypi/linux/blob/rpi-6.1.y/drivers/media/i2c/tc358743.c[TC358743 kernel module].
+The chip supports incoming HDMI signals as RGB888, YUV444, or YUV422, at up to 1080p60. It can forward RGB888, or convert it to YUV444 or YUV422, and convert either way between YUV444 and YUV422. Only RGB888 and YUV422 support has been tested. When using two CSI-2 lanes, the maximum rates that can be supported are 1080p30 as RGB888, or 1080p50 as YUV422. When using four lanes on a Compute Module, 1080p60 can be received in either format.
-The chip supports incoming HDMI signals as either RGB888, YUV444, or YUV422, at up to 1080p60. It can forward RGB888, or convert it to YUV444 or YUV422, and convert either way between YUV444 and YUV422. Only RGB888 and YUV422 support has been tested. When using 2 CSI-2 lanes, the maximum rates that can be supported are 1080p30 as RGB888, or 1080p50 as YUV422. When using 4 lanes on a Compute Module, 1080p60 can be received in either format.
+HDMI negotiates the resolution by a receiving device advertising an https://en.wikipedia.org/wiki/Extended_Display_Identification_Data[EDID] of all the modes that it can support. The kernel driver has no knowledge of the resolutions, frame rates, or formats that you wish to receive, so it is up to the user to provide a suitable file via the `VIDIOC_S_EDID` ioctl, or more easily using `v4l2-ctl --fix-edid-checksums --set-edid=file=filename.txt` (adding the `--fix-edid-checksums` option means that you don't have to get the checksum values correct in the source file). Generating the required EDID file (a textual hexdump of a binary EDID file) is not too onerous, and there are tools available to generate them, but it is beyond the scope of this page.
-HDMI negotiates the resolution by a receiving device advertising an https://en.wikipedia.org/wiki/Extended_Display_Identification_Data[EDID] of all the modes that it can support. The kernel driver has no knowledge of the resolutions, frame rates, or formats that you wish to receive, therefore it is up to the user to provide a suitable file.
-This is done via the VIDIOC_S_EDID ioctl, or more easily using `v4l2-ctl --fix-edid-checksums --set-edid=file=filename.txt` (adding the --fix-edid-checksums option means that you don't have to get the checksum values correct in the source file). Generating the required EDID file (a textual hexdump of a binary EDID file) is not too onerous, and there are tools available to generate them, but it is beyond the scope of this page.
-
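As a side note on those checksum values: each 128-byte EDID block must sum to zero mod 256, which is exactly what `--fix-edid-checksums` repairs. A small sketch:

```python
# Sketch: compute the trailing checksum byte of a 128-byte EDID block.
def edid_checksum(first_127_bytes):
    return (-sum(first_127_bytes)) % 256

# The standard 8-byte EDID header followed by zeros sums to 6 * 0xFF = 1530,
# so the final byte must be 6 to bring the block total to 0 mod 256.
block = bytes([0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00]) + bytes(119)
checksum = edid_checksum(block)
print(checksum)                               # → 6
assert (sum(block) + checksum) % 256 == 0
```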
-As described above, use the `DV_TIMINGS` ioctls to configure the driver to match the incoming video. The easiest approach for this is to use the command `v4l2-ctl --set-dv-bt-timings query`. The driver does support generating the SOURCE_CHANGED events should you wish to write an application to handle a changing source. Changing the output pixel format is achieved by setting it via VIDIOC_S_FMT, however only the pixel format field will be updated as the resolution is configured by the dv timings.
+As described above, use the `DV_TIMINGS` ioctls to configure the driver to match the incoming video. The easiest approach for this is to use the command `v4l2-ctl --set-dv-bt-timings query`. The driver does support generating the `SOURCE_CHANGED` events, should you wish to write an application to handle a changing source. Changing the output pixel format is achieved by setting it via `VIDIOC_S_FMT`, but only the pixel format field will be updated as the resolution is configured by the DV timings.
There are a couple of commercially available boards that connect this chip to the Raspberry Pi. The Auvidea B101 and B102 are the most widely obtainable, but other equivalent boards are available.
@@ -203,4 +196,5 @@ The chip also supports capturing stereo HDMI audio via I2S. The Auvidea boards b
|===
The `tc358743-audio` overlay is required _in addition to_ the `tc358743` overlay. This should create an ALSA recording device for the HDMI audio.
-Please note that there is no resampling of the audio. The presence of audio is reflected in the V4L2 control TC358743_CID_AUDIO_PRESENT / "audio-present", and the sample rate of the incoming audio is reflected in the V4L2 control TC358743_CID_AUDIO_SAMPLING_RATE / "Audio sampling-frequency". Recording when no audio is present will generate warnings, as will recording at a sample rate different from that reported.
+
+There is no resampling of the audio. The presence of audio is reflected in the V4L2 control `TC358743_CID_AUDIO_PRESENT` (audio-present), and the sample rate of the incoming audio is reflected in the V4L2 control `TC358743_CID_AUDIO_SAMPLING_RATE` (audio sampling-frequency). Recording when no audio is present or at a sample rate different from that reported emits a warning.
diff --git a/documentation/asciidoc/computers/camera/gstreamer.adoc b/documentation/asciidoc/computers/camera/gstreamer.adoc
deleted file mode 100644
index 39d9e1ba6a..0000000000
--- a/documentation/asciidoc/computers/camera/gstreamer.adoc
+++ /dev/null
@@ -1,58 +0,0 @@
-=== Using Gstreamer
-
-_Gstreamer_ is a Linux framework for reading, processing and playing multimedia files. There is a lot of information and many tutorials at the https://gstreamer.freedesktop.org/[_gstreamer_ website]. Here we show how `rpicam-vid` can be used to stream video over a network.
-
-On the server we need `rpicam-vid` to output an encoded h.264 bitstream to _stdout_ and can use the _gstreamer_ `fdsrc` element to receive it. Then extra _gstreamer_ elements can send this over the network. As an example we can simply send and receive the stream on the same device over a UDP link. On the server:
-
-[,bash]
-----
-rpicam-vid -t 0 -n --inline -o - | gst-launch-1.0 fdsrc fd=0 ! udpsink host=localhost port=5000
-----
-
-For the client (type this into another console window) we can use:
-
-[,bash]
-----
-gst-launch-1.0 udpsrc address=localhost port=5000 ! h264parse ! v4l2h264dec ! autovideosink
-----
-
-==== Using RTP
-
-To stream using the RTP protocol, on the server you could use:
-
-[,bash]
-----
-rpicam-vid -t 0 -n --inline -o - | gst-launch-1.0 fdsrc fd=0 ! h264parse ! rtph264pay ! udpsink host=localhost port=5000
-----
-
-And in the client window:
-
-[,bash]
-----
-gst-launch-1.0 udpsrc address=localhost port=5000 caps=application/x-rtp ! rtph264depay ! h264parse ! v4l2h264dec ! autovideosink
-----
-
-We conclude with an example that streams from one machine to another. Let us assume that the client machine has the IP address `192.168.0.3`. On the server (a Raspberry Pi) the pipeline is identical, but for the destination address:
-
-[,bash]
-----
-rpicam-vid -t 0 -n --inline -o - | gst-launch-1.0 fdsrc fd=0 ! h264parse ! rtph264pay ! udpsink host=192.168.0.3 port=5000
-----
-
-If the client is not a Raspberry Pi it may have different _gstreamer_ elements available. For a Linux PC we might use:
-
-[,bash]
-----
-gst-launch-1.0 udpsrc address=192.168.0.3 port=5000 caps=application/x-rtp ! rtph264depay ! h264parse ! avdec_h264 ! autovideosink
-----
-
-==== The `libcamerasrc` element
-
-`libcamera` provides a `libcamerasrc` _gstreamer_ element which can be used directly instead of `rpicam-vid`. On the server you could use:
-
-[,bash]
-----
-gst-launch-1.0 libcamerasrc ! capsfilter caps=video/x-raw,width=1280,height=720,format=NV12 ! v4l2convert ! v4l2h264enc extra-controls="controls,repeat_sequence_header=1" ! h264parse ! rtph264pay ! udpsink host=localhost port=5000
-----
-
-and on the client we use the same playback pipeline as previously.
diff --git a/documentation/asciidoc/computers/camera/images/cam.jpg b/documentation/asciidoc/computers/camera/images/cam.jpg
deleted file mode 100644
index 38963884d2..0000000000
Binary files a/documentation/asciidoc/computers/camera/images/cam.jpg and /dev/null differ
diff --git a/documentation/asciidoc/computers/camera/images/cam2.jpg b/documentation/asciidoc/computers/camera/images/cam2.jpg
deleted file mode 100644
index 01d39ca9c1..0000000000
Binary files a/documentation/asciidoc/computers/camera/images/cam2.jpg and /dev/null differ
diff --git a/documentation/asciidoc/computers/os/images/image2.jpg b/documentation/asciidoc/computers/camera/images/webcam-image-high-resolution.jpg
similarity index 100%
rename from documentation/asciidoc/computers/os/images/image2.jpg
rename to documentation/asciidoc/computers/camera/images/webcam-image-high-resolution.jpg
diff --git a/documentation/asciidoc/computers/os/images/image3.jpg b/documentation/asciidoc/computers/camera/images/webcam-image-no-banner.jpg
similarity index 100%
rename from documentation/asciidoc/computers/os/images/image3.jpg
rename to documentation/asciidoc/computers/camera/images/webcam-image-no-banner.jpg
diff --git a/documentation/asciidoc/computers/os/images/image.jpg b/documentation/asciidoc/computers/camera/images/webcam-image.jpg
similarity index 100%
rename from documentation/asciidoc/computers/os/images/image.jpg
rename to documentation/asciidoc/computers/camera/images/webcam-image.jpg
diff --git a/documentation/asciidoc/computers/camera/libcamera_3rd_party_tuning.adoc b/documentation/asciidoc/computers/camera/libcamera_3rd_party_tuning.adoc
deleted file mode 100644
index a20bd82bda..0000000000
--- a/documentation/asciidoc/computers/camera/libcamera_3rd_party_tuning.adoc
+++ /dev/null
@@ -1,15 +0,0 @@
-=== Camera Tuning and supporting 3rd Party Sensors
-
-==== The Camera Tuning File
-
-Most of the image processing applied to frames from the sensor is done by the hardware ISP (Image Signal Processor). This processing is governed by a set of _control algorithms_ and these in turn must have a wide range of parameters supplied to them. These parameters are tuned specifically for each sensor and are collected together in a JSON file known as the _camera tuning file_.
-
-This _tuning file_ can be inspected and edited by users. Using the `--tuning-file` command line option, users can point the system at completely custom camera tuning files.
-
-==== 3rd Party Sensors
-
-`libcamera` makes it possible to support 3rd party sensors (that is, sensors other than Raspberry Pi's officially supported sensors) on the Raspberry Pi platform. To accomplish this, a working open source sensor driver must be provided, which the authors are happy to submit to the Linux kernel. There are a couple of extra files need to be added to `libcamera` which supply device-specific information that is available from the kernel drivers, including the previously discussed camera tuning file.
-
-Raspberry Pi also supplies a _tuning tool_ which automates the generation of the tuning file from a few simple calibration images.
-
-Both these topics are rather beyond the scope of the documentation here, however, full information is available in the https://datasheets.raspberrypi.com/camera/raspberry-pi-camera-guide.pdf[Tuning Guide for the Raspberry Pi cameras and libcamera].
diff --git a/documentation/asciidoc/computers/camera/libcamera_differences.adoc b/documentation/asciidoc/computers/camera/libcamera_differences.adoc
index da96139abc..1205db97eb 100644
--- a/documentation/asciidoc/computers/camera/libcamera_differences.adoc
+++ b/documentation/asciidoc/computers/camera/libcamera_differences.adoc
@@ -1,42 +1,42 @@
-=== Differences compared to _Raspicam_ Apps
+=== Differences between `rpicam` and `raspicam`
-Whilst the `rpicam-apps` attempt to emulate most features of the legacy _Raspicam_ applications, there are some differences. Here we list the principal ones that users are likely to notice.
+The `rpicam-apps` emulate most features of the legacy `raspicam` applications. However, users may notice the following differences:
-* The use of Boost `program_options` doesn't allow multi-character short versions of options, so where these were present they have had to be dropped. The long form options are named the same, and any single character short forms are preserved.
+* Boost `program_options` doesn't allow multi-character short versions of options, so where these were present they have had to be dropped. The long form options are named the same way, and any single-character short forms are preserved.
-* `rpicam-still` and `rpicam-jpeg` do not show the capture image in the preview window.
+* `rpicam-still` and `rpicam-jpeg` do not show the captured image in the preview window.
-* `libcamera` performs its own camera mode selection, so the `--mode` option is not supported. It deduces camera modes from the resolutions requested. There is still work ongoing in this area.
+* `rpicam-apps` removed the following `raspicam` features:
++
+** opacity (`--opacity`)
+** image effects (`--imxfx`)
+** colour effects (`--colfx`)
+** annotation (`--annotate`, `--annotateex`)
+** dynamic range compression, or DRC (`--drc`)
+** stereo (`--stereo`, `--decimate` and `--3dswap`)
+** image stabilisation (`--vstab`)
+** demo modes (`--demo`)
++
+xref:camera_software.adoc#post-processing-with-rpicam-apps[Post-processing] replaced many of these features.
-* The following features of the legacy apps are not supported as the code has to run on the ARM now. But note that a number of these effects are now provided by the xref:camera_software.adoc#post-processing[post-processing] mechanism.
- - opacity (`--opacity`)
- - image effects (`--imxfx`)
- - colour effects (`--colfx`)
- - annotation (`--annotate`, `--annotateex`)
- - dynamic range compression, or DRC (`--drc`)
+* `rpicam-apps` removed support for 90° and 270° rotations from the xref:camera_software.adoc#rotation[`rotation`] option.
-* stereo (`--stereo`, `--decimate` and `--3dswap`). There is no support in `libcamera` for stereo currently.
+* `raspicam` conflated metering and exposure; `rpicam-apps` separates these options.
+* To disable Auto White Balance (AWB) in `rpicam-apps`, set a pair of colour gains with xref:camera_software.adoc#awbgains[`awbgains`] (e.g. `1.0,1.0`).
-* There is no image stabilisation (`--vstab`) (though the legacy implementation does not appear to do very much).
+* `rpicam-apps` cannot set Auto White Balance (AWB) into greyworld mode for NoIR camera modules. Instead, pass the xref:camera_software.adoc#tuning-file[`tuning-file`] option a NoIR-specific tuning file like `imx219_noir.json`.
-* There are no demo modes (`--demo`).
+* `rpicam-apps` does not provide explicit control of digital gain. Instead, the xref:camera_software.adoc#gain[`gain`] option sets it implicitly.
-* The transformations supported are those that do not involve a transposition. 180 degree rotations, therefore, are among those permitted but 90 and 270 degree rotations are not.
+* `rpicam-apps` removed the `--ISO` option. Instead, calculate the gain corresponding to the ISO value required. Vendors can provide mappings of gain to ISO.
-* There are some differences in the metering, exposure and AWB options. In particular the legacy apps conflate metering (by which we mean the "metering mode") and the exposure (by which we now mean the "exposure profile"). With regards to AWB, to turn it off you have to set a pair of colour gains (e.g. `--awbgains 1.0,1.0`).
+* `rpicam-apps` does not support setting a flicker period.
-* `libcamera` has no mechanism to set the AWB into "grey world" mode, which is useful for "NOIR" camera modules. However, tuning files are supplied which switch the AWB into the correct mode, so for example, you could use `rpicam-hello --tuning-file /usr/share/libcamera/ipa/rpi/vc4/imx219_noir.json` (for Pi 4 and earlier devices) or `rpicam-hello --tuning-file /usr/share/libcamera/ipa/rpi/pisp/imx219_noir.json` (Pi 5 and later devices).
-
-* There is support for setting the exposure time (`--shutter`) and analogue gain (`--analoggain` or just `--gain`). There is no explicit control of the digital gain; you get this if the gain requested is larger than the analogue gain can deliver by itself.
-
-* libcamera has no understanding of ISO, so there is no `--ISO` option. Users should calculate the gain corresponding to the ISO value required (usually a manufacturer will tell you that, for example, a gain of 1 corresponds to an ISO of 40), and use the `--gain` parameter instead.
-
-* There is no support for setting the flicker period yet.
-
-* `rpicam-still` does not support burst capture. In fact, because the JPEG encoding is not multi-threaded and pipelined it would produce quite poor framerates. Instead, users are advised to consider using `rpicam-vid` in MJPEG mode instead (and `--segment 1` can be used to force each frame into a separate JPEG file).
-
-* `libcamera` uses open source drivers for all the image sensors, so the mechanism for enabling or disabling on-sensor DPC (Defective Pixel Correction) is different. The imx477 (HQ cam) driver enables on-sensor DPC by default; to disable it the user should, as root, enter
+* `rpicam-still` does not support burst capture. Instead, consider using `rpicam-vid` in MJPEG mode with `--segment 1` to force each frame into a separate file.
+* `rpicam-apps` uses open source drivers for all image sensors, so the mechanism for enabling or disabling on-sensor Defective Pixel Correction (DPC) is different. The imx477 driver on the Raspberry Pi HQ Camera enables on-sensor DPC by default. To disable on-sensor DPC on the HQ Camera, run the following command:
++
+[source,console]
----
-echo 0 > /sys/module/imx477/parameters/dpc_enable
+$ echo 0 | sudo tee /sys/module/imx477/parameters/dpc_enable
----
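The MJPEG alternative to burst capture mentioned above might look like the following (the duration and output filename pattern are illustrative):

```shell
# Capture ~5 s of MJPEG; --segment 1 forces one JPEG file per frame
rpicam-vid -t 5000 --codec mjpeg --segment 1 -o frame_%05d.jpg
```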
diff --git a/documentation/asciidoc/computers/camera/libcamera_known_issues.adoc b/documentation/asciidoc/computers/camera/libcamera_known_issues.adoc
deleted file mode 100644
index e19223d5f5..0000000000
--- a/documentation/asciidoc/computers/camera/libcamera_known_issues.adoc
+++ /dev/null
@@ -1,7 +0,0 @@
-=== Known Issues
-
-We are aware of the following issues in `libcamera` and `rpicam-apps`.
-
-* On Raspberry Pi 3 (and earlier devices) the graphics hardware can only support images up to 2048x2048 pixels which places a limit on the camera images that can be resized into the preview window. In practice this means that video encoding of images larger than 2048 pixels across (which would necessarily be using a codec other than h.264) will not support, or will produce corrupted, preview images. For Raspberry Pi 4 the limit is 4096 pixels. We would recommend using the `-n` (no preview) option for the time being.
-
-* The preview window shows some display tearing when using a desktop environment. This is not likely to be fixable.
diff --git a/documentation/asciidoc/computers/camera/libcamera_python.adoc b/documentation/asciidoc/computers/camera/libcamera_python.adoc
index b2dc7fad7e..d14a170684 100644
--- a/documentation/asciidoc/computers/camera/libcamera_python.adoc
+++ b/documentation/asciidoc/computers/camera/libcamera_python.adoc
@@ -1,47 +1,26 @@
-=== Python Bindings for `libcamera`
+[[picamera2]]
+=== Use `libcamera` from Python with Picamera2
-The https://github.com/raspberrypi/picamera2[Picamera2 library] is a rpicam-based replacement for Picamera, which was a Python interface to Raspberry Pi's legacy camera stack. Picamera2 presents an easy to use Python API.
+The https://github.com/raspberrypi/picamera2[Picamera2 library] is a `rpicam`-based replacement for Picamera, which was a Python interface to Raspberry Pi's legacy camera stack. Picamera2 presents an easy-to-use Python API.
-Documentation about Picamera2 is available https://github.com/raspberrypi/picamera2[on Github] and in the https://datasheets.raspberrypi.com/camera/picamera2-manual.pdf[Picamera2 Manual].
+Documentation about Picamera2 is available https://github.com/raspberrypi/picamera2[on GitHub] and in the https://datasheets.raspberrypi.com/camera/picamera2-manual.pdf[Picamera2 manual].
==== Installation
-Picamera2 is only supported on Raspberry Pi OS Bullseye (or later) images, both 32- and 64-bit.
+Recent Raspberry Pi OS images include Picamera2 with all the GUI (Qt and OpenGL) dependencies. Recent Raspberry Pi OS Lite images include Picamera2 without the GUI dependencies, although preview images can still be displayed using DRM/KMS.
-NOTE: As of September 2022, Picamera2 is pre-installed on images downloaded from Raspberry Pi. It works on all Raspberry Pi boards right down to the Pi Zero, although performance in some areas may be worse on less powerful devices.
-
-Picamera2 is not supported on:
-
-. Images based on Buster or earlier releases.
-. Bullseye images where the legacy camera stack has been re-enabled.
-
-On Raspberry Pi OS images, Picamera2 is now installed with all the GUI (Qt and OpenGL) dependencies. On Raspberry Pi OS Lite, it is installed without the GUI dependencies, although preview images can still be displayed using DRM/KMS. If these users wish to use the additional GUI features, they will need to run
-
-----
-$ sudo apt install -y python3-pyqt5 python3-opengl
-----
-
-NOTE: No changes are required to Picamera2 itself.
-
-If your image did not come pre-installed with Picamera2 `apt` is the recommended way of installing and updating Picamera2.
-
-----
-$ sudo apt update
-$ sudo apt upgrade
-----
-
-Thereafter, you can install Picamera2 with all the GUI (Qt and OpenGL) dependencies using
+If your image did not include Picamera2, run the following command to install Picamera2 with all of the GUI dependencies:
+[source,console]
----
$ sudo apt install -y python3-picamera2
----
-If you do not want the GUI dependencies, use
+To install Picamera2 without the GUI dependencies, run the following command instead:
+[source,console]
----
$ sudo apt install -y python3-picamera2 --no-install-recommends
----
-NOTE: If you have installed Picamera2 previously using `pip`, then you should also uninstall this, using the command `pip3 uninstall picamera2`.
-
-NOTE: If Picamera2 is already installed, you can update it with `sudo apt install -y python3-picamera2`, or as part of a full system update (for example, `sudo apt upgrade`).
+NOTE: If you previously installed Picamera2 with `pip`, uninstall it with: `pip3 uninstall picamera2`.
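+
+Once installed, you can exercise Picamera2 with just a few lines of Python. The following minimal sketch captures a single still image; it assumes a camera module is attached and detected, and uses a hypothetical output filename `test.jpg`:
+
+[source,python]
+----
+from picamera2 import Picamera2
+
+# Start the camera with default settings and save one still frame to disk
+picam2 = Picamera2()
+picam2.start_and_capture_file("test.jpg")
+----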
diff --git a/documentation/asciidoc/computers/camera/qt.adoc b/documentation/asciidoc/computers/camera/qt.adoc
index aeb31b9b6c..66aa9bb9e0 100644
--- a/documentation/asciidoc/computers/camera/qt.adoc
+++ b/documentation/asciidoc/computers/camera/qt.adoc
@@ -1,15 +1,16 @@
-=== Using _libcamera_ and _Qt_ together
+=== Use `libcamera` with Qt
-_Qt_ is a popular application framework and GUI toolkit, and indeed _rpicam-apps_ optionally makes use of it to implement a camera preview window.
+Qt is a popular application framework and GUI toolkit. `rpicam-apps` includes an option to use Qt for a camera preview window.
-However, _Qt_ defines certain symbols as macros in the global namespace (such as `slot` and `emit`) and this causes errors when including _libcamera_ files. The problem is common to all platforms trying to use both _Qt_ and _libcamera_ and not specific to Raspberry Pi. Nonetheless we suggest that developers experiencing difficulties try the following workarounds.
+Unfortunately, Qt defines certain symbols (such as `slot` and `emit`) as macros in the global namespace. This causes errors when including `libcamera` files. The problem is common to all platforms that use both Qt and `libcamera`. Try the following workarounds to avoid these errors:
-1. _libcamera_ include files, or files that include _libcamera_ files (such as _rpicam-apps_ files), should be listed before any _Qt_ header files where possible.
+* List `libcamera` include files, or files that include `libcamera` files (such as `rpicam-apps` files), _before_ any Qt header files whenever possible.
-2. If you do need to mix your Qt application files with libcamera includes, replace `signals:` with `Q_SIGNALS:`, `slots:` with `Q_SLOTS:`, `emit` with `Q_EMIT` and `foreach` with `Q_FOREACH`.
+* If you do need to mix your Qt application files with `libcamera` includes, replace `signals:` with `Q_SIGNALS:`, `slots:` with `Q_SLOTS:`, `emit` with `Q_EMIT` and `foreach` with `Q_FOREACH`.
-3. Before any _libcamera_ include files, add
+* Before any `libcamera` include files, add the following:
+
+[source,cpp]
----
#undef signals
#undef slots
@@ -17,6 +18,5 @@ However, _Qt_ defines certain symbols as macros in the global namespace (such as
#undef foreach
----
-4. If you are using _qmake_, add `CONFIG += no_keywords` to the project file. If using _cmake_, add `SET(QT_NO_KEYWORDS ON)`.
-
-We are not aware of any plans for the underlying library problems to be addressed.
+* If your project uses `qmake`, add `CONFIG += no_keywords` to the project file.
+* If your project uses `cmake`, add `SET(QT_NO_KEYWORDS ON)`.
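+
+For example, the relevant line in a hypothetical qmake project file looks like this:
+
+[source]
+----
+# myapp.pro (hypothetical project file)
+CONFIG += no_keywords
+----
+
+The equivalent setting in a hypothetical CMake project:
+
+[source,cmake]
+----
+# CMakeLists.txt (hypothetical)
+set(QT_NO_KEYWORDS ON)
+----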
diff --git a/documentation/asciidoc/computers/camera/rpicam_apps_building.adoc b/documentation/asciidoc/computers/camera/rpicam_apps_building.adoc
index c5c7ebe69b..306e9cfb84 100644
--- a/documentation/asciidoc/computers/camera/rpicam_apps_building.adoc
+++ b/documentation/asciidoc/computers/camera/rpicam_apps_building.adoc
@@ -1,174 +1,293 @@
-=== Building `libcamera` and `rpicam-apps`
+== Advanced `rpicam-apps`
-Building `libcamera` and `rpicam-apps` for yourself can bring the following benefits.
+=== Build `libcamera` and `rpicam-apps`
+
+Build `libcamera` and `rpicam-apps` yourself to gain the following benefits:
* You can pick up the latest enhancements and features.
* `rpicam-apps` can be compiled with extra optimisation for Raspberry Pi 3 and Raspberry Pi 4 devices running a 32-bit OS.
-* You can include the various optional OpenCV and/or TFLite post-processing stages (or add your own).
+* You can include optional OpenCV and/or TFLite post-processing stages, or add your own.
+
+* You can customise or add your own applications derived from `rpicam-apps`.
+
+==== Remove pre-installed `rpicam-apps`
-* You can customise or add your own applications derived from `rpicam-apps`.
+Raspberry Pi OS includes a pre-installed copy of `rpicam-apps`. Before building and installing your own version of `rpicam-apps`, you must first remove the pre-installed version. Run the following command to remove the `rpicam-apps` package from your Raspberry Pi:
-NOTE: When building on a Raspberry Pi with 1GB or less of RAM, there is a risk that the device may run out of swap and fail. We recommend either increasing the amount of swap, or building with fewer threads (the `-j` option to `ninja` and to `make`).
+[source,console]
+----
+$ sudo apt remove --purge rpicam-apps
+----
-==== Building `rpicam-apps` without rebuilding `libcamera`
+==== Building `rpicam-apps` without building `libcamera`
-You can rebuild `rpicam-apps` _without_ first rebuilding the whole of `libcamera` and `libepoxy`. If you do not need support for the GLES/EGL preview window then `libepoxy` can be omitted entirely. Mostly this will include Raspberry Pi OS Lite users, and they must be sure to use `-Denable_egl=false` when running `meson setup` later. These users should run:
+To build `rpicam-apps` without first rebuilding `libcamera` and `libepoxy`, install `libcamera`, `libepoxy` and their dependencies with `apt`:
+[source,console]
----
-sudo apt install -y libcamera-dev libjpeg-dev libtiff5-dev
+$ sudo apt install -y libcamera-dev libepoxy-dev libjpeg-dev libtiff5-dev libpng-dev
----
-All other users should execute:
+TIP: If you do not need support for the GLES/EGL preview window, omit `libepoxy-dev`.
+To use the Qt preview window, install the following additional dependencies:
+
+[source,console]
----
-sudo apt install -y libcamera-dev libepoxy-dev libjpeg-dev libtiff5-dev
+$ sudo apt install -y qtbase5-dev libqt5core5a libqt5gui5 libqt5widgets5
----
-If you want to use the Qt preview window, please also execute
+For xref:camera_software.adoc#libav-integration-with-rpicam-vid[`libav`] support in `rpicam-vid`, install the following additional dependencies:
+[source,console]
----
-sudo apt install -y qtbase5-dev libqt5core5a libqt5gui5 libqt5widgets5
+$ sudo apt install -y libavcodec-dev libavdevice-dev libavformat-dev libswresample-dev
----
-If you want xref:camera_software.adoc#libav-integration-with-rpicam-vid[libav] support in `rpicam-vid`, additional libraries must be installed:
+If you run Raspberry Pi OS Lite, install `git`:
+[source,console]
----
-sudo apt install libavcodec-dev libavdevice-dev libavformat-dev libswresample-dev
+$ sudo apt install -y git
----
-Now proceed directly to the instructions for xref:camera_software.adoc#building-rpicam-apps[building `rpicam-apps`]. Raspberry Pi OS Lite users should check that _git_ is installed first (`sudo apt install -y git`).
+Next, xref:camera_software.adoc#building-rpicam-apps[build `rpicam-apps`].
==== Building `libcamera`
-Rebuilding `libcamera` from scratch should be necessary only if you need the latest features that may not yet have reached the `apt` repositories, or if you need to customise its behaviour in some way.
+NOTE: Only build `libcamera` from scratch if you need custom behaviour or the latest features that have not yet reached `apt` repositories.
+
+[NOTE]
+======
+If you run Raspberry Pi OS Lite, begin by installing the following packages:
-First install all the necessary dependencies for `libcamera`.
+[source,console]
+----
+$ sudo apt install -y python3-pip git python3-jinja2
+----
+======
-NOTE: Raspberry Pi OS Lite users will first need to install the following additional packages if they have not done so previously:
+First, install the following `libcamera` dependencies:
+[source,console]
----
-sudo apt install -y python3-pip git python3-jinja2
+$ sudo apt install -y libboost-dev
+$ sudo apt install -y libgnutls28-dev openssl libtiff5-dev pybind11-dev
+$ sudo apt install -y qtbase5-dev libqt5core5a libqt5gui5 libqt5widgets5
+$ sudo apt install -y meson cmake
+$ sudo apt install -y python3-yaml python3-ply
+$ sudo apt install -y libglib2.0-dev libgstreamer-plugins-base1.0-dev
----
-All users should then install the following:
+Now we're ready to build `libcamera` itself.
+Download a local copy of Raspberry Pi's fork of `libcamera` from GitHub:
+
+[source,console]
----
-sudo apt install -y libboost-dev
-sudo apt install -y libgnutls28-dev openssl libtiff5-dev pybind11-dev
-sudo apt install -y qtbase5-dev libqt5core5a libqt5gui5 libqt5widgets5
-sudo apt install -y meson cmake
-sudo apt install -y python3-yaml python3-ply
+$ git clone https://github.com/raspberrypi/libcamera.git
----
-In the `meson` commands below we have enabled the _gstreamer_ plugin. If you _do not_ need this you can set `-Dgstreamer=disabled` instead and the next pair of dependencies will not be required. But if you do leave _gstreamer_ enabled, then you will need the following:
+Navigate into the root directory of the repository:
+[source,console]
----
-sudo apt install -y libglib2.0-dev libgstreamer-plugins-base1.0-dev
+$ cd libcamera
----
-Now we can check out and build `libcamera` itself. We check out Raspberry Pi's fork of libcamera which tracks the official repository but lets us control exactly when we pick up new features.
+Next, run `meson` to configure the build environment:
+[source,console]
----
-cd
-git clone https://github.com/raspberrypi/libcamera.git
-cd libcamera
+$ meson setup build --buildtype=release -Dpipelines=rpi/vc4,rpi/pisp -Dipas=rpi/vc4,rpi/pisp -Dv4l2=true -Dgstreamer=enabled -Dtest=false -Dlc-compliance=disabled -Dcam=disabled -Dqcam=disabled -Ddocumentation=disabled -Dpycamera=enabled
----
-Next, please run
+NOTE: You can disable the `gstreamer` plugin by replacing `-Dgstreamer=enabled` with `-Dgstreamer=disabled` during the `meson` build configuration. If you disable `gstreamer`, there is no need to install the `libglib2.0-dev` and `libgstreamer-plugins-base1.0-dev` dependencies.
+
+Now, you can build `libcamera` with `ninja`:
+[source,console]
----
-meson setup build --buildtype=release -Dpipelines=rpi/vc4,rpi/pisp -Dipas=rpi/vc4,rpi/pisp -Dv4l2=true -Dgstreamer=enabled -Dtest=false -Dlc-compliance=disabled -Dcam=disabled -Dqcam=disabled -Ddocumentation=disabled -Dpycamera=enabled
+$ ninja -C build
----
-To complete the `libcamera` build, use
+Finally, run the following command to install your freshly built `libcamera` binary:
+[source,console]
----
-ninja -C build # use -j 2 on Raspberry Pi 3 or earlier devices
-sudo ninja -C build install
+$ sudo ninja -C build install
----
-NOTE: At the time of writing `libcamera` does not yet have a stable binary interface. Therefore, if you have rebuilt `libcamera` we recommend continuing and rebuilding `rpicam-apps` from scratch too.
+TIP: On devices with 1GB of memory or less, the build may exceed available memory. Append the `-j 1` flag to `ninja` commands to limit the build to a single process. This should prevent the build from exceeding available memory on devices like the Raspberry Pi Zero and the Raspberry Pi 3.
-==== Building `libepoxy`
+`libcamera` does not yet have a stable binary interface. Always build `rpicam-apps` after you build `libcamera`.
-Rebuilding `libepoxy` should not normally be necessary as this library changes only very rarely. If you do want to build it from scratch, however, please follow the instructions below.
+==== Building `rpicam-apps`
-Start by installing the necessary dependencies.
+First, install the following `rpicam-apps` dependencies:
+[source,console]
----
-sudo apt install -y libegl1-mesa-dev
+$ sudo apt install -y cmake libboost-program-options-dev libdrm-dev libexif-dev
+$ sudo apt install -y meson ninja-build
----
-Next, check out and build `libepoxy`.
+Download a local copy of Raspberry Pi's `rpicam-apps` GitHub repository:
+[source,console]
----
-cd
-git clone https://github.com/anholt/libepoxy.git
-cd libepoxy
-mkdir _build
-cd _build
-meson
-ninja
-sudo ninja install
+$ git clone https://github.com/raspberrypi/rpicam-apps.git
----
-==== Building `rpicam-apps`
+Navigate into the root directory of the repository:
-First fetch the necessary dependencies for `rpicam-apps`.
+[source,console]
+----
+$ cd rpicam-apps
+----
+For desktop-based operating systems like Raspberry Pi OS, configure the `rpicam-apps` build with the following `meson` command:
+
+[source,console]
----
-sudo apt install -y cmake libboost-program-options-dev libdrm-dev libexif-dev
-sudo apt install -y meson ninja-build
+$ meson setup build -Denable_libav=enabled -Denable_drm=enabled -Denable_egl=enabled -Denable_qt=enabled -Denable_opencv=disabled -Denable_tflite=disabled -Denable_hailo=disabled
----
-The `rpicam-apps` build process begins with the following:
+For headless operating systems like Raspberry Pi OS Lite, configure the `rpicam-apps` build with the following `meson` command:
+[source,console]
----
-cd
-git clone https://github.com/raspberrypi/rpicam-apps.git
-cd rpicam-apps
+$ meson setup build -Denable_libav=disabled -Denable_drm=enabled -Denable_egl=disabled -Denable_qt=disabled -Denable_opencv=disabled -Denable_tflite=disabled -Denable_hailo=disabled
----
-At this point you will need to run `meson setup` after deciding what extra flags to pass it. The valid flags are:
+[TIP]
+======
-* `-Dneon_flags=armv8-neon` - you may supply this when building for Raspberry Pi 3 or Raspberry Pi 4 devices running a 32-bit OS. Some post-processing features may run more quickly.
+* Use `-Dneon_flags=armv8-neon` to enable optimisations for 32-bit OSes on Raspberry Pi 3 or Raspberry Pi 4.
+* Use `-Denable_opencv=enabled` if you have installed OpenCV and wish to use OpenCV-based post-processing stages.
+* Use `-Denable_tflite=enabled` if you have installed TensorFlow Lite and wish to use it in post-processing stages.
+* Use `-Denable_hailo=enabled` if you have installed HailoRT and wish to use it in post-processing stages.
-* `-Denable_libav=true` or `-Denable_libav=false` - this enables or disables the libav encoder integration.
+======
-* `-Denable_drm=true` or `-Denable_drm=false` - this enables or disables the DRM/KMS preview rendering. This is what implements the preview window when a desktop environment is not running.
+You can now build `rpicam-apps` with the following command:
-* `-Denable_egl=true` or `-Denable_egl=false` - this enables or disables the desktop environment-based preview. You should disable this if your system does not have a desktop environment installed.
+[source,console]
+----
+$ meson compile -C build
+----
-* `-Denable_qt=true` or `-Denable_qt=false` - this enables or disables support for the Qt-based implementation of the preview window. You should disable it if you do not have a desktop environment installed, or if you have no intention of using the Qt-based preview window. The Qt-based preview is normally not recommended because it is computationally very expensive, however it does work with X display forwarding.
+TIP: On devices with 1GB of memory or less, the build may exceed available memory. Append the `-j 1` flag to `meson` commands to limit the build to a single process. This should prevent the build from exceeding available memory on devices like the Raspberry Pi Zero and the Raspberry Pi 3.
-* `-Denable_opencv=true` or `-Denable_opencv=false` - you may choose one of these to force OpenCV-based post-processing stages to be linked (or not). If you enable them, then OpenCV must be installed on your system. Normally they will be built by default if OpenCV is available.
+Finally, run the following command to install your freshly built `rpicam-apps` binary:
-* `-Denable_tflite=true` or `-Denable_tflite=false` - choose one of these to enable TensorFlow Lite post-processing stages (or not). By default they will not be enabled. If you enable them then TensorFlow Lite must be available on your system. Depending on how you have built and/or installed TFLite, you may need to tweak the `meson.build` file in the `post_processing_stages` directory.
+[source,console]
+----
+$ sudo meson install -C build
+----
-For Raspberry Pi OS users we recommend the following `meson setup` command:
+[TIP]
+====
+The command above should automatically update the `ldconfig` cache. If you have trouble accessing your new `rpicam-apps` build, run the following command to update the cache:
+[source,console]
----
-meson setup build -Denable_libav=true -Denable_drm=true -Denable_egl=true -Denable_qt=true -Denable_opencv=false -Denable_tflite=false
+$ sudo ldconfig
----
+====
-and for Raspberry Pi OS Lite users:
+Run the following command to check that your device uses the new binary:
+[source,console]
----
-meson setup build -Denable_libav=false -Denable_drm=true -Denable_egl=false -Denable_qt=false -Denable_opencv=false -Denable_tflite=false
+$ rpicam-still --version
----
-In both cases, consider `-Dneon_flags=armv8-neon` if you are using a 32-bit OS on a Raspberry Pi 3 or Raspberry Pi 4. Consider `-Denable_opencv=true` if you have installed _OpenCV_ and wish to use OpenCV-based post-processing stages. Finally also consider `-Denable_tflite=true` if you have installed _TensorFlow Lite_ and wish to use it in post-processing stages.
+The output should include the date and time of your local `rpicam-apps` build.
+
+Finally, follow the `dtoverlay` and display driver instructions in the xref:camera_software.adoc#configuration[Configuration section].
+
+==== `rpicam-apps` meson flag reference
+
+The `meson` build configuration for `rpicam-apps` supports the following flags:
+
+`-Dneon_flags=armv8-neon`:: Speeds up certain post-processing features on Raspberry Pi 3 or Raspberry Pi 4 devices running a 32-bit OS.
+
+`-Denable_libav=enabled`:: Enables or disables `libav` encoder integration.
+
+`-Denable_drm=enabled`:: Enables or disables **DRM/KMS preview rendering**, a preview window used in the absence of a desktop environment.
+
+`-Denable_egl=enabled`:: Enables or disables the non-Qt desktop environment-based preview. Disable if your system lacks a desktop environment.
+
+`-Denable_qt=enabled`:: Enables or disables support for the Qt-based implementation of the preview window. Disable if you do not have a desktop environment installed or if you have no intention of using the Qt-based preview window. The Qt-based preview is normally not recommended because it is computationally very expensive, however it does work with X display forwarding.
+
+`-Denable_opencv=enabled`:: Forces OpenCV-based post-processing stages to link or not link. Requires OpenCV to enable. Defaults to `disabled`.
-After executing the `meson setup` command of your choice, the whole process concludes with the following:
+`-Denable_tflite=enabled`:: Enables or disables TensorFlow Lite post-processing stages. Disabled by default. Requires TensorFlow Lite to enable. Depending on how you have built and/or installed TFLite, you may need to tweak the `meson.build` file in the `post_processing_stages` directory.
+`-Denable_hailo=enabled`:: Enables or disables HailoRT-based post-processing stages. Requires HailoRT to enable. Defaults to `auto`.
+
+`-Ddownload_hailo_models=true`:: Downloads and installs models for HailoRT post-processing stages. Requires `wget` to be installed. Defaults to `true`.
+
+
+Each of the above options (except for `neon_flags` and `download_hailo_models`) supports the following values:
+
+* `enabled`: enables the option, fails the build if dependencies are not available
+* `disabled`: disables the option
+* `auto`: enables the option if dependencies are available
+
+==== Building `libepoxy`
+
+Rebuilding `libepoxy` should not normally be necessary as this library changes only very rarely. If you do want to build it from scratch, however, please follow the instructions below.
+
+Start by installing the necessary dependencies.
+
+[source,console]
+----
+$ sudo apt install -y libegl1-mesa-dev
+----
+
+Next, download a local copy of the `libepoxy` repository from GitHub:
+
+[source,console]
----
-meson compile -C build # use -j1 on Raspberry Pi 3 or earlier devices
-sudo meson install -C build
-sudo ldconfig # this is only necessary on the first build
+$ git clone https://github.com/anholt/libepoxy.git
----
-NOTE: If you are using an image where `rpicam-apps` have been previously installed as an `apt` package, and you want to run the new `rpicam-apps` executables from the same terminal window where you have just built and installed them, you may need to run `hash -r` to be sure to pick up the new ones over the system supplied ones.
+Navigate into the root directory of the repository:
-Finally, if you have not already done so, please be sure to follow the `dtoverlay` and display driver instructions in the xref:camera_software.adoc#getting-started[Getting Started section] (and rebooting if you changed anything there).
+[source,console]
+----
+$ cd libepoxy
+----
+
+Create a build directory at the root level of the repository, then navigate into that directory:
+
+[source,console]
+----
+$ mkdir _build
+$ cd _build
+----
+
+Next, run `meson` to configure the build environment:
+
+[source,console]
+----
+$ meson
+----
+
+Now, you can build `libepoxy` with `ninja`:
+
+[source,console]
+----
+$ ninja
+----
+
+Finally, run the following command to install your freshly built `libepoxy` binary:
+
+[source,console]
+----
+$ sudo ninja install
+----
diff --git a/documentation/asciidoc/computers/camera/rpicam_apps_getting_help.adoc b/documentation/asciidoc/computers/camera/rpicam_apps_getting_help.adoc
index 74d200ecb4..8cf2367bc0 100644
--- a/documentation/asciidoc/computers/camera/rpicam_apps_getting_help.adoc
+++ b/documentation/asciidoc/computers/camera/rpicam_apps_getting_help.adoc
@@ -1,15 +1,17 @@
-=== Getting Help
+== Getting help
-For further help with `libcamera` and the `rpicam-apps`, the first port of call will usually be the https://forums.raspberrypi.com/viewforum.php?f=43[Raspberry Pi Camera Forum]. Before posting, it's helpful to:
+For further help with `libcamera` and the `rpicam-apps`, check the https://forums.raspberrypi.com/viewforum.php?f=43[Raspberry Pi Camera forum]. Before posting:
* Make a note of your operating system version (`uname -a`).
* Make a note of your `libcamera` and `rpicam-apps` versions (`rpicam-hello --version`).
-* Please report the make and model of the camera module you are using. Note that when third party camera module vendors supply their own software then we are normally unable to offer any support and all queries should be directed back to the vendor.
+* Report the make and model of the camera module you are using.
-* Please also provide information on what kind of a Raspberry Pi you have, including memory size.
+* Report the software you are trying to use. We don't support third-party camera module vendor software.
-* If it seems like it might be relevant, please include any excerpts from the application's console output.
+* Report your Raspberry Pi model, including memory size.
-When it seems likely that there are specific problems in the camera software (such as crashes) then it may be more appropriate to https://github.com/raspberrypi/rpicam-apps[create an issue in the `rpicam-apps` Github repository]. Again, please include all the helpful details that you can.
+* Include any relevant excerpts from the application's console output.
+
+If there are specific problems in the camera software (such as crashes), consider https://github.com/raspberrypi/rpicam-apps[creating an issue in the `rpicam-apps` GitHub repository], including the same details listed above.
diff --git a/documentation/asciidoc/computers/camera/rpicam_apps_getting_started.adoc b/documentation/asciidoc/computers/camera/rpicam_apps_getting_started.adoc
deleted file mode 100644
index 8b0460e8dc..0000000000
--- a/documentation/asciidoc/computers/camera/rpicam_apps_getting_started.adoc
+++ /dev/null
@@ -1,72 +0,0 @@
-=== Getting Started
-
-==== Using the camera for the first time
-
-NOTE: On Raspberry Pi 3 and earlier devices running _Bullseye_ you need to re-enable _Glamor_ in order to make the X Windows hardware accelerated preview window work. To do this enter `sudo raspi-config` at a terminal window and then choose `Advanced Options`, `Glamor` and `Yes`. Finally quit `raspi-config` and let it reboot your Raspberry Pi.
-
-When running a recent version of Raspberry Pi OS, the 5 basic `rpicam-apps` are already installed. In this case, official Raspberry Pi cameras will also be detected and enabled automatically.
-
-You can check that everything is working by entering:
-
-[,bash]
-----
-rpicam-hello
-----
-
-You should see a camera preview window for about 5 seconds.
-
-NOTE: Raspberry Pi 3 and older devices running _Bullseye_ may not by default be using the correct display driver. Refer to the xref:config_txt.adoc#what-is-config-txt[`/boot/firmware/config.txt`] file and ensure that either `dtoverlay=vc4-fkms-v3d` or `dtoverlay=vc4-kms-v3d` is currently active. Please reboot if you needed to change this.
-
-==== If you do need to alter the configuration
-
-You may need to alter the camera configuration in your `/boot/firmware/config.txt` file if:
-
-* You are using a 3rd party camera (the manufacturer's instructions should explain the changes you need to make).
-
-* You are using an official Raspberry Pi camera but wish to use a non-standard driver/overlay.
-
-If you do need to add your own `dtoverlay`, the following are currently recognised.
-
-|===
-| Camera Module | In `/boot/firmware/config.txt`
-
-| V1 camera (OV5647)
-| `dtoverlay=ov5647`
-
-| V2 camera (IMX219)
-| `dtoverlay=imx219`
-
-| HQ camera (IMX477)
-| `dtoverlay=imx477`
-
-| GS camera (IMX296)
-| `dtoverlay=imx296`
-
-| Camera Module 3 (IMX708)
-| `dtoverlay=imx708`
-
-| IMX290 and IMX327
-| `dtoverlay=imx290,clock-frequency=74250000` or `dtoverlay=imx290,clock-frequency=37125000` (both modules share the imx290 kernel driver; please refer to instructions from the module vendor for the correct frequency)
-
-| IMX378
-| `dtoverlay=imx378`
-
-| OV9281
-| `dtoverlay=ov9281`
-|===
-
-To override the automatic camera detection, you will need to delete the entry `camera_auto_detect=1` if present in the `config.txt` file. Your Raspberry Pi will need to be rebooted after editing this file.
-
-NOTE: Setting `camera_auto_detect=0` disables the boot-time detection completely.
-
-=== Troubleshooting
-
-If the Camera Module isn't working correctly, there are number of things to try:
-
-* Is the ribbon cable attached to the Camera Serial Interface (CSI), not the Display Serial Interface (DSI)? The ribbon connector will fit into either port. The Camera port is located near the HDMI connector.
-* Are the ribbon connectors all firmly seated, and are they the right way round? They must be straight in their sockets.
-* Is the Camera Module connector, between the smaller black Camera Module itself and the PCB, firmly attached? Sometimes this connection can come loose during transit or when putting the Camera Module in a case. Using a fingernail, flip up the connector on the PCB, then reconnect it with gentle pressure. It engages with a very slight click. Don't force it; if it doesn't engage, it's probably slightly misaligned.
-* Have `sudo apt update` and `sudo apt full-upgrade` been run?
-* Is your power supply sufficient? The Camera Module adds about 200-250mA to the power requirements of your Raspberry Pi.
-* If you've checked all the above issues and the Camera Module is still not working, try posting on our forums for more help.
-
diff --git a/documentation/asciidoc/computers/camera/rpicam_apps_intro.adoc b/documentation/asciidoc/computers/camera/rpicam_apps_intro.adoc
index 51f3661a4a..4accca0a8d 100644
--- a/documentation/asciidoc/computers/camera/rpicam_apps_intro.adoc
+++ b/documentation/asciidoc/computers/camera/rpicam_apps_intro.adoc
@@ -1,40 +1,47 @@
-== `libcamera` and `rpicam-apps`
+== `rpicam-apps`
-[WARNING]
+[NOTE]
====
-`rpicam-apps` applications have been renamed from `libcamera-\*` to `rpicam-*`. Symbolic links are installed to allow users to keep using the old application names, but these will be deprecated soon. Users are encouraged to adopt the new application names as soon as possible.
+Raspberry Pi OS _Bookworm_ renamed the camera capture applications from ``libcamera-\*`` to ``rpicam-*``. Symbolic links allow users to use the old names for now. **Adopt the new application names as soon as possible.** Raspberry Pi OS versions prior to _Bookworm_ still use the ``libcamera-*`` name.
====
-=== Introduction
+Raspberry Pi supplies a small set of example `rpicam-apps`. These CLI applications, built on top of `libcamera`, capture images and video from a camera. The set includes:
-`libcamera` is a new software library aimed at supporting complex camera systems directly from the Linux operating system. In the case of the Raspberry Pi it enables us to drive the camera system directly from open source code running on ARM processors. The proprietary code running on the Broadcom GPU, and to which users have no access at all, is almost completely by-passed.
+* `rpicam-hello`: A "hello world"-equivalent for cameras, which starts a camera preview stream and displays it on the screen.
+* `rpicam-jpeg`: Runs a preview window, then captures high-resolution still images.
+* `rpicam-still`: Emulates many of the features of the original `raspistill` application.
+* `rpicam-vid`: Captures video.
+* `rpicam-raw`: Captures raw (unprocessed Bayer) frames directly from the sensor.
+* `rpicam-detect`: Not built by default, but users can build it if they have TensorFlow Lite installed on their Raspberry Pi. Captures JPEG images when certain objects are detected.
-`libcamera` presents a {cpp} API to applications and works at the level of configuring the camera and then allowing an application to request image frames. These image buffers reside in system memory and can be passed directly to still image encoders (such as JPEG) or to video encoders (such as h.264), though such ancillary functions as encoding images or displaying them are strictly beyond the purview of `libcamera` itself.
+Recent versions of Raspberry Pi OS include the five basic `rpicam-apps`, so you can record images and videos using a camera even on a fresh Raspberry Pi OS installation.
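+
+For example, with a camera attached and detected, the following command (a minimal check using the preview application listed above) displays a five-second preview:
+
+[source,console]
+----
+$ rpicam-hello -t 5s
+----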
-For this reason Raspberry Pi supplies a small set of example `rpicam-apps`. These are simple applications, built on top of `libcamera`, and are designed largely to emulate the function of the legacy stack built on Broadcom's proprietary GPU code (some users will recognise these legacy applications as `raspistill` and `raspivid`). The applications we provide are:
+Users can create their own `rpicam`-based applications with custom functionality to suit their own requirements. The https://github.com/raspberrypi/rpicam-apps[`rpicam-apps` source code] is freely available under a BSD-2-Clause licence.
-* _rpicam-hello_ A simple "hello world" application which starts a camera preview stream and displays it on the screen.
-* _rpicam-jpeg_ A simple application to run a preview window and then capture high resolution still images.
-* _rpicam-still_ A more complex still image capture application which emulates more of the features of the original `raspistill` application.
-* _rpicam-vid_ A video capture application.
-* _rpicam-raw_ A basic application for capturing raw (unprocessed Bayer) frames directly from the sensor.
-* _rpicam-detect_ This application is not built by default, but users can build it if they have TensorFlow Lite installed on their Raspberry Pi. It captures JPEG images when certain objects are detected.
+=== `libcamera`
-Raspberry Pi's `rpicam-apps` are not only command line applications that make it easy to capture images and video from the camera, they are also examples of how users can create their own rpicam-based applications with custom functionality to suit their own requirements. The source code for the `rpicam-apps` is freely available under a BSD 2-Clause licence at https://github.com/raspberrypi/rpicam-apps[].
+`libcamera` is an open-source software library aimed at supporting camera systems directly from the Linux operating system on Arm processors. Proprietary code running on the Broadcom GPU is minimised. For more information about `libcamera` see the https://libcamera.org[`libcamera` website].
-==== More about `libcamera`
+`libcamera` provides a {cpp} API that configures the camera, then allows applications to request image frames. These image buffers reside in system memory and can be passed directly to still image encoders (such as JPEG) or to video encoders (such as H.264). `libcamera` doesn't encode or display images itself: for that functionality, use `rpicam-apps`.
-`libcamera` is an open source Linux community project. More information is available at the https://libcamera.org[`libcamera` website].
+You can find the source code in the https://git.linuxtv.org/libcamera.git/[official libcamera repository]. The Raspberry Pi OS distribution uses a https://github.com/raspberrypi/libcamera.git[fork] to control updates.
-The `libcamera` source code can be found and checked out from the https://git.linuxtv.org/libcamera.git/[official libcamera repository], although we work from a https://github.com/raspberrypi/libcamera.git[fork] that lets us control when we get _libcamera_ updates.
+Underneath the `libcamera` core, we provide a custom pipeline handler. `libcamera` uses this layer to drive the sensor and image signal processor (ISP) on the Raspberry Pi. `libcamera` contains a collection of image-processing algorithms (IPAs) including auto exposure/gain control (AEC/AGC), auto white balance (AWB), and auto lens-shading correction (ALSC).
-Underneath the `libcamera` core, Raspberry Pi provides a custom _pipeline handler_, which is the layer that `libcamera` uses to drive the sensor and ISP (Image Signal Processor) on the Raspberry Pi itself. Also part of this is a collection of well-known _control algorithms_, or _IPAs_ (Image Processing Algorithms) in `libcamera` parlance, such as AEC/AGC (Auto Exposure/Gain Control), AWB (Auto White Balance), ALSC (Auto Lens Shading Correction) and so on.
+Raspberry Pi's implementation of `libcamera` supports the following cameras:
-All this code is open source and now runs on the Raspberry Pi's ARM cores. There is only a very thin layer of code on the GPU which translates Raspberry Pi's own control parameters into register writes for the Broadcom ISP.
-
-Raspberry Pi's implementation of `libcamera` supports not only the four standard Raspberry Pi cameras (the OV5647 or V1 camera, the IMX219 or V2 camera, the IMX477 or HQ camera and the IMX708 or Camera Module 3) but also third party senors such as the IMX290, IMX327, OV9281, IMX378. Raspberry Pi is keen to work with vendors who would like to see their sensors supported directly by `libcamera`.
-
-Moreover, Raspberry Pi supplies a _tuning file_ for each of these sensors which can be edited to change the processing performed by the Raspberry Pi hardware on the raw images received from the image sensor, including aspects like the colour processing, the amount of noise suppression or the behaviour of the control algorithms.
-
-For further information on `libcamera` for the Raspberry Pi, please consult the https://datasheets.raspberrypi.com/camera/raspberry-pi-camera-guide.pdf[Tuning Guide for the Raspberry Pi cameras and libcamera].
+* Official cameras:
+** OV5647 (V1)
+** IMX219 (V2)
+** IMX708 (V3)
+** IMX477 (HQ)
+** IMX500 (AI)
+** IMX296 (GS)
+* Third-party sensors:
+** IMX290
+** IMX327
+** IMX378
+** IMX519
+** OV9281
+To extend support to a new sensor, https://git.linuxtv.org/libcamera.git/[contribute to `libcamera`].
diff --git a/documentation/asciidoc/computers/camera/rpicam_apps_libav.adoc b/documentation/asciidoc/computers/camera/rpicam_apps_libav.adoc
deleted file mode 100644
index 1127ae25a2..0000000000
--- a/documentation/asciidoc/computers/camera/rpicam_apps_libav.adoc
+++ /dev/null
@@ -1,77 +0,0 @@
-=== libav integration with rpicam-vid
-
-`rpicam-vid` can use the ffmpeg/libav codec backend to encode audio and video streams and either save to a local file or stream over the network. At present, video is encoded through the hardware H.264 encoder, and audio is encoded by a number of available software encoders. To list the available output formats, use the `ffmpeg -formats` command.
-
-To enable the libav backend, use the `--codec libav` command line option. Once enabled, the following configuration options are available:
-
-----
- --libav-format, libav output format to be used
-----
-
-Set the libav output format to use. These output formats can be specified as containers (e.g. mkv, mp4, avi) or stream output (e.g. h264 or mpegts). If this option is not provided, libav tries to deduce the output format from the filename specified by the `-o` command line argument.
-
-Example: To save a video in an mkv container, the following commands are equivalent:
-
-----
-rpicam-vid --codec libav -o test.mkv
-rpicam-vid --codec libav --libav-format mkv -o test.raw
-----
-
-----
- --libav-audio, Enable audio recording
-----
-
-Set this option to enable audio encoding together with the video stream. When audio encoding is enabled, an output format that supports audio (e.g. mpegts, mkv, mp4) must be used.
-
-----
- --audio-codec, Selects the audio codec
-----
-
-Selects which software audio codec is used for encoding. By default `aac` is used. To list the available audio codecs, use the `ffmpeg -codec` command.
-
-----
- --audio-bitrate, Selects the audio bitrate
-----
-
-Sets the audio encoding bitrate in bits per second.
-
-Example: To record audio at 16 kilobits/sec with the mp2 codec use `rpicam-vid --codec libav -o test.mp4 --audio_codec mp2 --audio-bitrate 16384`
-
-----
- --audio-samplerate, Set the audio sampling rate
-----
-
-Set the audio sampling rate in Hz for encoding. Set to 0 (default) to use the input sample rate.
-
-----
- --audio-device, Chooses an audio recording device to use
-----
-
-Selects which ALSA input device to use for audio recording. The audio device string can be obtained with the following command:
-
-----
-$ pactl list | grep -A2 'Source #' | grep 'Name: '
- Name: alsa_output.platform-bcm2835_audio.analog-stereo.monitor
- Name: alsa_output.platform-fef00700.hdmi.hdmi-stereo.monitor
- Name: alsa_output.usb-GN_Netcom_A_S_Jabra_EVOLVE_LINK_000736B1214E0A-00.analog-stereo.monitor
- Name: alsa_input.usb-GN_Netcom_A_S_Jabra_EVOLVE_LINK_000736B1214E0A-00.mono-fallback
-----
-
-----
- --av-sync, Audio/Video sync control
-----
-This option can be used to shift the audio sample timestamp by a value given in microseconds relative to the video frame. Negative values may also be used.
-
-==== Network streaming with libav
-
-It is possible to use the libav backend as a network streaming source for audio/video. To do this, the output filename specified by the `-o` argument must be given as a protocol url, see https://ffmpeg.org/ffmpeg-protocols.html[ffmpeg protocols] for more details on protocol usage. Some examples:
-
-To stream audio/video using TCP
-----
-rpicam-vid -t 0 --codec libav --libav-format mpegts --libav-audio -o "tcp://0.0.0.0:1234?listen=1"
-----
-
-To stream audio/video using UDP
-----
-rpicam-vid -t 0 --codec libav --libav-format mpegts --libav-audio -o "udp://:"
-----
diff --git a/documentation/asciidoc/computers/camera/rpicam_apps_multicam.adoc b/documentation/asciidoc/computers/camera/rpicam_apps_multicam.adoc
index 94cfe06f34..fb387443ae 100644
--- a/documentation/asciidoc/computers/camera/rpicam_apps_multicam.adoc
+++ b/documentation/asciidoc/computers/camera/rpicam_apps_multicam.adoc
@@ -1,12 +1,68 @@
-=== Multiple Cameras Usage
+=== Use multiple cameras
-Basic support for multiple cameras is available within `rpicam-apps`. Multiple cameras may be attached to a Raspberry Pi in the following ways:
+`rpicam-apps` has basic support for multiple cameras. You can attach multiple cameras to a Raspberry Pi in the following ways:
-* Two cameras connected directly to a Raspberry Pi Compute Module board, see the xref:../computers/compute-module.adoc#attach-a-raspberry-pi-camera-module[Compute Module documentation] for further details.
-* Two or more cameras attached to a non-compute Raspberry Pi board using a Video Mux board, like https://www.arducam.com/product/multi-camera-v2-1-adapter-raspberry-pi/[this 3rd party product].
+* For Raspberry Pi Compute Modules, you can connect two cameras directly to a Raspberry Pi Compute Module I/O board. See the xref:../computers/compute-module.adoc#attach-a-camera-module[Compute Module documentation] for further details. With this method, you can _use both cameras simultaneously_.
+* For Raspberry Pi 5, you can connect two cameras directly to the board using the dual MIPI connectors.
+* For other Raspberry Pi devices with a camera port, you can attach two or more cameras with a Video Mux board such as https://www.arducam.com/product/multi-camera-v2-1-adapter-raspberry-pi/[this third-party product]. Since both cameras are attached to a single Unicam port, _only one camera may be used at a time_.
-In the latter case, only one camera may be used at a time since both cameras are attached to a single Unicam port. For the former, both cameras can run simultaneously.
+To list all the cameras available on your platform, use the xref:camera_software.adoc#list-cameras[`list-cameras`] option. To choose which camera to use, pass the camera index to the xref:camera_software.adoc#camera[`camera`] option.
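+
+For example, to enumerate the attached cameras and then open a preview on the second one (index 1):
+
+[source,console]
+----
+$ rpicam-hello --list-cameras
+$ rpicam-hello --camera 1
+----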
-To list all the cameras available on your platform, use the `--list-cameras` command line option. To choose which camera to use, use the `--camera ` option, and provide the index value of the requested camera.
+NOTE: `libcamera` does not yet provide stereoscopic camera support. When running two cameras simultaneously, they must be run in separate processes, meaning there is no way to synchronise 3A operation between them. As a workaround, you could synchronise the cameras through an external sync signal for the HQ (IMX477) camera or use the software camera synchronisation support that is described below, switching the 3A to manual mode if necessary.
-NOTE: `libcamera` does not yet provide stereoscopic camera support. When running two cameras simultaneously, they must be run in separate processes. This means there is no way to synchronise sensor framing or 3A operation between them. As a workaround, you could synchronise the cameras through an external sync signal for the HQ (IMX477) camera, and switch the 3A to manual mode if necessary.
+==== Software Camera Synchronisation
+
+Raspberry Pi's _libcamera_ implementation can synchronise the frames of different cameras using only software. This causes one camera to adjust its frame timing so as to coincide as closely as possible with the frames of another camera. No soldering or hardware connections are required, and it works with all of Raspberry Pi's camera modules, and even third-party ones, so long as their drivers implement frame duration control correctly.
+
+**How it works**
+
+The scheme works by designating one camera to be the _server_. The server will broadcast timing messages onto the network at regular intervals, such as once a second. Meanwhile other cameras, known as _clients_, can listen to these messages whereupon they may lengthen or shorten frame times slightly so as to pull them into sync with the server. This process is continual, though after the first adjustment, subsequent adjustments are normally small.
+
+The client cameras may be attached to the same Raspberry Pi device as the server, or they may be attached to different Raspberry Pis on the same network. The camera model on the clients may match the server, or they may be different.
+
+Clients and servers must run at the same nominal framerate (such as 30fps). Note that there is no back-channel from the clients to the server. It is solely the clients' responsibility to be up and running in time to match the server, and the server is completely unaware whether clients have synchronised successfully, or indeed whether there are any clients at all.
+
+In normal operation, running the same make of camera on the same Raspberry Pi, we would expect the frame start times of the camera images to match within "several tens of microseconds". When the camera models are different this could be significantly larger as the cameras will probably not be able to match framerates exactly and will therefore be continually drifting apart (and brought back together with every timing message).
+
+When cameras are on different devices, the system clocks should be synchronised using NTP (normally the case by default for Raspberry Pi OS) or, if this is insufficiently precise, another protocol such as PTP. Any discrepancy between system clocks feeds directly into extra error in frame start times, even though the advertised timestamps on the frames will not reveal it.
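+
+One way to check whether a device's system clock is NTP-synchronised (assuming the systemd tooling shipped with Raspberry Pi OS) is:
+
+[source,console]
+----
+$ timedatectl status
+----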
+
+**The Server**
+
+The server, as previously explained, broadcasts timing messages onto the network, by default every second. The server will run for a fixed number of frames, by default 100, after which it will inform the camera application on the device that the "synchronisation point" has been reached. At this moment, the application will start using the frames, so in the case of `rpicam-vid`, they will start being encoded and recorded. Recall that the behaviour and even existence of clients have no bearing on this.
+
+If required, there can be several servers on the same network so long as they are broadcasting timing messages to different network addresses. Clients, of course, will have to be configured to listen for the correct address.
+
+**Clients**
+
+Clients listen out for server timing messages and, when they receive one, will shorten or lengthen a camera frame duration by the required amount so that subsequent frames will start, as far as possible, at the same moment as the server's.
+
+The clients learn the correct "synchronisation point" from the server's messages, and just like the server, will signal the camera application at the same moment that it should start using the frames. So in the case of `rpicam-vid`, this is once again the moment at which frames will start being recorded.
+
+Normally it makes sense to start clients _before_ the server, as the clients will simply wait (the "synchronisation point" has not been reached) until a server is seen broadcasting onto the network. This obviously avoids timing problems where a server might reach its "synchronisation point" even before all the clients have been started!
+
+**Usage in `rpicam-vid`**
+
+We can use software camera synchronisation with `rpicam-vid` to record videos that are synchronised frame-by-frame. We're going to assume we have two cameras attached, and we're going to use camera 0 as the server, and camera 1 as the client. `rpicam-vid` defaults to a fixed 30 frames per second, which will be fine for us.
+
+First we should start the client:
+[source,console]
+----
+$ rpicam-vid -n -t 20s --camera 1 --codec libav -o client.mp4 --sync client
+----
+
+Note the `--sync client` parameter. This will record for 20 seconds but _only_ once the synchronisation point has been reached. If necessary, it will wait indefinitely for the first server message.
+
+To start the server:
+[source,console]
+----
+$ rpicam-vid -n -t 20s --camera 0 --codec libav -o server.mp4 --sync server
+----
+
+This too will run for 20 seconds counting from when the synchronisation point is reached and the recording starts. With the default synchronisation settings (100 frames at 30fps) this means there will be just over 3 seconds for clients to get synchronised.
+
+The server's broadcast address and port, the frequency of the timing messages and the number of frames to wait for clients to synchronise can all be changed in the camera tuning file. Clients only pay attention to the broadcast address, which should match the server's; the other information is ignored. Please refer to the https://datasheets.raspberrypi.com/camera/raspberry-pi-camera-guide.pdf[Raspberry Pi Camera tuning guide] for more information.
+
+In practical operation there are a few final points to be aware of:
+
+* The fixed framerate needs to be below the maximum framerate at which the camera can operate (in the camera mode that is being used). This is because the synchronisation algorithm may need to _shorten_ camera frames so that clients can catch up with the server, and this will fail if it is already running as fast as it can.
+* Whilst camera frames should be correctly synchronised, at higher framerates or depending on system load, it is possible for frames, either on the clients or server, to be dropped. In these cases the frame timestamps will help an application to work out what has happened, though it's usually simpler to try and avoid frame drops - perhaps by lowering the framerate, increasing the number of buffers being allocated to the camera queues (see the xref:camera_software.adoc#buffer-count[`--buffer-count` option]), or reducing system load.
\ No newline at end of file
diff --git a/documentation/asciidoc/computers/camera/rpicam_apps_packages.adoc b/documentation/asciidoc/computers/camera/rpicam_apps_packages.adoc
index 56453eb32f..031fcc44e1 100644
--- a/documentation/asciidoc/computers/camera/rpicam_apps_packages.adoc
+++ b/documentation/asciidoc/computers/camera/rpicam_apps_packages.adoc
@@ -1,31 +1,15 @@
-=== `libcamera` and `rpicam-apps` Packages
+=== Install `libcamera` and `rpicam-apps`
-A number of `apt` packages are provided for convenience. In order to access them, we recommend keeping your OS up to date xref:../computers/os.adoc#using-apt[in the usual way].
+Raspberry Pi provides two `rpicam-apps` packages:
-==== Binary Packages
+* `rpicam-apps` contains full applications with support for previews using a desktop environment. This package is pre-installed in Raspberry Pi OS.
-There are two `rpicam-apps` packages available, that contain the necessary executables:
-
-* `rpicam-apps` contains the full applications with support for previews using a desktop environment. This package is pre-installed in Raspberry Pi OS.
-
-* `rpicam-apps-lite` omits desktop environment support and only the DRM preview is available. This package is pre-installed in Raspberry Pi OS Lite.
+* `rpicam-apps-lite` omits desktop environment support, and only makes the DRM preview available. This package is pre-installed in Raspberry Pi OS Lite.
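+
+Both packages come pre-installed on the corresponding OS images; to reinstall one manually, use `apt` in the usual way, for example:
+
+[source,console]
+----
+$ sudo apt install rpicam-apps
+----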
==== Dependencies
-These applications depend on a number of library packages which are named _library-name_ where __ is a version number (actually the ABI, or Application Binary Interface, version), and which stands at zero at the time of writing. Thus we have the following:
-
-* The package `libcamera0` contains the `libcamera` libraries.
-
-* The package `libepoxy0` contains the `libepoxy` libraries.
-
-These will be installed automatically when needed.
-
-==== Dev Packages
-
-`rpicam-apps` can be rebuilt on their own without installing and building `libcamera` and `libepoxy` from scratch. To enable this, the following packages should be installed:
-
-* `rpicam-dev` contains the necessary `libcamera` header files and resources.
+`rpicam-apps` depends on library packages named `library-name<n>`, where `<n>` is the ABI version. Your package manager should install these automatically.
-* `libepoxy-dev` contains the necessary `libepoxy` header files and resources. You will only need this if you want support for the GLES/EGL preview window.
+==== Dev packages
-Subsequently `rpicam-apps` can be xref:camera_software.adoc#building-rpicam-apps-without-rebuilding-libcamera[checked out from GitHub and rebuilt].
+You can rebuild `rpicam-apps` without building `libcamera` and `libepoxy` from scratch. For more information, see xref:camera_software.adoc#building-rpicam-apps-without-building-libcamera[Building `rpicam-apps` without rebuilding `libcamera`].
diff --git a/documentation/asciidoc/computers/camera/rpicam_apps_post_processing.adoc b/documentation/asciidoc/computers/camera/rpicam_apps_post_processing.adoc
index c89a550fc9..339828d50f 100644
--- a/documentation/asciidoc/computers/camera/rpicam_apps_post_processing.adoc
+++ b/documentation/asciidoc/computers/camera/rpicam_apps_post_processing.adoc
@@ -1,216 +1,243 @@
-=== Post-Processing
+== Post-processing with `rpicam-apps`
-`rpicam-apps` share a common post-processing framework. This allows them to pass the images received from the camera system through a number of custom image processing and image analysis routines. Each such routine is known as a _post-processing stage_ and the description of exactly which stages should be run, and what configuration they may have, is supplied in a JSON file. Every stage, along with its source code, is supplied with a short example JSON file showing how to enable it.
+`rpicam-apps` share a common post-processing framework. This allows them to pass the images received from the camera system through a number of custom image-processing and image-analysis routines. Each such routine is known as a _stage_. To run post-processing stages, supply a JSON file instructing the application which stages and options to apply. You can find example JSON files that use the built-in post-processing stages in the https://github.com/raspberrypi/rpicam-apps/tree/main/assets[`assets` folder of the `rpicam-apps` repository].
-For example, the simple _negate_ stage (which "negates" all the pixels in an image, turning light pixels dark and vice versa) is supplied with a `negate.json` file that configures the post-processing pipeline to run it:
-
-`rpicam-hello --post-process-file /path/to/negate.json`
-
-TIP: Example JSON files can be found in the `assets` folder of the `rpicam-apps` repository at https://github.com/raspberrypi/rpicam-apps/tree/main/assets[].
-
-The negate stage is particularly trivial and has no configuration parameters of its own, therefore the JSON file merely has to name the stage, with no further information, and it will be run. Thus `negate.json` contains
+For example, the **negate** stage turns light pixels dark and dark pixels light. Because the negate stage is basic, requiring no configuration, `negate.json` just names the stage:
+[source,json]
----
{
- "negate":
- {
- }
+ "negate": {}
}
----
-To run multiple post-processing stages, the contents of the example JSON files merely need to be listed together, and the stages will be run in the order given. For example, to run the Sobel stage (which applies a Sobel filter to an image) followed by the negate stage we could create a custom JSON file containing
+To apply the negate stage to an image, pass `negate.json` to the `post-process-file` option:
+
+[source,console]
+----
+$ rpicam-hello --post-process-file negate.json
+----
+
+To run multiple post-processing stages, create a JSON file that contains multiple stages as top-level keys. For example, the following configuration runs the Sobel stage, then the negate stage:
+[source,json]
----
{
"sobel_cv":
{
"ksize": 5
},
- "negate":
- {
- }
+ "negate": {}
}
----
-The Sobel stage is implemented using OpenCV, hence `cv` in its name. Observe how it has a user-configurable parameter, `ksize` that specifies the kernel size of the filter to be used. In this case, the Sobel filter will produce bright edges on a black background, and the negate stage will turn this into dark edges on a white background, as shown.
+The xref:camera_software.adoc#sobel_cv-stage[Sobel stage] uses OpenCV, hence the `cv` suffix. It has a user-configurable parameter, `ksize`, that specifies the kernel size of the filter to be used. In this case, the Sobel filter produces bright edges on a black background, and the negate stage turns this into dark edges on a white background.
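+
+Assuming the combined configuration above is saved as (hypothetically) `sobel_negate.json`, run it the same way as a single stage:
+
+[source,console]
+----
+$ rpicam-hello --post-process-file sobel_negate.json
+----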
-image::images/sobel_negate.jpg[Image with Sobel and negate]
+.A negated Sobel filter.
+image::images/sobel_negate.jpg[A negated Sobel filter]
-Some stages actually alter the image in some way, and this is their primary function (such as _negate_). Others are primarily for image analysis, and while they may indicate something on the image, all they really do is generate useful information. For this reason we also have a very flexible form of _metadata_ that can be populated by the post-processing stages, and this will get passed all the way through to the application itself.
+Some stages, such as `negate`, alter the image in some way. Other stages analyse the image to generate metadata. Post-processing stages can pass this metadata to other stages and even the application.
-Image analysis stages often prefer to work on reduced resolution images. `rpicam-apps` are able to supply applications with a ready-made low resolution image provided directly by the ISP hardware, and this can be helpful in improving performance.
+To improve performance, image analysis often uses reduced resolution. `rpicam-apps` provide a dedicated low-resolution feed directly from the ISP.
-Furthermore, with the post-processing framework being completely open, Raspberry Pi welcomes the contribution of new and interesting stages from the community and would be happy to host them in our `rpicam-apps` repository. The stages that are currently available are documented below.
+NOTE: The `rpicam-apps` supplied with Raspberry Pi OS do not include OpenCV and TensorFlow Lite. As a result, certain post-processing stages that rely on them are disabled. To use these stages, xref:camera_software.adoc#build-libcamera-and-rpicam-apps[re-compile `rpicam-apps`]. On a Raspberry Pi 3 or 4 running a 32-bit kernel, compile with the `-DENABLE_COMPILE_FLAGS_FOR_TARGET=armv8-neon` flag to speed up certain stages.
-NOTE: The `rpicam-apps` supplied with the operating system will be built without any optional 3rd party libraries (such as OpenCV or TensorFlow Lite), meaning that certain post-processing stages that rely on them will not be enabled. To use these stages, please follow the instructions for xref:camera_software.adoc#building-libcamera-and-rpicam-apps[building `rpicam-apps` for yourself].
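+
+As a sketch only, assuming the CMake-based build described in the build instructions, the flag is passed at configure time:
+
+[source,console]
+----
+$ cmake .. -DENABLE_COMPILE_FLAGS_FOR_TARGET=armv8-neon
+----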
+=== Built-in stages
==== `negate` stage
-The `negate` stage requires no 3rd party libraries.
-
-On a Raspberry Pi 3 device or a Raspberry Pi 4 running a 32-bit OS, it may execute more quickly if recompiled using `-DENABLE_COMPILE_FLAGS_FOR_TARGET=armv8-neon`. (Please see the xref:camera_software.adoc#building-libcamera-and-rpicam-apps[build instructions].)
+This stage turns light pixels dark and dark pixels light.
The `negate` stage has no user-configurable parameters.
Default `negate.json` file:
+[source,json]
----
{
- "negate":
- {
- }
+    "negate": {}
}
----
-Example:
+Run the following command to use this stage file with `rpicam-hello`:
+
+[source,console]
+----
+$ rpicam-hello --post-process-file negate.json
+----
-image::images/negate.jpg[Image with negate]
+Example output:
-==== `hdr` stage
+.A negated image.
+image::images/negate.jpg[A negated image]
-The `hdr` stage implements both HDR (high dynamic range) imaging and DRC (dynamic range compression). The terminology that we use here regards DRC as operating on single images, and HDR works by accumulating multiple under-exposed images and then performing the same algorithm as DRC.
+==== `hdr` stage
-The `hdr` stage has no dependencies on 3rd party libraries, but (like some other stages) may execute more quickly on Raspberry Pi 3 or Raspberry Pi 4 devices running a 32-bit OS if recompiled using `-DENABLE_COMPILE_FLAGS_FOR_TARGET=armv8-neon` (please see the xref:camera_software.adoc#building-libcamera-and-rpicam-apps[build instructions]). Specifically, the image accumulation stage will run quicker and result in fewer frame drops, though the tonemapping part of the process is unchanged.
+This stage emphasises details in images using High Dynamic Range (HDR) and Dynamic Range Compression (DRC). DRC uses a single image, while HDR combines multiple images for a similar result.
-The basic procedure is that we take the image (which in the case of HDR may be multiple images accumulated together) and apply an edge-preserving smoothing filter to generate a low pass (LP) image. We define the high pass (HP) image to be the difference between the LP image and the original. Next we apply a global tonemap to the LP image and add back the HP image. This procedure, in contrast to applying the tonemap directly to the original image, prevents us from squashing and losing all the local contrast in the resulting image.
+Parameters fall into three groups: the LP filter, global tonemapping, and local contrast.
-It is worth noting that this all happens using fully-processed images, once the ISP has finished with them. HDR normally works better when carried out in the raw (Bayer) domain, as signals are still linear and have greater bit-depth. We expect to implement such functionality once `libcamera` exports an API for "re-processing" Bayer images that do not come from the sensor, but which application code can pass in.
+This stage applies a smoothing filter to the fully-processed input images to generate a low pass (LP) image. It then generates the high pass (HP) image from the difference between the original and the LP image. Then, it applies a global tonemap to the LP image and adds the HP image back. This process helps preserve local contrast.
-In summary, the user-configurable parameters fall broadly into three groups: those that define the LP filter, those responsible for the global tonemapping, and those responsible for re-applying the local contrast.
+You can configure this stage with the following parameters:
-[cols=",^"]
+[cols="1,3a"]
|===
-| num_frames | The number of frames to accumulate. For DRC (in our terminology) this would take the value 1, but for multi-frame HDR we would suggest a value such as 8.
-| lp_filter_strength | The coefficient of the low pass IIR filter.
-| lp_filter_threshold | A piecewise linear function that relates the pixel level to the threshold that is regarded as being "meaningful detail".
-| global_tonemap_points | A list of points in the input image histogram and targets in the output range where we wish to move them. We define an inter-quantile mean (`q` and `width`), a target as a proportion of the full output range (`target`) and maximum and minimum gains by which we are prepared to move the measured inter-quantile mean (as this prevents us from changing an image too drastically).
-| global_tonemap_strength | Strength of application of the global tonemap.
-| local_pos_strength | A piecewise linear function that defines the gain applied to local contrast when added back to the tonemapped LP image, for positive (bright) detail.
-| local_neg_strength | A piecewise linear function that defines the gain applied to local contrast when added back to the tonemapped LP image, for negative (dark) detail.
-| local_tonemap_strength | An overall gain applied to all local contrast that is added back.
-| local_colour_scale | A factor that allows the output colours to be affected more or less strongly.
+| `num_frames`
+| The number of frames to accumulate: use 1 for DRC, or a value such as 8 for multi-frame HDR
+| `lp_filter_strength`
+| The coefficient of the low pass IIR filter.
+| `lp_filter_threshold`
+| A piecewise linear function that relates pixel level to the threshold of meaningful detail
+| `global_tonemap_points`
+| Points in the input image histogram and the targets in the output range to which to move them. Uses the following sub-configuration:
+
+* an inter-quantile mean (`q` and `width`)
+* a target as a proportion of the full output range (`target`)
+* maximum (`max_up`) and minimum (`max_down`) gains by which to move the measured inter-quantile mean, to prevent the image from changing too drastically
+| `global_tonemap_strength`
+| Strength of application of the global tonemap
+| `local_pos_strength`
+| A piecewise linear function that defines the gain applied to local contrast when added back to the tonemapped LP image, for positive (bright) detail
+| `local_neg_strength`
+| A piecewise linear function that defines the gain applied to local contrast when added back to the tonemapped LP image, for negative (dark) detail
+| `local_tonemap_strength`
+| An overall gain applied to all local contrast that is added back
+| `local_colour_scale`
+| A factor that allows the output colours to be affected more or less strongly
|===
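Each entry in `global_tonemap_points` can be read as: measure an inter-quantile mean, then gain it towards its target, within limits. The following Python sketch shows that reading; the pixel range, the helper names, and the exact clamping behaviour are assumptions for illustration, not the `rpicam-apps` implementation.

```python
# Illustrative reading of one global_tonemap_points entry; assumes a
# 16-bit output range. Not the actual rpicam-apps implementation.

def inter_quantile_mean(pixels, q, width):
    """Mean of the pixels between quantiles q - width/2 and q + width/2."""
    ordered = sorted(pixels)
    lo = int((q - width / 2) * len(ordered))
    hi = max(lo + 1, int((q + width / 2) * len(ordered)))
    return sum(ordered[lo:hi]) / (hi - lo)

def clamped_gain(pixels, point, full_range=65535):
    """Gain that moves the measured inter-quantile mean towards its target,
    limited by max_up/max_down so the image cannot change too drastically."""
    measured = inter_quantile_mean(pixels, point["q"], point["width"])
    gain = (point["target"] * full_range) / measured
    return min(max(gain, point["max_down"]), point["max_up"])

point = {"q": 0.5, "width": 0.05, "target": 0.5, "max_up": 1.5, "max_down": 0.7}
dark = [2000 + 10 * i for i in range(1000)]   # an under-exposed distribution
print(clamped_gain(dark, point))              # 1.5 (clamped at max_up)
```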
-We note that the overall strength of the processing is best controlled by changing the `global_tonemap_strength` and `local_tonemap_strength` parameters.
+To control the overall processing strength, change the `global_tonemap_strength` and `local_tonemap_strength` parameters.
-The full processing takes between 2 and 3 seconds for a 12MP image on a Raspberry Pi 4. The stage runs only on the still image capture, it ignores preview and video images. In particular, when accumulating multiple frames, the stage "swallows" the output images so that the application does not receive them, and finally sends through only the combined and processed image.
+Processing a 12MP image takes between two and three seconds on a Raspberry Pi 4. This stage runs only during still image capture; it ignores preview and video frames. When accumulating multiple frames, this stage sends only the final combined image to the application.
Default `drc.json` file for DRC:
+[source,json]
----
{
- "hdr" :
- {
- "num_frames" : 1,
- "lp_filter_strength" : 0.2,
- "lp_filter_threshold" : [ 0, 10.0 , 2048, 205.0, 4095, 205.0 ],
- "global_tonemap_points" :
- [
- { "q": 0.1, "width": 0.05, "target": 0.15, "max_up": 1.5, "max_down": 0.7 },
- { "q": 0.5, "width": 0.05, "target": 0.5, "max_up": 1.5, "max_down": 0.7 },
- { "q": 0.8, "width": 0.05, "target": 0.8, "max_up": 1.5, "max_down": 0.7 }
- ],
- "global_tonemap_strength" : 1.0,
- "local_pos_strength" : [ 0, 6.0, 1024, 2.0, 4095, 2.0 ],
- "local_neg_strength" : [ 0, 4.0, 1024, 1.5, 4095, 1.5 ],
- "local_tonemap_strength" : 1.0,
- "local_colour_scale" : 0.9
+ "hdr" : {
+ "num_frames" : 1,
+ "lp_filter_strength" : 0.2,
+ "lp_filter_threshold" : [ 0, 10.0 , 2048, 205.0, 4095, 205.0 ],
+ "global_tonemap_points" :
+ [
+ { "q": 0.1, "width": 0.05, "target": 0.15, "max_up": 1.5, "max_down": 0.7 },
+ { "q": 0.5, "width": 0.05, "target": 0.5, "max_up": 1.5, "max_down": 0.7 },
+ { "q": 0.8, "width": 0.05, "target": 0.8, "max_up": 1.5, "max_down": 0.7 }
+ ],
+ "global_tonemap_strength" : 1.0,
+ "local_pos_strength" : [ 0, 6.0, 1024, 2.0, 4095, 2.0 ],
+ "local_neg_strength" : [ 0, 4.0, 1024, 1.5, 4095, 1.5 ],
+ "local_tonemap_strength" : 1.0,
+ "local_colour_scale" : 0.9
}
}
----
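The piecewise linear parameters above (`lp_filter_threshold`, `local_pos_strength`, `local_neg_strength`) are flat lists of alternating input and output points. A minimal Python sketch of evaluating such a curve; linear interpolation between points and flat extrapolation outside the listed range are assumptions:

```python
# Evaluate a flat [x0, y0, x1, y1, ...] piecewise linear curve at x.
# Interpolation/extrapolation behaviour is an illustrative assumption.

def piecewise_linear(curve, x):
    points = list(zip(curve[0::2], curve[1::2]))  # pair up (input, output)
    if x <= points[0][0]:
        return points[0][1]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return points[-1][1]

lp_filter_threshold = [0, 10.0, 2048, 205.0, 4095, 205.0]
print(piecewise_linear(lp_filter_threshold, 0))     # 10.0
print(piecewise_linear(lp_filter_threshold, 1024))  # 107.5, halfway up the ramp
print(piecewise_linear(lp_filter_threshold, 3000))  # 205.0, in the flat region
```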
Example:
-Without DRC:
-
+.Image without DRC processing
image::images/nodrc.jpg[Image without DRC processing]
-With full-strength DRC: (use `rpicam-still -o test.jpg --post-process-file drc.json`)
+Run the following command to use this stage file with `rpicam-still`:
+[source,console]
+----
+$ rpicam-still -o test.jpg --post-process-file drc.json
+----
+
+.Image with DRC processing
image::images/drc.jpg[Image with DRC processing]
Default `hdr.json` file for HDR:
+[source,json]
----
{
- "hdr" :
- {
- "num_frames" : 8,
- "lp_filter_strength" : 0.2,
- "lp_filter_threshold" : [ 0, 10.0 , 2048, 205.0, 4095, 205.0 ],
- "global_tonemap_points" :
- [
- { "q": 0.1, "width": 0.05, "target": 0.15, "max_up": 5.0, "max_down": 0.5 },
- { "q": 0.5, "width": 0.05, "target": 0.45, "max_up": 5.0, "max_down": 0.5 },
- { "q": 0.8, "width": 0.05, "target": 0.7, "max_up": 5.0, "max_down": 0.5 }
- ],
- "global_tonemap_strength" : 1.0,
- "local_pos_strength" : [ 0, 6.0, 1024, 2.0, 4095, 2.0 ],
- "local_neg_strength" : [ 0, 4.0, 1024, 1.5, 4095, 1.5 ],
- "local_tonemap_strength" : 1.0,
- "local_colour_scale" : 0.8
+ "hdr" : {
+ "num_frames" : 8,
+ "lp_filter_strength" : 0.2,
+ "lp_filter_threshold" : [ 0, 10.0 , 2048, 205.0, 4095, 205.0 ],
+ "global_tonemap_points" :
+ [
+ { "q": 0.1, "width": 0.05, "target": 0.15, "max_up": 5.0, "max_down": 0.5 },
+ { "q": 0.5, "width": 0.05, "target": 0.45, "max_up": 5.0, "max_down": 0.5 },
+ { "q": 0.8, "width": 0.05, "target": 0.7, "max_up": 5.0, "max_down": 0.5 }
+ ],
+ "global_tonemap_strength" : 1.0,
+ "local_pos_strength" : [ 0, 6.0, 1024, 2.0, 4095, 2.0 ],
+ "local_neg_strength" : [ 0, 4.0, 1024, 1.5, 4095, 1.5 ],
+ "local_tonemap_strength" : 1.0,
+ "local_colour_scale" : 0.8
}
}
----
Example:
-Without HDR:
-
+.Image without HDR processing
image::images/nohdr.jpg[Image without HDR processing]
-With HDR: (use `rpicam-still -o test.jpg --ev -2 --denoise cdn_off --post-process-file hdr.json`)
+Run the following command to use this stage file with `rpicam-still`:
+
+[source,console]
+----
+$ rpicam-still -o test.jpg --ev -2 --denoise cdn_off --post-process-file hdr.json
+----
+.Image with HDR processing
-image::images/hdr.jpg[Image with DRC processing]
+image::images/hdr.jpg[Image with HDR processing]
==== `motion_detect` stage
-The `motion_detect` stage works by analysing frames from the low resolution image stream, which must be configured for it to work. It compares a region of interest ("roi") in the frame to the corresponding part of a previous one and if enough pixels are sufficiently different, that will be taken to indicate motion. The result is added to the metadata under "motion_detect.result".
+The `motion_detect` stage analyses frames from the low-resolution image stream. You must configure the low-resolution stream to use this stage. The stage detects motion by comparing a region of interest (ROI) in the frame to the corresponding part of a previous frame. If enough pixels change between frames, this stage indicates the motion in metadata under the `motion_detect.result` key.
-This stage has no dependencies on any 3rd party libraries.
+This stage has no dependencies on third-party libraries.
-It has the following tunable parameters. The dimensions are always given as a proportion of the low resolution image size.
+You can configure this stage with the following parameters, passing dimensions as a proportion of the low-resolution image size between 0 and 1:
-[cols=",^"]
+[cols="1,3"]
|===
-| roi_x | x-offset of the region of interest for the comparison
-| roi_y | y-offset of the region of interest for the comparison
-| roi_width | width of the region of interest for the comparison
-| roi_height | height of the region of interest for the comparison
-| difference_m | Linear coefficient used to construct the threshold for pixels being different
-| difference_c | Constant coefficient used to construct the threshold for pixels being different according to threshold = difference_m * pixel_value + difference_c
-| frame_period | The motion detector will run only this many frames
-| hskip | The pixel tests are subsampled by this amount horizontally
-| vksip | The pixel tests are subsampled by this amount vertically
-| region_threshold | The proportion of pixels (or "regions") which must be categorised as different for them to count as motion
-| verbose | Print messages to the console, including when the "motion"/"no motion" status changes
+| `roi_x` | x-offset of the region of interest for the comparison (proportion between 0 and 1)
+| `roi_y` | y-offset of the region of interest for the comparison (proportion between 0 and 1)
+| `roi_width` | Width of the region of interest for the comparison (proportion between 0 and 1)
+| `roi_height` | Height of the region of interest for the comparison (proportion between 0 and 1)
+| `difference_m` | Linear coefficient used to construct the threshold for pixels being different
+| `difference_c` | Constant coefficient used to construct the threshold for pixels being different according to `threshold = difference_m * pixel_value + difference_c`
+| `frame_period` | Run the motion detector only once every this many frames
+| `hskip` | Subsample the pixel tests by this amount horizontally
+| `vskip` | Subsample the pixel tests by this amount vertically
+| `region_threshold` | The proportion of pixels (regions) which must be categorised as different for them to count as motion
+| `verbose` | Print messages to the console, including when the motion status changes
|===
Default `motion_detect.json` configuration file:
+[source,json]
----
{
- "motion_detect" :
- {
- "roi_x" : 0.1,
- "roi_y" : 0.1,
- "roi_width" : 0.8,
- "roi_height" : 0.8,
- "difference_m" : 0.1,
- "difference_c" : 10,
- "region_threshold" : 0.005,
- "frame_period" : 5,
- "hskip" : 2,
- "vskip" : 2,
- "verbose" : 0
+ "motion_detect" : {
+ "roi_x" : 0.1,
+ "roi_y" : 0.1,
+ "roi_width" : 0.8,
+ "roi_height" : 0.8,
+ "difference_m" : 0.1,
+ "difference_c" : 10,
+ "region_threshold" : 0.005,
+ "frame_period" : 5,
+ "hskip" : 2,
+ "vskip" : 2,
+ "verbose" : 0
}
}
----
-Note that the field `difference_m` and `difference_c`, and the value of `region_threshold`, can be adjusted to make the algorithm more or less sensitive to motion.
-
-If the amount of computation needs to be reduced (perhaps you have other stages that need a larger low resolution image), the amount of computation can be reduced using the `hskip` and `vskip` parameters.
+Adjust `difference_m`, `difference_c`, and `region_threshold` to make the algorithm more or less sensitive to motion. To reduce the amount of computation, increase the `hskip` and `vskip` parameters.
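The comparison described above can be sketched in Python. This is an illustrative reading of the parameters, running on plain lists of pixel values rather than the Y channel of the low-resolution stream; it is not the stage's actual implementation.

```python
# Sketch of motion detection: compare a subsampled region of interest
# between two frames, using the per-pixel threshold
# difference_m * pixel_value + difference_c.

def detect_motion(prev, cur, cfg):
    h, w = len(cur), len(cur[0])
    x0, y0 = int(cfg["roi_x"] * w), int(cfg["roi_y"] * h)
    x1, y1 = x0 + int(cfg["roi_width"] * w), y0 + int(cfg["roi_height"] * h)
    tested = differing = 0
    for y in range(y0, y1, cfg["vskip"]):
        for x in range(x0, x1, cfg["hskip"]):
            threshold = cfg["difference_m"] * cur[y][x] + cfg["difference_c"]
            tested += 1
            if abs(cur[y][x] - prev[y][x]) > threshold:
                differing += 1
    return differing / tested > cfg["region_threshold"]

cfg = {"roi_x": 0.1, "roi_y": 0.1, "roi_width": 0.8, "roi_height": 0.8,
       "difference_m": 0.1, "difference_c": 10, "region_threshold": 0.005,
       "hskip": 2, "vskip": 2}
prev = [[100] * 96 for _ in range(64)]          # a 96x64 "low-res" frame
cur = [row[:] for row in prev]
for y in range(20, 40):                          # an object changes one block
    for x in range(20, 40):
        cur[y][x] = 200
print(detect_motion(prev, cur, cfg))             # True
```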
-To use the `motion_detect` stage you might enter the following example command:
+Run the following command to use this stage file with `rpicam-hello`:
-`rpicam-hello --lores-width 128 --lores-height 96 --post-process-file motion_detect.json`
+[source,console]
+----
+$ rpicam-hello --lores-width 128 --lores-height 96 --post-process-file motion_detect.json
+----
diff --git a/documentation/asciidoc/computers/camera/rpicam_apps_post_processing_opencv.adoc b/documentation/asciidoc/computers/camera/rpicam_apps_post_processing_opencv.adoc
index 24ac245475..787393e966 100644
--- a/documentation/asciidoc/computers/camera/rpicam_apps_post_processing_opencv.adoc
+++ b/documentation/asciidoc/computers/camera/rpicam_apps_post_processing_opencv.adoc
@@ -1,23 +1,25 @@
-=== Post-Processing with OpenCV
+=== Post-processing with OpenCV
-NOTE: These stages all require OpenCV to be installed on your system. You may also need to rebuild `rpicam-apps` with OpenCV support - please see the instructions for xref:camera_software.adoc#building-libcamera-and-rpicam-apps[building `rpicam-apps` for yourself].
+NOTE: These stages require an OpenCV installation. You may need to xref:camera_software.adoc#build-libcamera-and-rpicam-apps[rebuild `rpicam-apps` with OpenCV support].
==== `sobel_cv` stage
-The `sobel_cv` stage has the following user-configurable parameters:
+This stage applies a https://en.wikipedia.org/wiki/Sobel_operator[Sobel filter] to an image to emphasise edges.
-[cols=",^"]
+You can configure this stage with the following parameters:
+
+[cols="1,3"]
|===
-| ksize | Kernel size of the Sobel filter
+| `ksize` | Kernel size of the Sobel filter
|===
Default `sobel_cv.json` file:
+[source,json]
----
{
- "sobel_cv":
- {
+ "sobel_cv" : {
"ksize": 5
}
}
@@ -25,33 +27,34 @@ Default `sobel_cv.json` file:
Example:
-image::images/sobel.jpg[Image with Sobel filter]
+.Using a Sobel filter to emphasise edges.
+image::images/sobel.jpg[Using a Sobel filter to emphasise edges]
==== `face_detect_cv` stage
-This stage uses the OpenCV Haar classifier to detect faces in an image. It returns the face locations in the metadata (under the key "face_detect.results"), and optionally draws them on the image.
+This stage uses the OpenCV Haar classifier to detect faces in an image. It returns face location metadata under the key `face_detect.results` and optionally draws the locations on the image.
-The `face_detect_cv` stage has the following user-configurable parameters:
+You can configure this stage with the following parameters:
-[cols=",^"]
+[cols=",3]
|===
-| cascade_name | Name of the file where the Haar cascade can be found.
-| scaling_factor | Determines range of scales at which the image is searched for faces.
-| min_neighbors | Minimum number of overlapping neighbours required to count as a face.
-| min_size | Minimum face size.
-| max_size | Maximum face size.
-| refresh_rate | How many frames to wait before trying to re-run the face detector.
-| draw_features | Whether to draw face locations on the returned image.
+| `cascade_name` | Name of the file where the Haar cascade can be found
+| `scaling_factor` | Determines range of scales at which the image is searched for faces
+| `min_neighbors` | Minimum number of overlapping neighbours required to count as a face
+| `min_size` | Minimum face size
+| `max_size` | Maximum face size
+| `refresh_rate` | How many frames to wait before trying to re-run the face detector
+| `draw_features` | Whether to draw face locations on the returned image
|===
-The `face_detect_cv" stage runs only during preview and video capture; it ignores still image capture. It runs on the low resolution stream which would normally be configured to a resolution from about 320x240 to 640x480 pixels.
+The `face_detect_cv` stage runs only during preview and video capture. It ignores still image capture. It runs on the low resolution stream, which would normally be configured to a resolution between 320×240 and 640×480 pixels.
Default `face_detect_cv.json` file:
+[source,json]
----
{
- "face_detect_cv":
- {
+ "face_detect_cv" : {
"cascade_name" : "/usr/local/share/OpenCV/haarcascades/haarcascade_frontalface_alt.xml",
"scaling_factor" : 1.1,
"min_neighbors" : 2,
@@ -65,46 +68,53 @@ Default `face_detect_cv.json` file:
Example:
-image::images/face_detect.jpg[Image showing faces]
+.Drawing detected faces onto an image.
+image::images/face_detect.jpg[Drawing detected faces onto an image]
==== `annotate_cv` stage
-This stage allows text to be written into the top corner of images. It allows the same `%` substitutions as the `--info-text` parameter.
+This stage writes text into the top corner of images using the same `%` substitutions as the xref:camera_software.adoc#info-text[`info-text`] option.
+
+This stage interprets xref:camera_software.adoc#info-text[`info-text` directives] first, then passes any remaining tokens to https://www.man7.org/linux/man-pages/man3/strftime.3.html[`strftime`].
+
+For example, to achieve a datetime stamp on the video, pass `%F %T %z`:
-Additionally to the flags of xref:camera_software.adoc#preview-window-2[`--info-text`] you can provide any token that https://www.man7.org/linux/man-pages/man3/strftime.3.html[strftime] understands to display the current date / time.
-The `--info-text` tokens are interpreted first and any percentage token left is then interpreted by strftime. To achieve a datetime stamp on the video you can use e.g. `%F %T %z` (%F for the ISO-8601 date (2023-03-07), %T for 24h local time (09:57:12) and %z for the timezone difference to UTC (-0800)).
+* `%F` displays the ISO-8601 date (2023-03-07)
+* `%T` displays 24h local time (e.g. "09:57:12")
+* `%z` displays the timezone relative to UTC (e.g. "-0800")
-The stage does not output any metadata, but if it finds metadata under the key "annotate.text" it will write this text in place of anything in the JSON configuration file. This allows other post-processing stages to pass it text strings to be written onto the top of the images.
+This stage does not output any metadata, but if it finds metadata under the `annotate.text` key, it writes that text in place of anything in the JSON configuration file. This allows other post-processing stages to pass text to write onto images.
-The `annotate_cv` stage has the following user-configurable parameters:
+You can configure this stage with the following parameters:
-[cols=",^"]
+[cols="1,3"]
|===
-| text | The text string to be written.
-| fg | Foreground colour.
-| bg | Background colour.
-| scale | A number proportional to the size of the text.
-| thickness | A number that determines the thickness of the text.
-| alpha | The amount of "alpha" to apply when overwriting the background pixels.
+| `text` | The text string to be written
+| `fg` | Foreground colour
+| `bg` | Background colour
+| `scale` | A number proportional to the size of the text
+| `thickness` | A number that determines the thickness of the text
+| `alpha` | The amount of alpha to apply when overwriting background pixels
|===
Default `annotate_cv.json` file:
+[source,json]
----
{
- "annotate_cv" :
- {
- "text" : "Frame %frame exp %exp ag %ag dg %dg",
- "fg" : 255,
- "bg" : 0,
- "scale" : 1.0,
- "thickness" : 2,
- "alpha" : 0.3
+ "annotate_cv" : {
+ "text" : "Frame %frame exp %exp ag %ag dg %dg",
+ "fg" : 255,
+ "bg" : 0,
+ "scale" : 1.0,
+ "thickness" : 2,
+ "alpha" : 0.3
}
}
----
Example:
-image::images/annotate.jpg[Image with text overlay]
+.Writing camera and date information onto an image with annotations.
+image::images/annotate.jpg[Writing camera and date information onto an image with annotations]
diff --git a/documentation/asciidoc/computers/camera/rpicam_apps_post_processing_tflite.adoc b/documentation/asciidoc/computers/camera/rpicam_apps_post_processing_tflite.adoc
index ddeb9bc3a5..39d607f5e9 100644
--- a/documentation/asciidoc/computers/camera/rpicam_apps_post_processing_tflite.adoc
+++ b/documentation/asciidoc/computers/camera/rpicam_apps_post_processing_tflite.adoc
@@ -1,186 +1,220 @@
-=== Post-Processing with TensorFlow Lite
+=== Post-processing with TensorFlow Lite
-NOTE: These stages require TensorFlow Lite (TFLite) libraries to be installed that export the {cpp} API. Unfortunately the TFLite libraries are not normally distributed conveniently in this form, however, one place where they can be downloaded is https://lindevs.com/install-precompiled-tensorflow-lite-on-raspberry-pi/[lindevs.com]. Please follow the installation instructions given on that page. Subsequently you may need to recompile `rpicam-apps` with TensorFlow Lite support - please follow the instructions for xref:camera_software.adoc#building-libcamera-and-rpicam-apps[building `rpicam-apps` for yourself].
+==== Prerequisites
+
+These stages require TensorFlow Lite (TFLite) libraries that export the {cpp} API. TFLite doesn't distribute libraries in this form, but you can download and install a version that exports the API from https://lindevs.com/install-precompiled-tensorflow-lite-on-raspberry-pi/[lindevs.com].
+
+After installing, you must xref:camera_software.adoc#build-libcamera-and-rpicam-apps[recompile `rpicam-apps` with TensorFlow Lite support].
==== `object_classify_tf` stage
-`object_classify_tf` uses a Google MobileNet v1 model to classify objects in the camera image. It can be obtained from https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_2018_08_02/mobilenet_v1_1.0_224_quant.tgz[], which will need to be uncompressed. You will also need the `labels.txt` file which can be found in https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_1.0_224_frozen.tgz[].
+Download: https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_2018_08_02/mobilenet_v1_1.0_224_quant.tgz[]
+
+`object_classify_tf` uses a Google MobileNet v1 model to classify objects in the camera image. Uncompress the downloaded archive to obtain the model. This stage also requires a https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_1.0_224_frozen.tgz[`labels.txt` file].
-This stage has the following configuratble parameters.
+You can configure this stage with the following parameters:
-[cols=",^"]
+[cols="1,3"]
|===
-| top_n_results | How many results to show
-| refresh_rate | The number of frames that must elapse before the model is re-run
-| threshold_high | Confidence threshold (between 0 and 1) where objects are considered as being present
-| threshold_low | Confidence threshold which objects must drop below before being discarded as matches
-| model_file | Pathname to the tflite model file
-| labels_file | Pathname to the file containing the object labels
-| display_labels | Whether to display the object labels on the image. Note that this causes `annotate.text` metadata to be inserted so that the text can be rendered subsequently by the `annotate_cv` stage
-| verbose | Output more information to the console
+| `top_n_results` | The number of results to show
+| `refresh_rate` | The number of frames that must elapse between model runs
+| `threshold_high` | Confidence threshold (between 0 and 1) where objects are considered as being present
+| `threshold_low` | Confidence threshold which objects must drop below before being discarded as matches
+| `model_file` | Filepath of the TFLite model file
+| `labels_file` | Filepath of the file containing the object labels
+| `display_labels` | Whether to display the object labels on the image; inserts `annotate.text` metadata for the `annotate_cv` stage to render
+| `verbose` | Output more information to the console
|===
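The pair of thresholds describes a hysteresis: a label is reported once its confidence rises above `threshold_high` and kept until it falls below `threshold_low`. A minimal Python sketch of that behaviour; the function shape and state handling are illustrative assumptions:

```python
# Sketch of two-threshold hysteresis for classification results.
# `active` is the set of labels currently reported as present.

def update_matches(active, confidences, threshold_high=0.6, threshold_low=0.4):
    result = set()
    for label, confidence in confidences.items():
        if confidence > threshold_high or (label in active and confidence >= threshold_low):
            result.add(label)
    return result

active = set()
active = update_matches(active, {"keyboard": 0.7})   # appears: above high
active = update_matches(active, {"keyboard": 0.5})   # kept: still above low
active = update_matches(active, {"keyboard": 0.3})   # dropped: below low
print(active)  # set()
```

This prevents a detection from flickering on and off when its confidence hovers near a single threshold.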
Example `object_classify_tf.json` file:
+[source,json]
----
{
- "object_classify_tf":
- {
+ "object_classify_tf" : {
"top_n_results" : 2,
"refresh_rate" : 30,
"threshold_high" : 0.6,
"threshold_low" : 0.4,
- "model_file" : "/home/pi/models/mobilenet_v1_1.0_224_quant.tflite",
- "labels_file" : "/home/pi/models/labels.txt",
+ "model_file" : "/home//models/mobilenet_v1_1.0_224_quant.tflite",
+ "labels_file" : "/home//models/labels.txt",
"display_labels" : 1
},
- "annotate_cv" :
- {
- "text" : "",
- "fg" : 255,
- "bg" : 0,
- "scale" : 1.0,
- "thickness" : 2,
- "alpha" : 0.3
+ "annotate_cv" : {
+ "text" : "",
+ "fg" : 255,
+ "bg" : 0,
+ "scale" : 1.0,
+ "thickness" : 2,
+ "alpha" : 0.3
}
}
----
-The stage operates on a low resolution stream image of size 224x224, so it could be used as follows:
+The stage operates on a low resolution stream image of size 224×224.
+Run the following command to use this stage file with `rpicam-hello`:
-`rpicam-hello --post-process-file object_classify_tf.json --lores-width 224 --lores-height 224`
+[source,console]
+----
+$ rpicam-hello --post-process-file object_classify_tf.json --lores-width 224 --lores-height 224
+----
-image::images/classify.jpg[Image showing object classifier results]
+.Object classification of a desktop computer and monitor.
+image::images/classify.jpg[Object classification of a desktop computer and monitor]
==== `pose_estimation_tf` stage
-`pose_estimation_tf` uses a Google MobileNet v1 model `posenet_mobilenet_v1_100_257x257_multi_kpt_stripped.tflite` that can be found at https://github.com/Qengineering/TensorFlow_Lite_Pose_RPi_32-bits[].
+Download: https://github.com/Qengineering/TensorFlow_Lite_Pose_RPi_32-bits[]
-This stage has the following configurable parameters.
+`pose_estimation_tf` uses a Google MobileNet v1 model to detect pose information.
-[cols=",^"]
+You can configure this stage with the following parameters:
+
+[cols="1,3"]
|===
-| refresh_rate | The number of frames that must elapse before the model is re-run
-| model_file | Pathname to the tflite model file
-| verbose | Output more information to the console
+| `refresh_rate` | The number of frames that must elapse between model runs
+| `model_file` | Filepath of the TFLite model file
+| `verbose` | Output extra information to the console
|===
-Also provided is a separate `plot_pose_cv` stage which can be included in the JSON configuration file and which will draw the detected pose onto the main image. This stage has the following configuration parameters.
+Use the separate `plot_pose_cv` stage to draw the detected pose onto the main image.
+
+You can configure the `plot_pose_cv` stage with the following parameters:
-[cols=",^"]
+[cols="1,3"]
|===
-| confidence_threshold | A confidence level determining how much is drawn. This number can be less than zero; please refer to the GitHub repository for more information.
+| `confidence_threshold` | Confidence threshold determining how much to draw; can be less than zero
|===
Example `pose_estimation_tf.json` file:
+[source,json]
----
{
- "pose_estimation_tf":
- {
+ "pose_estimation_tf" : {
"refresh_rate" : 5,
"model_file" : "posenet_mobilenet_v1_100_257x257_multi_kpt_stripped.tflite"
},
- "plot_pose_cv" :
- {
- "confidence_threshold" : -0.5
+ "plot_pose_cv" : {
+ "confidence_threshold" : -0.5
}
}
----
-The stage operates on a low resolution stream image of size 257x257 (but which must be rounded up to 258x258 for YUV420 images), so it could be used as follows:
+The stage operates on a low resolution stream image of size 257×257. Because YUV420 images must have even dimensions, round the low resolution stream up to 258×258.
-`rpicam-hello --post-process-file pose_estimation_tf.json --lores-width 258 --lores-height 258`
+Run the following command to use this stage file with `rpicam-hello`:
+
+[source,console]
+----
+$ rpicam-hello --post-process-file pose_estimation_tf.json --lores-width 258 --lores-height 258
+----
-image::images/pose.jpg[Image showing pose estimation results]
+.Pose estimation of an adult human male.
+image::images/pose.jpg[Pose estimation of an adult human male]
==== `object_detect_tf` stage
-`object_detect_tf` uses a Google MobileNet v1 SSD (Single Shot Detector) model. The model and labels files can be downloaded from https://storage.googleapis.com/download.tensorflow.org/models/tflite/coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.zip[].
+Download: https://storage.googleapis.com/download.tensorflow.org/models/tflite/coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.zip[]
+
+`object_detect_tf` uses a Google MobileNet v1 SSD (Single Shot Detector) model to detect and label objects.
-This stage has the following configurable parameters.
+You can configure this stage with the following parameters:
-[cols=",^"]
+[cols="1,3"]
|===
-| refresh_rate | The number of frames that must elapse before the model is re-run
-| model_file | Pathname to the tflite model file
-| labels_file | Pathname to the file containing the list of labels
-| confidence_threshold | Minimum confidence threshold because a match is accepted.
-| overlap_threshold | Determines the amount of overlap between matches for them to be merged as a single match.
-| verbose | Output more information to the console
+| `refresh_rate` | The number of frames that must elapse between model runs
+| `model_file` | Filepath of the TFLite model file
+| `labels_file` | Filepath of the file containing the list of labels
+| `confidence_threshold` | Confidence threshold before accepting a match
+| `overlap_threshold` | Determines the amount of overlap between matches for them to be merged as a single match
+| `verbose` | Output extra information to the console
|===
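One common way to measure the overlap that `overlap_threshold` is compared against is intersection-over-union (IoU) of the bounding boxes; the exact overlap measure used by `rpicam-apps` is an assumption here, so treat this Python sketch as illustrative:

```python
# Sketch of deciding whether two detections overlap enough to merge,
# using intersection-over-union. Boxes are (x0, y0, x1, y1).

def iou(a, b):
    def area(r):
        return (r[2] - r[0]) * (r[3] - r[1])
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))   # intersection width
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))   # intersection height
    inter = ix * iy
    union = area(a) + area(b) - inter
    return inter / union

box_a = (10, 10, 110, 110)
box_b = (20, 20, 120, 120)
print(iou(box_a, box_b))        # roughly 0.68
print(iou(box_a, box_b) > 0.5)  # True: merged as a single match
```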
-Also provided is a separate `object_detect_draw_cv` stage which can be included in the JSON configuration file and which will draw the detected objects onto the main image. This stage has the following configuration parameters.
+Use the separate `object_detect_draw_cv` stage to draw the detected objects onto the main image.
+
+You can configure the `object_detect_draw_cv` stage with the following parameters:
-[cols=",^"]
+[cols="1,3"]
|===
-| line_thickness | Thickness of the bounding box lines
-| font_size | Size of the font used for the label
+| `line_thickness` | Thickness of the bounding box lines
+| `font_size` | Size of the font used for the label
|===
Example `object_detect_tf.json` file:
+[source,json]
----
{
- "object_detect_tf":
- {
- "number_of_threads" : 2,
- "refresh_rate" : 10,
- "confidence_threshold" : 0.5,
- "overlap_threshold" : 0.5,
- "model_file" : "/home/pi/models/coco_ssd_mobilenet_v1_1.0_quant_2018_06_29/detect.tflite",
- "labels_file" : "/home/pi/models/coco_ssd_mobilenet_v1_1.0_quant_2018_06_29/labelmap.txt",
- "verbose" : 1
+ "object_detect_tf" : {
+ "number_of_threads" : 2,
+ "refresh_rate" : 10,
+ "confidence_threshold" : 0.5,
+ "overlap_threshold" : 0.5,
+ "model_file" : "/home//models/coco_ssd_mobilenet_v1_1.0_quant_2018_06_29/detect.tflite",
+ "labels_file" : "/home//models/coco_ssd_mobilenet_v1_1.0_quant_2018_06_29/labelmap.txt",
+ "verbose" : 1
},
- "object_detect_draw_cv":
- {
- "line_thickness" : 2
+ "object_detect_draw_cv" : {
+ "line_thickness" : 2
}
}
----
-The stage operates on a low resolution stream image of size 300x300. The following example would pass a 300x300 crop to the detector from the centre of the 400x300 low resolution image.
+The stage operates on a low resolution stream image of size 300×300. Run the following command, which passes a 300×300 crop to the detector from the centre of the 400×300 low resolution image, to use this stage file with `rpicam-hello`:
-`rpicam-hello --post-process-file object_detect_tf.json --lores-width 400 --lores-height 300`
+[source,console]
+----
+$ rpicam-hello --post-process-file object_detect_tf.json --lores-width 400 --lores-height 300
+----
-image::images/detection.jpg[Image showing detected objects]
+.Detecting apple and cat objects.
+image::images/detection.jpg[Detecting apple and cat objects]
==== `segmentation_tf` stage
-`segmentation_tf` uses a Google MobileNet v1 model. The model file can be downloaded from https://tfhub.dev/tensorflow/lite-model/deeplabv3/1/metadata/2?lite-format=tflite[], whilst the labels file can be found in the `assets` folder, named `segmentation_labels.txt`.
+Download: https://tfhub.dev/tensorflow/lite-model/deeplabv3/1/metadata/2?lite-format=tflite[]
-This stage runs on an image of size 257x257. Because YUV420 images must have even dimensions, the low resolution image should be at least 258 pixels in both width and height. The stage adds a vector of 257x257 values to the image metadata where each value indicates which of the categories (listed in the labels file) that the pixel belongs to. Optionally, a representation of the segmentation can be drawn into the bottom right corner of the image.
+`segmentation_tf` uses a Google MobileNet v1 model. This stage requires a labels file, found in the `assets` folder as `segmentation_labels.txt`.
-This stage has the following configurable parameters.
+This stage runs on an image of size 257×257. Because YUV420 images must have even dimensions, the low resolution image should be at least 258 pixels in both width and height. The stage adds a vector of 257×257 values to the image metadata where each value indicates the categories a pixel belongs to. You can optionally draw a representation of the segmentation into the bottom right corner of the image.
-[cols=",^"]
+You can configure this stage with the following parameters:
+
+[cols="1,3"]
|===
-| refresh_rate | The number of frames that must elapse before the model is re-run
-| model_file | Pathname to the tflite model file
-| labels_file | Pathname to the file containing the list of labels
-| threshold | When verbose is set, the stage prints to the console any labels where the number of pixels with that label (in the 257x257 image) exceeds this threshold.
-| draw | Set this value to draw the segmentation map into the bottom right hand corner of the image.
-| verbose | Output more information to the console
+| `refresh_rate` | The number of frames that must elapse between model runs
+| `model_file` | Filepath of the TFLite model file
+| `labels_file` | Filepath of the file containing the list of labels
+| `threshold` | When `verbose` is set, prints any label whose pixel count (in the 257×257 image) exceeds this threshold
+| `draw` | Draws the segmentation map into the bottom right hand corner of the image
+| `verbose` | Output extra information to the console
|===
Example `segmentation_tf.json` file:
+[source,json]
----
{
- "segmentation_tf":
- {
- "number_of_threads" : 2,
- "refresh_rate" : 10,
- "model_file" : "/home/pi/models/lite-model_deeplabv3_1_metadata_2.tflite",
- "labels_file" : "/home/pi/models/segmentation_labels.txt",
- "draw" : 1,
- "verbose" : 1
+ "segmentation_tf" : {
+ "number_of_threads" : 2,
+ "refresh_rate" : 10,
+        "model_file" : "/home/<username>/models/lite-model_deeplabv3_1_metadata_2.tflite",
+        "labels_file" : "/home/<username>/models/segmentation_labels.txt",
+ "draw" : 1,
+ "verbose" : 1
}
}
----
-This example takes a square camera image and reduces it to 258x258 pixels in size. In fact the stage also works well when non-square images are squashed unequally down to 258x258 pixels without cropping. The image below shows the segmentation map in the bottom right hand corner.
+This example takes a camera image and reduces it to 258×258 pixels in size. This stage even works when squashing a non-square image without cropping. This example enables the segmentation map in the bottom right hand corner.
+
+Run the following command to use this stage file with `rpicam-hello`:
-`rpicam-hello --post-process-file segmentation_tf.json --lores-width 258 --lores-height 258 --viewfinder-width 1024 --viewfinder-height 1024`
+[source,console]
+----
+$ rpicam-hello --post-process-file segmentation_tf.json --lores-width 258 --lores-height 258 --viewfinder-width 1024 --viewfinder-height 1024
+----
-image::images/segmentation.jpg[Image showing segmentation in the bottom right corner]
+.Running segmentation and displaying the results on a map in the bottom right.
+image::images/segmentation.jpg[Running segmentation and displaying the results on a map in the bottom right]
diff --git a/documentation/asciidoc/computers/camera/rpicam_apps_post_processing_writing.adoc b/documentation/asciidoc/computers/camera/rpicam_apps_post_processing_writing.adoc
index 8b48d0a660..b010133f37 100644
--- a/documentation/asciidoc/computers/camera/rpicam_apps_post_processing_writing.adoc
+++ b/documentation/asciidoc/computers/camera/rpicam_apps_post_processing_writing.adoc
@@ -1,55 +1,51 @@
-=== Writing your own Post-Processing Stages
+=== Write your own post-processing stages
-The `rpicam-apps` _post-processing framework_ is not only very flexible but is meant to make it easy for users to create their own custom post-processing stages. It is easy to include algorithms and routines that are already available both in OpenCV and TensorFlow Lite.
+With the `rpicam-apps` post-processing framework, users can create their own custom post-processing stages. You can even include algorithms and routines from OpenCV and TensorFlow Lite.
-We are keen to accept and distribute interesting post-processing stages contributed by our users.
+==== Basic post-processing stages
-==== Basic Post-Processing Stages
+To create your own post-processing stage, derive a new class from the `PostProcessingStage` class.
+All post-processing stages must implement the following member functions:
-Post-processing stages have a simple API, and users can create their own by deriving from the `PostProcessingStage` class. The member functions that must be implemented are listed below, though note that some may be unnecessary for simple stages.
+`char const *Name() const`:: Returns the name of the stage. Matched against stages listed in the JSON post-processing configuration file.
+`void Read(boost::property_tree::ptree const &params)`:: Reads the stage's configuration parameters from a provided JSON file.
+`void AdjustConfig(std::string const &use_case, StreamConfiguration *config)`:: Gives stages a chance to influence the configuration of the camera. Frequently empty for stages with no need to configure the camera.
+`void Configure()`:: Called just after the camera has been configured to allocate resources and check that the stage has access to necessary streams.
+`void Start()`:: Called when the camera starts. Frequently empty for stages that need no special startup work.
+`bool Process(CompletedRequest &completed_request)`:: Presents completed camera requests for post-processing. This is where you'll implement pixel manipulations and image analysis. Returns `true` if the post-processing framework should **not** deliver this request to the application.
+`void Stop()`:: Called when the camera stops. Used to shut down any active processing on asynchronous threads.
+`void Teardown()`:: Called when the camera configuration is destroyed. Use this as a destructor to de-allocate resources set up in the `Configure` method.
-[cols=",^"]
-|===
-| `char const *Name() const` | Return the name of the stage. This is used to match against stages listed in the JSON post-processing configuration file.
-| `void Read(boost::property_tree::ptree const &params)` | This method will read any of the stage's configuration parameters from the JSON file.
-| `void AdjustConfig(std::string const &use_case, StreamConfiguration *config)` | This method gives stages a chance to influence the configuration of the camera, though it is not often necessary to implement it.
-| `void Configure()` | This is called just after the camera has been configured. It is a good moment to check that the stage has access to the streams it needs, and it can also allocate any resources that it may require.
-| `void Start()` | Called when the camera starts. This method is often not required.
-| `bool Process(CompletedRequest &completed_request)` | This method presents completed camera requests for post-processing and is where the necessary pixel manipulations or image analysis will happen. The function returns `true` if the post-processing framework is _not_ to deliver this request on to the application.
-| `void Stop()` | Called when the camera is stopped. Normally a stage would need to shut down any processing that might be running (for example, if it started any asynchronous threads).
-| `void Teardown()` | Called when the camera configuration is torn down. This would typically be used to de-allocate any resources that were set up in the `Configure` method.
-|===
+In any stage implementation, call `RegisterStage` to register your stage with the system.
-Some helpful hints on writing your own stages:
+Don't forget to add your stage to `meson.build` in the post-processing folder.
+When writing your own stages, keep these tips in mind:
-* Generally, the `Process` method should not take too long as it will block the imaging pipeline and may cause stuttering. When time-consuming algorithms need to be run, it may be helpful to delegate them to another asynchronous thread.
+* The `Process` method blocks the imaging pipeline. If it takes too long, the pipeline will stutter. **Always delegate time-consuming algorithms to an asynchronous thread.**
-* When delegating work to another thread, the way image buffers are handled currently means that they will need to be copied. For some applications, such as image analysis, it may be viable to use the "low resolution" image stream rather than full resolution images.
+* When delegating work to another thread, you must copy the image buffers. For applications like image analysis that don't require full resolution, try using a low-resolution image stream.
-* The post-processing framework adds multi-threading parallelism on a per-frame basis. This is helpful in improving throughput if you want to run on every single frame. Some functions may supply parallelism within each frame (such as OpenCV and TFLite). In these cases it would probably be better to serialise the calls so as to suppress the per-frame parallelism.
+* The post-processing framework _uses parallelism to process every frame_. This improves throughput. However, some OpenCV and TensorFlow Lite functions introduce another layer of parallelism _within_ each frame. Consider serialising calls within each frame since post-processing already takes advantage of multiple threads.
-* Most streams, and in particular the low resolution stream, have YUV420 format. These formats are sometimes not ideal for OpenCV or TFLite so there may sometimes need to be a conversion step.
+* Most streams, including the low resolution stream, use the YUV420 format. You may need to convert this to another format for certain OpenCV or TFLite functions.
-* When images need to be altered, doing so in place is much the easiest strategy.
+* For the best performance, always alter images in-place.
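The YUV420-to-RGB conversion mentioned above can be sketched in stand-alone form. This is an illustrative single-pixel conversion using the common BT.601-style formulas, not code from `rpicam-apps`; the names `Rgb` and `YuvToRgb` are hypothetical:

```cpp
#include <algorithm>
#include <cstdint>

struct Rgb { uint8_t r, g, b; };

static uint8_t clamp255(int v) { return static_cast<uint8_t>(std::min(255, std::max(0, v))); }

// Convert one pixel from YUV to RGB (BT.601-style, full range).
// In a real stage, Y comes from the full-resolution luma plane and
// U/V from the half-resolution chroma planes of the YUV420 buffer.
Rgb YuvToRgb(uint8_t y, uint8_t u, uint8_t v)
{
    int c = y, d = u - 128, e = v - 128;
    return Rgb{
        clamp255(static_cast<int>(c + 1.402 * e)),
        clamp255(static_cast<int>(c - 0.344136 * d - 0.714136 * e)),
        clamp255(static_cast<int>(c + 1.772 * d)),
    };
}
```

Libraries such as OpenCV provide equivalent conversions (for example `cv::cvtColor` with a YUV-to-BGR code), which are usually preferable to hand-rolled loops.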
-* Implementations of any stage should always include a `RegisterStage` call. This registers your new stage with the system so that it will be correctly identified when listed in a JSON file. You will need to add it to the post-processing folder's `CMakeLists.txt` too, of course.
+For a basic example, see https://github.com/raspberrypi/rpicam-apps/blob/main/post_processing_stages/negate_stage.cpp[`negate_stage.cpp`]. This stage negates an image by turning light pixels dark and dark pixels light. The stage is mostly derived-class boilerplate, implementing the negation logic in barely half a dozen lines of code.
-The easiest example to start with is `negate_stage.cpp`, which "negates" an image (turning black white, and vice versa). Aside from a small amount of derived class boiler-plate, it contains barely half a dozen lines of code.
+For another example, see https://github.com/raspberrypi/rpicam-apps/blob/main/post_processing_stages/sobel_cv_stage.cpp[`sobel_cv_stage.cpp`], which implements a Sobel filter in just a few lines of OpenCV functions.
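Stripped of the real `rpicam-apps` headers, the overall shape of a negate-style stage plus its registration can be sketched as a self-contained analogue. Everything here is an illustrative stand-in, apart from the member-function names listed above:

```cpp
#include <cstdint>
#include <functional>
#include <map>
#include <memory>
#include <string>
#include <vector>

// Minimal stand-in for the PostProcessingStage interface described above.
class Stage {
public:
    virtual ~Stage() = default;
    virtual char const *Name() const = 0;
    // Returning true means: do not deliver this frame to the application.
    virtual bool Process(std::vector<uint8_t> &frame) = 0;
};

// Simplified stand-in for RegisterStage: a name -> factory map.
using StageFactory = std::function<std::unique_ptr<Stage>()>;
std::map<std::string, StageFactory> &Registry()
{
    static std::map<std::string, StageFactory> registry;
    return registry;
}
void RegisterStage(std::string const &name, StageFactory factory)
{
    Registry()[name] = std::move(factory);
}

// A negate-style stage: flip every pixel value, in place.
class NegateStage : public Stage {
public:
    char const *Name() const override { return "negate"; }
    bool Process(std::vector<uint8_t> &frame) override
    {
        for (auto &px : frame)
            px = 255 - px;
        return false; // still deliver the frame
    }
};

// Registration normally happens from a static initialiser in the stage's source file,
// so that the stage can be found by the name used in the JSON configuration.
static bool registered =
    (RegisterStage("negate", [] { return std::make_unique<NegateStage>(); }), true);
```

The real framework looks the stage up by the name in your JSON file, exactly as the registry map does here.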
-Next up in complexity is `sobel_cv_stage.cpp`. This implements a Sobel filter using just a few lines of OpenCV functions.
+==== TensorFlow Lite stages
-==== TFLite Stages
+For stages that use TensorFlow Lite (TFLite), derive a new class from the `TfStage` class.
+This class delegates model execution to a separate thread to prevent camera stuttering.
-For stages wanting to analyse images using TensorFlowLite we provide the `TfStage` base class. This provides a certain amount of boilerplate code and makes it much easier to implement new TFLite-based stages by deriving from this class. In particular, it delegates the execution of the model to another thread, so that the full camera framerate is still maintained - it is just the model that will run at a lower framerate.
+The `TfStage` class implements all the `PostProcessingStage` member functions post-processing stages must normally implement, _except for_ ``Name``.
+All `TfStage`-derived stages must implement the ``Name`` function, and should implement some or all of the following virtual member functions:
-The `TfStage` class implements all the public `PostProcessingStage` methods that normally have to be redefined, with the exception of the `Name` method which must still be supplied. It then presents the following virtual methods which derived classes should implement instead.
+`void readExtras()`:: The base class reads the named model and certain other parameters like the `refresh_rate`. Use this function to read extra parameters for the derived stage and to check that the loaded model is correct (e.g. has the right input and output dimensions).
+`void checkConfiguration()`:: The base class fetches the low resolution stream that TFLite operates on and the full resolution stream in case the derived stage needs it. Use this function to check for the streams required by your stage. If your stage can't access one of the required streams, you might skip processing or throw an error.
+`void interpretOutputs()`:: Use this function to read and interpret the model output. _Runs in the model's thread, immediately after the model completes_.
+`void applyResults()`:: Use this function to apply results of the model (could be several frames old) to the current frame. Typically involves attaching metadata or drawing. _Runs in the main thread, before frames are delivered_.
-[cols=",^"]
-|===
-| `void readExtras()` | The base class reads the named model and certain other parameters like the `refresh_rate`. This method can be supplied to read any extra parameters for the derived stage. It is also a good place to check that the loaded model looks as expected (i.e. has right input and output dimensions).
-| `void checkConfiguration()` | The base class fetches the low resolution stream which TFLite will operate on, and the full resolution stream in case the derived stage needs it. This method is provided for the derived class to check that the streams it requires are present. In case any required stream is missing, it may elect simply to avoid processing any images, or it may signal a fatal error.
-| `void interpretOutputs()` | The TFLite model runs asynchronously so that it can run "every few frames" without holding up the overall framerate. This method gives the derived stage the chance to read and interpret the model's outputs, running right after the model itself and in that same thread.
-| `void applyResults()` | Here we are running once again in the main thread and so this method should run reasonably quickly so as not to hold up the supply of frames to the application. It is provided so that the last results of the model (which might be a few frames ago) can be applied to the current frame. Typically this would involve attaching metadata to the image, or perhaps drawing something onto the main image.
-|===
-
-For further information, readers are referred to the supplied example code implementing the `ObjectClassifyTfStage` and `PoseEstimationTfStage` classes.
+For example implementations, see https://github.com/raspberrypi/rpicam-apps/blob/main/post_processing_stages/object_classify_tf_stage.cpp[`object_classify_tf_stage.cpp`] and https://github.com/raspberrypi/rpicam-apps/blob/main/post_processing_stages/pose_estimation_tf_stage.cpp[`pose_estimation_tf_stage.cpp`].
diff --git a/documentation/asciidoc/computers/camera/rpicam_apps_writing.adoc b/documentation/asciidoc/computers/camera/rpicam_apps_writing.adoc
index 158e191586..fd5a9217bd 100644
--- a/documentation/asciidoc/computers/camera/rpicam_apps_writing.adoc
+++ b/documentation/asciidoc/computers/camera/rpicam_apps_writing.adoc
@@ -1,62 +1,59 @@
-=== Understanding and Writing your own Apps
+=== Write your own `rpicam` apps
-`rpicam-apps` are not supposed to be a full set of all the applications with all the features that anyone could ever need. Instead, they are supposed to be easy to understand, such that users who require slightly different behaviour can implement it for themselves.
+`rpicam-apps` does not provide all of the camera-related features that anyone could ever need. Instead, these applications are small and flexible. Users who require different behaviour can implement it themselves.
-All the applications work by having a simple event loop which receives a message with a new set of frames from the camera system. This set of frames is called a `CompletedRequest`. It contains all the images that have been derived from that single camera frame (so perhaps a low resolution image in addition to the full size output), as well as metadata from the camera system and further metadata from the post-processing system.
+All of the `rpicam-apps` use an event loop that receives messages when a new set of frames arrives from the camera system. This set of frames is called a `CompletedRequest`. The `CompletedRequest` contains:
-==== `rpicam-hello`
-
-`rpicam-hello` is much the easiest application to understand. The only thing it does with the camera images is extract the `CompletedRequestPtr` (a shared pointer to the `CompletedRequest`) from the message:
+* all images derived from that single camera frame: often a low-resolution image and a full-size output
+* metadata from the camera and post-processing systems
-----
-    CompletedRequestPtr &completed_request = std::get<CompletedRequestPtr>(msg.payload);
-----
+==== `rpicam-hello`
-and forward it to the preview window:
+`rpicam-hello` is the smallest application, and the best place to start understanding `rpicam-apps` design. It extracts the `CompletedRequestPtr`, a shared pointer to the `CompletedRequest`, from the message, and forwards it to the preview window:
+[source,cpp]
----
- app.ShowPreview(completed_request, app.ViewfinderStream());
+CompletedRequestPtr &completed_request = std::get<CompletedRequestPtr>(msg.payload);
+app.ShowPreview(completed_request, app.ViewfinderStream());
----
-One important thing to note is that every `CompletedRequest` must be recycled back to the camera system so that the buffers can be reused, otherwise it will simply run out of buffers in which to receive new camera frames. This recycling process happens automatically when all references to the `CompletedRequest` are dropped, using {cpp}'s _shared pointer_ and _custom deleter_ mechanisms.
+Every `CompletedRequest` must be recycled back to the camera system so that its buffers can be reused. Otherwise, the camera runs out of buffers for new frames. This recycling happens automatically when no references to the `CompletedRequest` remain, thanks to {cpp}'s _shared pointer_ and _custom deleter_ mechanisms.
-In `rpicam-hello` therefore, two things must happen for the `CompletedRequest` to be returned to the camera.
+As a result, `rpicam-hello` must complete the following actions to recycle the buffer space:
-1. The event loop must go round again so that the message (`msg` in the code), which is holding a reference to the shared pointer, is dropped.
+* The event loop must finish a cycle so the message (`msg` in the code), which holds a reference to `CompletedRequest`, can be replaced with the next message. This discards the reference to the previous message.
-2. The preview thread, which takes another reference to the `CompletedRequest` when `ShowPreview` is called, must be called again with a new `CompletedRequest`, causing the previous one to be dropped.
+* When the event thread calls `ShowPreview`, it passes the preview thread a reference to the `CompletedRequest`. The preview thread discards the last `CompletedRequest` instance each time `ShowPreview` is called.
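The recycling mechanism described above can be illustrated in isolation: a shared pointer whose custom deleter returns the underlying buffer to a free queue the moment the last reference is dropped. This is a minimal stand-alone sketch (the `Buffer`, `free_buffers`, and `Dispatch` names are hypothetical), not `rpicam-apps` code:

```cpp
#include <cstdint>
#include <deque>
#include <memory>
#include <vector>

struct Buffer { std::vector<uint8_t> data; };

// Free buffers waiting to receive new camera frames.
std::deque<Buffer *> free_buffers;

// Hand a buffer out as a shared pointer whose custom deleter pushes the
// raw buffer back onto the free queue instead of freeing its memory.
std::shared_ptr<Buffer> Dispatch(Buffer *buf)
{
    return std::shared_ptr<Buffer>(buf, [](Buffer *b) { free_buffers.push_back(b); });
}
```

When every copy of the returned shared pointer (event loop, preview window, encoder) goes out of scope, the deleter runs exactly once and the buffer reappears on the free queue, ready for the next frame.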
==== `rpicam-vid`
-`rpicam-vid` is not unlike `rpicam-hello`, but it adds a codec to the event loop and the preview. Before the event loop starts, we must configure that encoder with a callback which says what happens to the buffer containing the encoded image data.
+`rpicam-vid` is similar to `rpicam-hello` with encoding added to the event loop. Before the event loop starts, `rpicam-vid` configures the encoder with a callback. The callback handles the buffer containing the encoded image data. In the code below, we send the buffer to the `Output` object. `Output` could write it to a file or stream it, depending on the options specified.
+[source,cpp]
----
- app.SetEncodeOutputReadyCallback(std::bind(&Output::OutputReady, output.get(), _1, _2, _3, _4));
+app.SetEncodeOutputReadyCallback(std::bind(&Output::OutputReady, output.get(), _1, _2, _3, _4));
----
-Here we send the buffer to the `Output` object which may write it to a file, or send it over the network, according to our choice when we started the application.
-
-The encoder also takes a new reference to the `CompletedRequest`, so once the event loop, the preview window and the encoder all drop their references, the `CompletedRequest` will be recycled automatically back to the camera system.
+Because this code passes the encoder a reference to the `CompletedRequest`, `rpicam-vid` can't recycle buffer data until the event loop, preview window, _and_ encoder all discard their references.
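The callback wiring itself is ordinary `std::bind` with placeholders. The sketch below reproduces the pattern in stand-alone form; the `Output` class here is a simplified stand-in that only mirrors the four-argument shape of the call above:

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>

using namespace std::placeholders; // _1, _2, _3, _4

struct Output {
    size_t total_bytes = 0;
    // Shape mirrors an encoder output callback: buffer, length, timestamp, keyframe flag.
    void OutputReady(void *mem, size_t size, int64_t timestamp_us, bool keyframe)
    {
        (void)mem; (void)timestamp_us; (void)keyframe;
        total_bytes += size; // a real Output would write or stream the buffer here
    }
};

using EncodeOutputReadyCallback = std::function<void(void *, size_t, int64_t, bool)>;

int RunSketch()
{
    Output output;
    // Bind the member function to the output instance, forwarding all four arguments.
    EncodeOutputReadyCallback callback =
        std::bind(&Output::OutputReady, &output, _1, _2, _3, _4);
    uint8_t fake_encoded[100];
    callback(fake_encoded, sizeof(fake_encoded), 0, true); // the encoder thread would call this
    return static_cast<int>(output.total_bytes);
}
```

Because the bound callback holds a plain pointer to `output`, the `Output` object must outlive the encoder, which is why the applications create it before starting the event loop.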
==== `rpicam-raw`
-`rpicam-raw` is not so very different from `rpicam-vid`. It too uses an encoder, although this time it is a "dummy" encoder called the `NullEncoder`. This just treats the input image directly as the output buffer and is careful not to drop its reference to the input until the output callback has dealt with it first.
+`rpicam-raw` is similar to `rpicam-vid`. It also encodes during the event loop. However, `rpicam-raw` uses a dummy encoder called the `NullEncoder`. This uses the input image as the output buffer instead of encoding it with a codec. `NullEncoder` only discards its reference to the buffer once the output callback completes. This guarantees that the buffer isn't recycled before the callback processes the image.
-This time, however, we do not forward anything to the preview window, though we could have displayed the (processed) video stream if we had wanted.
+`rpicam-raw` doesn't forward anything to the preview window.
-The use of the `NullEncoder` is possibly overkill in this application, as we could probably just send the image straight to the `Output` object. However, it serves to underline the general principle that it is normally a bad idea to do too much work directly in the event loop, and time-consuming processes are often better left to other threads.
+`NullEncoder` is possibly overkill in `rpicam-raw`; we could probably send images straight to the `Output` object instead. However, `rpicam-apps` needs to limit the work done in the event loop. `NullEncoder` demonstrates how you can handle most processing (even holding onto a buffer reference) in other threads.
==== `rpicam-jpeg`
-We discuss `rpicam-jpeg` rather than `rpicam-still` as the basic idea (that of switching the camera from preview into capture mode) is the same, and `rpicam-jpeg` has far fewer additional options (such as timelapse capture) that serve to distract from the basic function.
-
-`rpicam-jpeg` starts the camera in preview mode in the usual way, but at the appropriate moment stops it and switches to still capture:
+`rpicam-jpeg` starts the camera in preview mode in the usual way. When the timer completes, it stops the preview and switches to still capture:
+[source,cpp]
----
- app.StopCamera();
- app.Teardown();
- app.ConfigureStill();
- app.StartCamera();
+app.StopCamera();
+app.Teardown();
+app.ConfigureStill();
+app.StartCamera();
----
-Then the event loop will grab the first frame that emerges once it's no longer in preview mode, and saves this as a JPEG.
+The event loop grabs the first frame returned from still mode and saves this as a JPEG.
diff --git a/documentation/asciidoc/computers/camera/rpicam_configuration.adoc b/documentation/asciidoc/computers/camera/rpicam_configuration.adoc
new file mode 100644
index 0000000000..c36db3f69d
--- /dev/null
+++ b/documentation/asciidoc/computers/camera/rpicam_configuration.adoc
@@ -0,0 +1,57 @@
+=== Configuration
+
+Most use cases work automatically with no need to alter the camera configuration. However, some common use cases do require configuration tweaks, including:
+
+* Third-party cameras (the manufacturer's instructions should explain necessary configuration changes, if any)
+
+* Using a non-standard driver or overlay with an official Raspberry Pi camera
+
+Raspberry Pi OS recognises the following overlays in `/boot/firmware/config.txt`.
+
+|===
+| Camera Module | In `/boot/firmware/config.txt`
+
+| V1 camera (OV5647)
+| `dtoverlay=ov5647`
+
+| V2 camera (IMX219)
+| `dtoverlay=imx219`
+
+| HQ camera (IMX477)
+| `dtoverlay=imx477`
+
+| GS camera (IMX296)
+| `dtoverlay=imx296`
+
+| Camera Module 3 (IMX708)
+| `dtoverlay=imx708`
+
+| IMX290 and IMX327
+| `dtoverlay=imx290,clock-frequency=74250000` or `dtoverlay=imx290,clock-frequency=37125000` (both modules share the imx290 kernel driver; refer to instructions from the module vendor for the correct frequency)
+
+| IMX378
+| `dtoverlay=imx378`
+
+| OV9281
+| `dtoverlay=ov9281`
+|===
+
+To use one of these overlays, you must disable automatic camera detection. To disable automatic detection, set `camera_auto_detect=0` in `/boot/firmware/config.txt`. If `config.txt` already contains a line assigning a `camera_auto_detect` value, change the value to `0`. Reboot your Raspberry Pi with `sudo reboot` to load your changes.
+
+If your Raspberry Pi has two camera connectors (Raspberry Pi 5 or one of the Compute Modules, for example), then you can specify the use of camera connector 0 by adding `,cam0` to the `dtoverlay` that you used from the table above. If you do not add this, it will default to checking camera connector 1. Note that for official Raspberry Pi camera modules connected to SBCs (not Compute Modules), auto-detection will correctly identify all the cameras connected to your device.
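For example, a `/boot/firmware/config.txt` selecting a Camera Module 3 on camera connector 0 might contain the following (an illustrative combination of the settings above):

```ini
# Disable automatic camera detection so the overlay below takes effect
camera_auto_detect=0
# Camera Module 3 (IMX708) on camera connector 0
dtoverlay=imx708,cam0
```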
+
+[[tuning-files]]
+==== Tweak camera behaviour with tuning files
+
+Raspberry Pi's `libcamera` implementation includes a **tuning file** for each camera. This file controls algorithms and hardware to produce the best image quality. `libcamera` can only determine the sensor in use, not the module. As a result, some modules require a tuning file override. Use the xref:camera_software.adoc#tuning-file[`tuning-file`] option to specify an override. You can also copy and alter existing tuning files to customise camera behaviour.
+
+For example, the no-IR-filter (NoIR) versions of sensors use Auto White Balance (AWB) settings different from the standard versions. On a Raspberry Pi 5 or later, you can specify the NoIR tuning file for the IMX219 sensor with the following command:
+
+[source,console]
+----
+$ rpicam-hello --tuning-file /usr/share/libcamera/ipa/rpi/pisp/imx219_noir.json
+----
+
+NOTE: Raspberry Pi models prior to Raspberry Pi 5 use different tuning files. On those devices, use the files stored in `/usr/share/libcamera/ipa/rpi/vc4/` instead.
+
+`libcamera` maintains tuning files for a number of cameras, including third-party models. For instance, you can find the tuning file for the Soho Enterprises SE327M12 in `se327m12.json`.
diff --git a/documentation/asciidoc/computers/camera/rpicam_detect.adoc b/documentation/asciidoc/computers/camera/rpicam_detect.adoc
index 50e068cb0c..e75a4a630f 100644
--- a/documentation/asciidoc/computers/camera/rpicam_detect.adoc
+++ b/documentation/asciidoc/computers/camera/rpicam_detect.adoc
@@ -1,22 +1,14 @@
=== `rpicam-detect`
-`rpicam-detect` is not supplied by default in any Raspberry Pi OS distribution, but can be built by users who have xref:camera_software.adoc#post-processing-with-tensorflow-lite[installed TensorFlow Lite]. In this case, please refer to the xref:camera_software.adoc#building-libcamera-and-rpicam-apps[`rpicam-apps` build instructions]. You will need to run `cmake` with `-DENABLE_TFLITE=1`.
+NOTE: Raspberry Pi OS does not include `rpicam-detect`. However, you can build `rpicam-detect` if you have xref:camera_software.adoc#post-processing-with-tensorflow-lite[installed TensorFlow Lite]. For more information, see the xref:camera_software.adoc#build-libcamera-and-rpicam-apps[`rpicam-apps` build instructions]. Don't forget to pass `-Denable_tflite=enabled` when you run `meson`.
-This application runs a preview window and monitors the contents using a Google MobileNet v1 SSD (Single Shot Detector) neural network that has been trained to identify about 80 classes of objects using the Coco dataset. It should recognise people, cars, cats and many other objects.
+`rpicam-detect` displays a preview window and monitors the contents using a Google MobileNet v1 SSD (Single Shot Detector) neural network trained to identify about 80 classes of objects using the Coco dataset. `rpicam-detect` recognises people, cars, cats and many other objects.
-Its starts by running a preview window, and whenever the target object is detected it will perform a full resolution JPEG capture, before returning back to the preview mode to continue monitoring. It provides a couple of additional command line options that do not apply elsewhere:
+Whenever `rpicam-detect` detects a target object, it captures a full-resolution JPEG. Then it returns to monitoring preview mode.
-`--object <name>`
+See the xref:camera_software.adoc#object_detect_tf-stage[TensorFlow Lite object detector] section for general information on model usage. For example, you might spy secretly on your cats while you are away with:
-Detect objects with the given `<name>`. The name should be taken from the model's label file.
-
-`--gap <number>`
-
-Wait at least this many frames after a capture before performing another. This is necessary because the neural network does not run on every frame, so it is best to give it a few frames to run again before considering another capture.
-
-Please refer to the xref:camera_software.adoc#object_detect_tf-stage[TensorFlow Lite object detector] section for more general information on how to obtain and use this model. But as an example, you might spy secretly on your cats while you are away with:
-
-[,bash]
+[source,console]
----
-rpicam-detect -t 0 -o cat%04d.jpg --lores-width 400 --lores-height 300 --post-process-file object_detect_tf.json --object cat
+$ rpicam-detect -t 0 -o cat%04d.jpg --lores-width 400 --lores-height 300 --post-process-file object_detect_tf.json --object cat
----
diff --git a/documentation/asciidoc/computers/camera/rpicam_hello.adoc b/documentation/asciidoc/computers/camera/rpicam_hello.adoc
index 9ad569395e..de7dae16f9 100644
--- a/documentation/asciidoc/computers/camera/rpicam_hello.adoc
+++ b/documentation/asciidoc/computers/camera/rpicam_hello.adoc
@@ -1,77 +1,41 @@
=== `rpicam-hello`
-`rpicam-hello` is the equivalent of a "hello world" application for the camera. It starts the camera, displays a preview window, and does nothing else. For example
+`rpicam-hello` briefly displays a preview window containing the video feed from a connected camera. To use `rpicam-hello` to display a preview window for five seconds, run the following command in a terminal:
-[,bash]
+[source,console]
----
-rpicam-hello
+$ rpicam-hello
----
-should display a preview window for about 5 seconds. The `-t <duration>` option lets the user select how long the window is displayed, where `<duration>` is given in milliseconds. To run the preview indefinitely, use:
-[,bash]
+You can pass an optional duration (in milliseconds) with the xref:camera_software.adoc#timeout[`timeout`] option. A value of `0` runs the preview indefinitely:
+
+[source,console]
----
-rpicam-hello -t 0
+$ rpicam-hello --timeout 0
----
-The preview can be halted either by clicking the window's close button, or using `Ctrl-C` in the terminal.
-
-==== Options
-
-`rpicam-apps` uses a 3rd party library to interpret command line options. This includes _long form_ options where the option name consists of more than one character preceded by `--`, and _short form_ options which can only be a single character preceded by a single `-`. For the most part option names are chosen to match those used by the legacy `raspicam` applications with the exception that we can no longer handle multi-character option names with a single `-`. Any such legacy options have been dropped and the long form with `--` must be used instead.
-
-The options are classified broadly into 3 groups, namely those that are common, those that are specific to still images, and those that are for video encoding. They are supported in an identical manner across all the applications where they apply.
-
-Please refer to the xref:camera_software.adoc#common-command-line-options[command line options documentation] for a complete list.
+Use `Ctrl+C` in the terminal or the close button on the preview window to stop `rpicam-hello`.
-==== The Tuning File
+==== Display an image sensor preview
-Raspberry Pi's `libcamera` implementation includes a _tuning file_ for each different type of camera module. This is a file that describes or "tunes" the parameters that will be passed to the algorithms and hardware to produce the best image quality. `libcamera` is only able to determine automatically the image sensor being used, not the module as a whole - even though the whole module affects the "tuning".
-
-For this reason it is sometimes necessary to override the default tuning file for a particular sensor.
-
-For example, the NOIR (no IR-filter) versions of sensors require different AWB settings to the standard versions, so the IMX219 NOIR being used with a Pi 4 or earlier device should be run using
-
-[,bash]
-----
-rpicam-hello --tuning-file /usr/share/libcamera/ipa/rpi/vc4/imx219_noir.json
-----
+Most of the `rpicam-apps` display a preview image in a window. If there is no active desktop environment, the preview draws directly to the display using the Linux Direct Rendering Manager (DRM). Otherwise, `rpicam-apps` attempt to use the desktop environment. Both paths use zero-copy GPU buffer sharing: as a result, X forwarding is _not_ supported.
-Pi 5 (or later devices) use a different tuning file in a different folder, so here you would use
+If you run the X window server and want to use X forwarding, pass the xref:camera_software.adoc#qt-preview[`qt-preview`] flag to render the preview window in a https://en.wikipedia.org/wiki/Qt_(software)[Qt] window. The Qt preview window uses more resources than the alternatives.
-[,bash]
-----
-rpicam-hello --tuning-file /usr/share/libcamera/ipa/rpi/pisp/imx219_noir.json
-----
+NOTE: Older systems using Gtk2 may, when linked with OpenCV, produce `Glib-GObject` errors and fail to show the Qt preview window. In this case edit the file `/etc/xdg/qt5ct/qt5ct.conf` as root and replace the line containing `style=gtk2` with `style=gtk3`.
-If you are using a Soho Enterprises SE327M12 module with a Pi 4 you would use
+To suppress the preview window entirely, pass the xref:camera_software.adoc#nopreview[`nopreview`] flag:
-[,bash]
+[source,console]
----
-rpicam-hello --tuning-file /usr/share/libcamera/ipa/rpi/vc4/se327m12.json
+$ rpicam-hello -n
----
-Notice how this also means that users can copy an existing tuning file and alter it according to their own preferences, so long as the `--tuning-file` parameter is pointed to the new version.
-
-Finally, the `--tuning-file` parameter, in common with other `rpicam-hello` command line options, applies identically across all the `rpicam-apps`.
-
-==== Preview Window
-
-Most of the `rpicam-apps` display a preview image in a window. If there is no active desktop environment, it will draw directly to the display using Linux DRM (Direct Rendering Manager), otherwise it will attempt to use the desktop environment. Both paths use zero-copy buffer sharing with the GPU, and a consequence of this is that X forwarding is _not_ supported.
+The xref:camera_software.adoc#info-text[`info-text`] option displays image information on the window title bar using `%` directives. For example, the following command displays the current red and blue gain values:
-For this reason there is a third kind of preview window which does support X forwarding, and can be requested with the `--qt-preview` option. This implementation does not benefit from zero-copy buffer sharing nor from 3D acceleration which makes it computationally expensive (especially for large previews), and so is not normally recommended.
-
-NOTE: Older systems using Gtk2 may, when linked with OpenCV, produce `Glib-GObject` errors and fail to show the Qt preview window. In this case please (as root) edit the file `/etc/xdg/qt5ct/qt5ct.conf` and replace the line containing `style=gtk2` with `style=gtk3`.
-
-The preview window can be suppressed entirely with the `-n` (`--nopreview`) option.
-
-The `--info-text` option allows the user to request that certain helpful image information is displayed on the window title bar using "% directives". For example
-
-[,bash]
+[source,console]
----
-rpicam-hello --info-text "red gain %rg, blue gain %bg"
+$ rpicam-hello --info-text "red gain %rg, blue gain %bg"
----
-will display the current red and blue gain values.
-
-For the HQ camera, use `--info-text "%focus"` to display the focus measure, which will be helpful for focusing the lens.
-A full description of the `--info-text` parameter is given in the xref:camera_software.adoc#common-command-line-options[command line options documentation].
+For a full list of directives, see the xref:camera_software.adoc#info-text[`info-text` reference].
diff --git a/documentation/asciidoc/computers/camera/rpicam_jpeg.adoc b/documentation/asciidoc/computers/camera/rpicam_jpeg.adoc
index 64dab71b80..2531487284 100644
--- a/documentation/asciidoc/computers/camera/rpicam_jpeg.adoc
+++ b/documentation/asciidoc/computers/camera/rpicam_jpeg.adoc
@@ -1,48 +1,19 @@
=== `rpicam-jpeg`
-`rpicam-jpeg` is a simple still image capture application. It deliberately avoids some of the additional features of `rpicam-still` which attempts to emulate `raspistill` more fully. As such the code is significantly easier to understand, and in practice still provides many of the same features.
+`rpicam-jpeg` is a simple still image capture application for Raspberry Pi devices.
-To capture a full resolution JPEG image use
+To capture a full resolution JPEG image and save it to a file named `test.jpg`, run the following command:
-[,bash]
+[source,console]
----
-rpicam-jpeg -o test.jpg
+$ rpicam-jpeg --output test.jpg
----
-which will display a preview for about 5 seconds, and then capture a full resolution JPEG image to the file `test.jpg`.
-The `-t ` option can be used to alter the length of time the preview shows, and the `--width` and `--height` options will change the resolution of the captured still image. For example
+You should see a preview window for five seconds. Then, `rpicam-jpeg` captures a full resolution JPEG image and saves it.
-[,bash]
-----
-rpicam-jpeg -o test.jpg -t 2000 --width 640 --height 480
-----
-will capture a VGA sized image.
-
-==== Exposure Control
-
-All the `rpicam-apps` allow the user to run the camera with fixed shutter speed and gain. For example
+Use the xref:camera_software.adoc#timeout[`timeout`] option to alter the display time of the preview window. The xref:camera_software.adoc#width-and-height[`width` and `height`] options change the resolution of the saved image. For example, the following command displays the preview window for two seconds, then captures and saves an image with a resolution of 640×480 pixels:
-[,bash]
+[source,console]
----
-rpicam-jpeg -o test.jpg -t 2000 --shutter 20000 --gain 1.5
-----
-would capture an image with an exposure of 20ms and a gain of 1.5x. Note that the gain will be applied as _analogue gain_ within the sensor up until it reaches the maximum analogue gain permitted by the kernel sensor driver, after which the remainder will be applied as digital gain.
-
-Raspberry Pi's AEC/AGC algorithm allows applications to specify _exposure compensation_, that is, the ability to make images darker or brighter by a given number of _stops_, as follows
-
-[,bash]
+$ rpicam-jpeg --output test.jpg --timeout 2000 --width 640 --height 480
----
-rpicam-jpeg --ev -0.5 -o darker.jpg
-rpicam-jpeg --ev 0 -o normal.jpg
-rpicam-jpeg --ev 0.5 -o brighter.jpg
-----
-
-===== Further remarks on Digital Gain
-
-Digital gain is applied by the ISP (the Image Signal Processor), not by the sensor. The digital gain will always be very close to 1.0 unless:
-
-* The total gain requested (either by the `--gain` option, or by the exposure profile in the camera tuning) exceeds that which can be applied as analogue gain within the sensor. Only the extra gain required will be applied as digital gain.
-
-* One of the colour gains is less than 1 (note that colour gains are applied as digital gain too). In this case the advertised digital gain will settle to 1 / min(red_gain, blue_gain). This actually means that one of the colour channels - just not the green one - is having unity digital gain applied to it.
-
-* The AEC/AGC is changing. When the AEC/AGC is moving the digital gain will typically vary to some extent to try and smooth out any fluctuations, but it will quickly settle back to its "normal" value.
diff --git a/documentation/asciidoc/computers/camera/rpicam_options_common.adoc b/documentation/asciidoc/computers/camera/rpicam_options_common.adoc
index 809bc79327..1f9f64b397 100644
--- a/documentation/asciidoc/computers/camera/rpicam_options_common.adoc
+++ b/documentation/asciidoc/computers/camera/rpicam_options_common.adoc
@@ -1,29 +1,39 @@
-=== Common Command Line Options
+== `rpicam-apps` options reference
-The following options apply across all the `rpicam-apps` with similar or identical semantics, unless noted otherwise.
+=== Common options
-----
- --help, -h Print help information for the application
-----
+The following options apply across all the `rpicam-apps` with similar or identical semantics, unless otherwise noted.
-The `--help` option causes every application to print its full set of command line options with a brief synopsis of each, and then quit.
+To pass one of the following options to an application, prefix the option name with `--`. If the option requires a value, pass the value immediately after the option name, separated by a single space. If the value contains a space, surround the value in quotes.
-----
- --version Print out a software version number
-----
+Some options have shorthand aliases, for example `-h` instead of `--help`. Use these shorthand aliases instead of the full option name to save space and time at the expense of readability.
+
+==== `help`
+
+Alias: `-h`
+
+Prints the full set of options, along with a brief synopsis of each option. Does not accept a value.
+
+==== `version`
-All `rpicam-apps` will, when they see the `--version` option, print out a version string both for `libcamera` and `rpicam-apps` and then quit, for example:
+Prints out version strings for `libcamera` and `rpicam-apps`. Does not accept a value.
+
+Example output:
----
rpicam-apps build: ca559f46a97a 27-09-2021 (14:10:24)
libcamera build: v0.0.0+3058-c29143f7
----
-----
- --list-cameras List the cameras available for use
-----
+==== `list-cameras`
+
+Lists the detected cameras attached to your Raspberry Pi and their available sensor modes. Does not accept a value.
-The `--list-cameras` will display the available cameras attached to the board that can be used by the application. This option also lists the sensor modes supported by each camera. For example:
+Sensor mode identifiers have the following form: `S<Bayer order><Bit-depth>_<Optional packing> : <Resolution list>`
+
+Crop is specified in native sensor pixels (even in pixel binning mode) in the form `(<x>, <y>)/<width>×<height>`, where `(x, y)` is the location of the crop window of size `width × height` within the sensor array.
+
+For example, the following output displays information about an `IMX219` sensor at index 0 and an `IMX477` sensor at index 1:
----
Available cameras
@@ -44,514 +54,520 @@ Available cameras
4056x3040 [10.00 fps - (0, 0)/4056x3040 crop]
----
-In the above example, the IMX219 sensor is available at index 0 and IMX477 at index 1. The sensor mode identifier takes the following form:
-----
-S_ :
-----
-For the IMX219 in the above example, all modes have a `RGGB` Bayer ordering and provide either 8-bit or 10-bit CSI2 packed readout at the listed resolutions. The crop is specified as (, )/x, where (x, y) is the location of the crop window of size Width x Height in the sensor array. The units remain native sensor pixels, even if the sensor is being used in a binning or skipping mode.
+For the IMX219 sensor in the above example:
-----
- --camera Selects which camera to use
-----
+* all modes have an `RGGB` Bayer ordering
+* all modes provide either 8-bit or 10-bit CSI2 packed readout at the listed resolutions
-The `--camera` option will select which camera to use from the supplied value. The value can be obtained from the `--list-cameras` option.
+==== `camera`
-----
- --config, -c Read options from the given file
-----
+Selects the camera to use. Specify an index from the xref:camera_software.adoc#list-cameras[list of available cameras].
-Normally options are read from the command line, but in case multiple options are required it may be more convenient to keep them in a file.
+==== `config`
-Example: `rpicam-hello -c config.txt`
+Alias: `-c`
-This is a text file containing individual lines of `key=value` pairs, for example:
+Specify a file containing CLI options and values. Consider a file named `example_configuration.txt` that contains the following text, specifying options and values as key-value pairs, one option per line, long (non-alias) option names only:
----
timeout=99000
verbose=
----
-Note how the `=` is required even for implicit options, and that the `--` used on the command line are omitted. Only long form options are permitted (`t=99000` would not be accepted).
+TIP: Omit the leading `--` that you normally pass on the command line. For flags that lack a value, such as `verbose` in the above example, you must include a trailing `=`.
+You could then run the following command to specify a timeout of 99000 milliseconds and verbose output:
+
+[source,console]
----
- --timeout, -t Delay before application stops automatically
+$ rpicam-hello --config example_configuration.txt
----
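The key-value rules above can be sketched programmatically. The following Python snippet (purely illustrative, not part of `rpicam-apps`) builds the contents of a config file from a dictionary of options, using long option names only and a trailing `=` for valueless flags:

```python
# Illustrative: generate rpicam-apps config file text (key=value pairs,
# long option names only; valueless flags still need a trailing '=').
options = {"timeout": "99000", "verbose": ""}

lines = [f"{name}={value}" for name, value in options.items()]
config_text = "\n".join(lines) + "\n"
print(config_text)
```

Writing `config_text` to a file such as `example_configuration.txt` produces exactly the format shown above.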
-The `--timeout` option specifies how long the application runs before it stops, whether it is recording a video or showing a preview. In the case of still image capture, the application will show the preview window for this long before capturing the output image.
+==== `timeout`
-If unspecified, the default value is 5000 (5 seconds). The value zero causes the application to run indefinitely.
+Alias: `-t`
-Example: `rpicam-hello -t 0`
+Default value: 5000 milliseconds (5 seconds)
-==== Preview window
+Specify how long the application runs before closing. This value is interpreted as a number of milliseconds unless an optional suffix is used to change the unit. The suffix may be one of:
-----
- --preview, -p Preview window settings
-----
+* `min` - minutes
+* `s` or `sec` - seconds
+* `ms` - milliseconds (the default if no suffix used)
+* `us` - microseconds
+* `ns` - nanoseconds
+
+This time applies to both video recording and preview windows. When capturing a still image, the application shows a preview window for the length of time specified by the `timeout` parameter before capturing the output image.
-Sets the size and location of the preview window (both desktop and DRM versions). It does not affect the resolution or aspect ratio of images being requested from the camera. The camera images will be scaled to the size of the preview window for display, and will be pillar/letter-boxed to fit.
+To run the application indefinitely, specify a value of `0`. Floating point values are also permitted.
-Example: `rpicam-hello -p 100,100,500,500`
+Example: `rpicam-hello -t 0.5min` would run for 30 seconds.
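The suffix rules above can be illustrated with a short Python sketch (a hypothetical helper, not part of `rpicam-apps`) that converts a timeout string into milliseconds:

```python
# Hypothetical helper mirroring the timeout suffixes described above:
# min, s/sec, ms (default), us, ns. Returns a value in milliseconds.
def timeout_to_ms(value: str) -> float:
    suffixes = {"min": 60_000, "sec": 1_000, "ms": 1, "us": 1e-3, "ns": 1e-6, "s": 1_000}
    # Check longer suffixes first so "ms" is not mistaken for "s".
    for suffix in sorted(suffixes, key=len, reverse=True):
        if value.endswith(suffix):
            return float(value[: -len(suffix)]) * suffixes[suffix]
    return float(value)  # no suffix: interpreted as milliseconds

print(timeout_to_ms("0.5min"))  # 30000.0, i.e. 30 seconds
print(timeout_to_ms("99000"))   # 99000.0
```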
+
+==== `preview`
+
+Alias: `-p`
+
+Sets the location (x,y coordinates) and size (w,h dimensions) of the desktop or DRM preview window. Does not affect the resolution or aspect ratio of images requested from the camera. Images are scaled to fit the preview window, pillarboxed or letterboxed to preserve their aspect ratio.
+
+Pass the preview window dimensions in the following comma-separated form: `x,y,w,h`
+
+Example: `rpicam-hello --preview 100,100,500,500`
image::images/preview_window.jpg[Letterboxed preview image]
-----
- --fullscreen, -f Fullscreen preview mode
-----
+==== `fullscreen`
-Forces the preview window to use the whole screen, and the window will have no border or title bar. Again the image may be pillar/letter-boxed.
+Alias: `-f`
-Example `rpicam-still -f -o test.jpg`
+Forces the preview window to use the entire screen with no border or title bar. Images are scaled to fit the screen, pillarboxed or letterboxed to preserve their aspect ratio. Does not accept a value.
-----
- --qt-preview Use Qt-based preview window
-----
+==== `qt-preview`
-The preview window is switched to use the Qt-based implementation. This option is not normally recommended because it no longer uses zero-copy buffer sharing nor GPU acceleration and is therefore very expensive, however, it does support X forwarding (which the other preview implementations do not).
+Uses the Qt preview window, which consumes more resources than the alternatives, but supports X window forwarding. Incompatible with the xref:camera_software.adoc#fullscreen[`fullscreen`] flag. Does not accept a value.
-The Qt preview window does not support the `--fullscreen` option. Generally it is advised to try and keep the preview window small.
+==== `nopreview`
-Example `rpicam-hello --qt-preview`
+Alias: `-n`
-----
- --nopreview, -n Do not display a preview window
-----
+Causes the application to _not_ display a preview window at all. Does not accept a value.
-The preview window is suppressed entirely.
-Example `rpicam-still -n -o test.jpg`
+==== `info-text`
-----
- --info-text Set window title bar text
-----
+Default value: `"#%frame (%fps fps) exp %exp ag %ag dg %dg"`
-The supplied string is set as the title of the preview window (when running on a desktop environment). Additionally the string may contain a number of `%` directives which are substituted with information from the image metadata. The permitted directives are
+Sets the supplied string as the title of the preview window when running in a desktop environment. Supports the following image metadata substitutions:
|===
| Directive | Substitution
-| %frame
-| The sequence number of the frame
+| `%frame`
+| Sequence number of the frame.
-| %fps
-| The instantaneous frame rate
+| `%fps`
+| Instantaneous frame rate.
-| %exp
-| The shutter speed used to capture the image, in microseconds
+| `%exp`
+| Shutter speed used to capture the image, in microseconds.
-| %ag
-| The analogue gain applied to the image in the sensor
+| `%ag`
+| Analogue gain applied to the image in the sensor.
-| %dg
-| The digital gain applied to the image by the ISP
+| `%dg`
+| Digital gain applied to the image by the ISP.
-| %rg
-| The gain applied to the red component of each pixel
+| `%rg`
+| Gain applied to the red component of each pixel.
-| %bg
-| The gain applied to the blue component of each pixel
+| `%bg`
+| Gain applied to the blue component of each pixel.
-| %focus
-| The focus metric for the image, where a larger value implies a sharper image
+| `%focus`
+| Focus metric for the image, where a larger value implies a sharper image.
-| %lp
-| The current lens position in dioptres (1 / distance in metres).
+| `%lp`
+| Current lens position in dioptres (1 / distance in metres).
-| %afstate
-| The autofocus algorithm state (one of `idle`, `scanning`, `focused` or `failed`).
+| `%afstate`
+| Autofocus algorithm state (`idle`, `scanning`, `focused` or `failed`).
|===
-When not provided, the `--info-text` string defaults to `"#%frame (%fps fps) exp %exp ag %ag dg %dg"`.
+image::images/focus.jpg[Image showing focus measure]
-Example: `rpicam-hello --info-text "Focus measure: %focus"`
+==== `width` and `height`
-image::images/focus.jpg[Image showing focus measure]
+Each accepts a single number defining one dimension, in pixels, of the captured image.
-==== Camera Resolution and Readout
+For `rpicam-still`, `rpicam-jpeg` and `rpicam-vid`, specifies output resolution.
-----
- --width Capture image width
- --height Capture image height
-----
+For `rpicam-raw`, specifies raw frame resolution. For cameras with a 2×2 binned readout mode, specifying a resolution equal to or smaller than the binned mode captures 2×2 binned raw frames.
+
+For `rpicam-hello`, has no effect.
+
+Examples:
+
+* `rpicam-vid -o test.h264 --width 1920 --height 1080` captures 1080p video.
+
+* `rpicam-still -r -o test.jpg --width 2028 --height 1520` captures a 2028×1520 resolution JPEG. If used with the HQ camera, uses 2×2 binned mode, so the raw file (`test.dng`) contains a 2028×1520 raw Bayer image.
+
+==== `viewfinder-width` and `viewfinder-height`
+
+Each accepts a single number defining one dimension, in pixels, of the image displayed in the preview window. Does not affect the preview window dimensions, since images are resized to fit. Does not affect captured still images or videos.
-These numbers specify the output resolution of the camera images captured by `rpicam-still`, `rpicam-jpeg` and `rpicam-vid`.
+==== `mode`
-For `rpicam-raw`, it affects the size of the raw frames captured. Where a camera has a 2x2 binned readout mode, specifying a resolution not larger than this binned mode will result in the capture of 2x2 binned raw frames.
+Allows you to specify a camera mode in the following colon-separated format: `<width>:<height>:<bit-depth>:<packing>`. The system selects the closest available option for the sensor if there is not an exact match for a provided value. You can use the packed (`P`) or unpacked (`U`) packing formats. Impacts the format of stored videos and stills, but not the format of frames passed to the preview window.
-For `rpicam-hello` these parameters have no effect.
+Bit-depth and packing are optional.
+Bit-depth defaults to 12.
+Packing defaults to `P` (packed).
+
+For information about the bit-depth, resolution, and packing options available for your sensor, see xref:camera_software.adoc#list-cameras[`list-cameras`].
Examples:
-`rpicam-vid -o test.h264 --width 1920 --height 1080` will capture 1080p video.
+* `4056:3040:12:P` - 4056×3040 resolution, 12 bits per pixel, packed.
+* `1632:1224:10` - 1632×1224 resolution, 10 bits per pixel.
+* `2592:1944:10:U` - 2592×1944 resolution, 10 bits per pixel, unpacked.
+* `3264:2448` - 3264×2448 resolution.
-`rpicam-still -r -o test.jpg --width 2028 --height 1520` will capture a 2028x1520 resolution JPEG. When using the HQ camera the sensor will be driven in its 2x2 binned mode so the raw file - captured in `test.dng` - will contain a 2028x1520 raw Bayer image.
+===== Packed format details
-----
- --viewfinder-width Capture image width
- --viewfinder-height Capture image height
-----
+The packed format uses less storage for pixel data.
-These options affect only the preview (meaning both `rpicam-hello` and the preview phase of `rpicam-jpeg` and `rpicam-still`), and specify the image size that will be requested from the camera for the preview window. They have no effect on captured still images or videos. Nor do they affect the preview window as the images are resized to fit.
+_On Raspberry Pi 4 and earlier devices_, the packed format packs pixels using the MIPI CSI-2 standard. This means:
-Example: `rpicam-hello --viewfinder-width 640 --viewfinder-height 480`
+* 10-bit camera modes pack 4 pixels into 5 bytes. The first 4 bytes contain the 8 most significant bits (MSBs) of each pixel, and the final byte contains the 4 pairs of least significant bits (LSBs).
+* 12-bit camera modes pack 2 pixels into 3 bytes. The first 2 bytes contain the 8 most significant bits (MSBs) of each pixel, and the final byte contains the 4 least significant bits (LSBs) of both pixels.
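The 10-bit case can be sketched in Python (an illustrative helper, not part of `rpicam-apps`; the bit order in the fifth byte follows the MIPI CSI-2 RAW10 layout, with pixel 0 occupying the least significant bit pair):

```python
def pack_raw10(pixels):
    """Pack four 10-bit pixel values into 5 bytes (MIPI CSI-2 RAW10 sketch)."""
    assert len(pixels) == 4 and all(0 <= p < 1024 for p in pixels)
    msbs = [p >> 2 for p in pixels]       # first 4 bytes: top 8 bits of each pixel
    lsbs = 0
    for i, p in enumerate(pixels):
        lsbs |= (p & 0x3) << (2 * i)      # fifth byte: the four 2-bit LSB pairs
    return bytes(msbs + [lsbs])

packed = pack_raw10([1023, 0, 512, 3])
print(packed.hex())  # ff008000c3 - 4 pixels stored in 5 bytes
```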
-----
- --rawfull Force sensor to capture in full resolution mode
-----
+_On Raspberry Pi 5 and later devices_, the packed format compresses pixel values with a visually lossless compression scheme into 8 bits (1 byte) per pixel.
-This option forces the sensor to be driven in its full resolution readout mode for still and video capture, irrespective of the requested output resolution (given by `--width` and `--height`). It has no effect for `rpicam-hello`.
+===== Unpacked format details
-Using this option often incurs a frame rate penalty, as larger resolution frames are slower to read out.
+The unpacked format provides pixel values that are much easier to manually manipulate, at the expense of using more storage for pixel data.
-Example: `rpicam-raw -t 2000 --segment 1 --rawfull -o test%03d.raw` will cause multiple full resolution raw frames to be captured. On the HQ camera each frame will be about 18MB in size. Without the `--rawfull` option the default video output resolution would have caused the 2x2 binned mode to be selected, resulting in 4.5MB raw frames.
+On all devices, the unpacked format uses 2 bytes per pixel.
-----
- --mode Specify sensor mode, given as :::
-----
+_On Raspberry Pi 4 and earlier devices_, applications apply zero padding at the *most significant end*. In the unpacked format, a pixel from a 10-bit camera mode cannot exceed the value 1023.
-This option is more general than `--rawfull` and allows the precise selection of one of the camera modes. The mode should be specified by giving its width, height, bit-depth and packing, separated by colons. These numbers do not have to be exact as the system will select the closest it can find. Moreover, the bit-depth and packing are optional (defaulting to 12 and `P` for "packed" respectively). For example:
+_On Raspberry Pi 5 and later devices_, applications apply zero padding at the *least significant end*, so images use the full 16-bit dynamic range of the pixel depth delivered by the sensor.
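The difference between the two padding schemes can be sketched as follows (a hypothetical helper for illustration only):

```python
def unpacked_16bit(pixel, bit_depth, pad_least_significant):
    """Store a sensor pixel value in a 16-bit word, padding at either end."""
    if pad_least_significant:            # Pi 5 and later: zeros at the LSB end,
        return pixel << (16 - bit_depth) # so values span the full 16-bit range
    return pixel                         # Pi 4 and earlier: zeros at the MSB end

print(unpacked_16bit(1023, 10, False))  # 1023  (cannot exceed 1023)
print(unpacked_16bit(1023, 10, True))   # 65472 (full 16-bit dynamic range)
```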
-* `4056:3040:12:P` - 4056x3040 resolution, 12 bits per pixel, packed. This means that raw image buffers will be packed so that 2 pixel values occupy only 3 bytes.
-* `1632:1224:10` - 1632x1224 resolution, 10 bits per pixel. It will default to "packed". A 10-bit packed mode would store 4 pixels in every 5 bytes.
-* `2592:1944:10:U` - 2592x1944 resolution, 10 bits per pixel, unpacked. An unpacked format will store every pixel in 2 bytes, in this case with the top 6 bits of each value being zero.
-* `3264:2448` - 3264x2448 resolution. It will try to select the default 12-bit mode but in the case of the v2 camera there isn't one, so a 10-bit mode would be chosen instead.
+==== `viewfinder-mode`
-The `--mode` option affects the mode choice for video recording and stills capture. To control the mode choice during the preview phase prior to stills capture, please use the `--viewfinder-mode` option.
+Identical to the `mode` option, but it applies to the data passed to the preview window. For more information, see the xref:camera_software.adoc#mode[`mode` documentation].
-----
- --viewfinder-mode Specify sensor mode, given as :::
-----
+==== `lores-width` and `lores-height`
-This option is identical to the `--mode` option except that it applies only during the preview phase of stills capture (also used by the `rpicam-hello` application).
+Delivers a second, lower-resolution image stream from the camera, scaled down to the specified dimensions.
-----
- --lores-width Low resolution image width
- --lores-height Low resolution image height
-----
+Each accepts a single number defining the dimensions, in pixels, of the lower-resolution stream.
-`libcamera` allows the possibility of delivering a second lower resolution image stream from the camera system to the application. This stream is available in both the preview and the video modes (i.e. `rpicam-hello` and the preview phase of `rpicam-still`, and `rpicam-vid`), and can be used, among other things, for image analysis. For stills captures, the low resolution image stream is not available.
+Available for preview and video modes. Not available for still captures. If you specify an aspect ratio different from that of the normal resolution stream, the low resolution images are squashed so that the pixels are no longer square.
-The low resolution stream has the same field of view as the other image streams. If a different aspect ratio is specified for the low resolution stream, then those images will be squashed so that the pixels are no longer square.
+For `rpicam-vid`, disables extra colour-denoise processing.
-During video recording (`rpicam-vid`), specifying a low resolution stream will disable some extra colour denoise processing that would normally occur.
-Example: `rpicam-hello --lores-width 224 --lores-height 224`
+Useful for image analysis when combined with xref:camera_software.adoc#post-processing-with-rpicam-apps[image post-processing].
-Note that the low resolution stream is not particularly useful unless used in conjunction with xref:camera_software.adoc#post-processing[image post-processing].
+==== `hflip`
-----
- --hflip Read out with horizontal mirror
- --vflip Read out with vertical flip
- --rotation Use hflip and vflip to create the given rotation
-----
+Flips the image horizontally. Does not accept a value.
-These options affect the order of read-out from the sensor, and can be used to mirror the image horizontally, and/or flip it vertically. The `--rotation` option permits only the value 0 or 180, so note that 90 or 270 degree rotations are not supported. Moreover, `--rotation 180` is identical to `--hflip --vflip`.
+==== `vflip`
-Example: `rpicam-hello --vflip --hflip`
+Flips the image vertically. Does not accept a value.
-----
- --roi Select a crop (region of interest) from the camera
-----
+==== `rotation`
-The `--roi` (region of interest) option allows the user to select a particular crop from the full field of view provided by the sensor. The coordinates are specified as a proportion of the available field of view, so that `--roi 0,0,1,1` would have no effect at all.
+Rotates the image extracted from the sensor. Accepts only the values 0 or 180.
-The `--roi` parameter implements what is commonly referred to as "digital zoom".
+==== `roi`
-Example `rpicam-hello --roi 0.25,0.25,0.5,0.5` will select exactly a quarter of the total number of pixels cropped from the centre of the image.
+Crops the image extracted from the full field of the sensor. Accepts four decimal values, _ranged 0 to 1_, in the following format: `<x>,<y>,<w>,<h>`. Each of these values represents a proportion of the available width or height as a decimal between 0 and 1.
-----
- --hdr Run the camera in HDR mode
-----
+These values define the following proportions:
-The `--hdr` option causes the camera to be run in the HDR (High Dynamic Range) mode given by ``. On Pi 4 and earlier devices, this option only works for certain supported cameras, including the _Raspberry Pi Camera Module 3_, and on Pi 5 devices it can be used with all cameras. `` may take the following values:
+* `<x>`: X coordinates to skip before extracting an image
+* `<y>`: Y coordinates to skip before extracting an image
+* `<w>`: image width to extract
+* `<h>`: image height to extract
-* `off` - HDR is disabled. This is the default value if the `--hdr` option is omitted entirely.
-* `auto` - If the sensor supports HDR, then the on-sensor HDR mode is enabled. Otherwise, on Pi 5 devices, the Pi 5's on-chip HDR mode will be enabled. On a Pi 4 or earlier device, HDR will be disabled if the sensor does not support it. This mode will be applied if the `--hdr` option is supplied without a `` value.
-* `single-exp` - On a Pi 5, the on-chip HDR mode will be enabled, even if the sensor itself supports HDR. On earlier devices, HDR (even on-sensor HDR) will be disabled.
+Defaults to `0,0,1,1` (starts at the first X coordinate and the first Y coordinate, uses 100% of the image width, uses 100% of the image height).
-Example: `rpicam-still --hdr -o hdr.jpg` for capturing a still image, or `rpicam-vid --hdr -o hdr.h264` to capture a video.
+Examples:
-When sensors support on-sensor HDR, use of the that option may generally cause different camera modes to be available, and this can be checked by comparing the output of `rpicam-hello --list-cameras` with `rpicam-hello --hdr sensor --list-cameras`.
+* `rpicam-hello --roi 0.25,0.25,0.5,0.5` selects exactly a quarter of the total number of pixels cropped from the centre of the image (skips the first 25% of X coordinates, skips the first 25% of Y coordinates, uses 50% of the total image width, uses 50% of the total image height).
+* `rpicam-hello --roi 0,0,0.25,0.25` selects exactly a quarter of the total number of pixels cropped from the top left of the image (skips the first 0% of X coordinates, skips the first 0% of Y coordinates, uses 25% of the image width, uses 25% of the image height).
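As a worked illustration of these proportions (using a hypothetical helper, not part of `rpicam-apps`), the following converts a `roi` string into a pixel crop window for the 4056×3040 HQ camera sensor shown in the `list-cameras` example above:

```python
def roi_to_pixels(roi, sensor_width, sensor_height):
    """Convert a --roi proportion string into an (x, y, w, h) pixel crop window."""
    x, y, w, h = (float(v) for v in roi.split(","))
    return (int(x * sensor_width), int(y * sensor_height),
            int(w * sensor_width), int(h * sensor_height))

# Centre crop covering a quarter of the pixels on a 4056x3040 sensor:
print(roi_to_pixels("0.25,0.25,0.5,0.5", 4056, 3040))  # (1014, 760, 2028, 1520)
```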
-NOTE: For the _Raspberry Pi Camera Module 3_, the non-HDR modes include the usual full resolution (12MP) mode as well as its half resolution 2x2 binned (3MP) equivalent. In the case of HDR, only a single half resolution (3MP) mode is available, and it is not possible to switch between HDR and non-HDR modes without restarting the camera application.
+==== `hdr`
-==== Camera Control
+Default value: `off`
-The following options affect the image processing and control algorithms that affect the camera image quality.
+Runs the camera in HDR mode. If passed without a value, assumes `auto`. Accepts one of the following values:
-----
- --sharpness Set image sharpness
-----
+* `off` - Disables HDR.
+* `auto` - Enables HDR on supported devices. Uses the sensor's built-in HDR mode if available. If the sensor lacks a built-in HDR mode, uses on-board HDR mode, if available.
+* `single-exp` - Uses on-board HDR mode, if available, even if the sensor has a built-in HDR mode. If on-board HDR mode is not available, disables HDR.
-The given `` adjusts the image sharpness. The value zero means that no sharpening is applied, the value 1.0 uses the default amount of sharpening, and values greater than 1.0 use extra sharpening.
+Raspberry Pi 5 and later devices have an on-board HDR mode.
-Example: `rpicam-still -o test.jpg --sharpness 2.0`
+To check for built-in HDR modes in a sensor, pass this option in addition to xref:camera_software.adoc#list-cameras[`list-cameras`].
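+
+For example, to capture a still image or a video in HDR mode:
+
+[source,console]
+----
+$ rpicam-still --hdr -o hdr.jpg
+$ rpicam-vid --hdr -o hdr.h264
+----
+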
-----
- --contrast Set image contrast
-----
+=== Camera control options
-The given `` adjusts the image contrast. The value zero produces minimum contrast, the value 1.0 uses the default amount of contrast, and values greater than 1.0 apply extra contrast.
+The following options control image processing and algorithms that affect camera image quality.
-Example: `rpicam-still -o test.jpg --contrast 1.5`
+==== `sharpness`
-----
- --brightness Set image brightness
-----
+Sets image sharpness. Accepts a numeric value along the following spectrum:
-The given `` adjusts the image brightness. The value -1.0 produces an (almost) black image, the value 1.0 produces an almost entirely white image and the value 0.0 produces standard image brightness.
+* `0.0` applies no sharpening
+* values between `0.0` and `1.0` apply less than the default amount of sharpening
+* `1.0` applies the default amount of sharpening
+* values greater than `1.0` apply extra sharpening
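+
+For example, to capture a still image with twice the default amount of sharpening:
+
+[source,console]
+----
+$ rpicam-still -o test.jpg --sharpness 2.0
+----
+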
-Note that the brightness parameter adds (or subtracts) an offset from all pixels in the output image. The `--ev` option is often more appropriate.
+==== `contrast`
-Example: `rpicam-still -o test.jpg --brightness 0.2`
+Specifies the image contrast. Accepts a numeric value along the following spectrum:
-----
- --saturation Set image colour saturation
-----
+* `0.0` applies minimum contrast
+* values between `0.0` and `1.0` apply less than the default amount of contrast
+* `1.0` applies the default amount of contrast
+* values greater than `1.0` apply extra contrast
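+
+For example, to capture a still image with extra contrast:
+
+[source,console]
+----
+$ rpicam-still -o test.jpg --contrast 1.5
+----
+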
-The given `` adjusts the colour saturation. The value zero produces a greyscale image, the value 1.0 uses the default amount of sautration, and values greater than 1.0 apply extra colour saturation.
-Example: `rpicam-still -o test.jpg --saturation 0.8`
+==== `brightness`
-----
- --ev Set EV compensation
-----
+Specifies the image brightness, added as an offset to all pixels in the output image. Accepts a numeric value along the following spectrum:
-Sets the EV compensation of the image in units of _stops_, in the range -10 to 10. Default is 0. It works by raising or lowering the target values the AEC/AGC algorithm is attempting to match.
+* `-1.0` applies minimum brightness (black)
+* `0.0` applies standard brightness
+* `1.0` applies maximum brightness (white)
-Example: `rpicam-still -o test.jpg --ev 0.3`
+For many use cases, prefer xref:camera_software.adoc#ev[`ev`].
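+
+For example, to capture a slightly brightened still image:
+
+[source,console]
+----
+$ rpicam-still -o test.jpg --brightness 0.2
+----
+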
-----
- --shutter Set the exposure time in microseconds
-----
+==== `saturation`
-The shutter time is fixed to the given value. The gain will still be allowed to vary (unless that is also fixed).
+Specifies the image colour saturation. Accepts a numeric value along the following spectrum:
-Note that this shutter time may not be achieved if the camera is running at a frame rate that is too fast to allow it. In this case the `--framerate` option may be used to lower the frame rate. The maximum possible shutter times for the official Raspberry Pi supported can be found xref:../accessories/camera.adoc#hardware-specification[in this table].
+* `0.0` applies minimum saturation (grayscale)
+* values between `0.0` and `1.0` apply less than the default amount of saturation
+* `1.0` applies the default amount of saturation
+* values greater than `1.0` apply extra saturation
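+
+For example, to capture a slightly desaturated still image:
+
+[source,console]
+----
+$ rpicam-still -o test.jpg --saturation 0.8
+----
+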
-Using values above these maximums will result in undefined behaviour. Cameras will also have different minimum shutter times, though in practice this is not important as they are all low enough to expose bright scenes appropriately.
+==== `ev`
-Example: `rpicam-hello --shutter 30000`
+Specifies the https://en.wikipedia.org/wiki/Exposure_value[exposure value (EV)] compensation of the image in stops. Accepts a numeric value that controls target values passed to the Automatic Exposure/Gain Control (AEC/AGC) processing algorithm along the following spectrum:
-----
- --gain Sets the combined analogue and digital gains
- --analoggain Synonym for --gain
-----
+* `-10.0` applies minimum target values
+* `0.0` applies standard target values
+* `10.0` applies maximum target values
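+
+For example, to capture a still image exposed 0.3 stops above the standard target:
+
+[source,console]
+----
+$ rpicam-still -o test.jpg --ev 0.3
+----
+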
-These two options are actually identical, and set the combined analogue and digital gains that will be used. The `--analoggain` form is permitted so as to be more compatible with the legacy `raspicam` applications. Where the requested gain can be supplied by the sensor driver, then only analogue gain will be used. Once the analogue gain reaches the maximum permitted value, then extra gain beyond this will be supplied as digital gain.
+==== `shutter`
-Note that there are circumstances where the digital gain can go above 1 even when the analogue gain limit is not exceeded. This can occur when
+Specifies the exposure time in _microseconds_. Gain can still vary when you use this option. If the camera runs at a framerate too fast to allow the specified exposure time (for instance, a framerate of 100fps combined with an exposure time of 20,000 microseconds), the sensor will instead use the maximum exposure time allowed by the framerate.
-* Either of the colour gains goes below 1.0, which will cause the digital gain to settle to 1.0/min(red_gain,blue_gain). This means that the total digital gain being applied to any colour channel does not go below 1.0, as that would cause discolouration artifacts.
-* The digital gain can vary slightly while the AEC/AGC changes, though this effect should be only transient.
+For a list of minimum and maximum shutter times for official cameras, see the xref:../accessories/camera.adoc#hardware-specification[camera hardware documentation]. Values above the maximum result in undefined behaviour.
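+
+For example, to preview with a fixed 30ms (30,000 microsecond) exposure time:
+
+[source,console]
+----
+$ rpicam-hello --shutter 30000
+----
+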
-----
- --metering Set the metering mode
-----
+==== `gain`
+
+Alias: `--analoggain`
+
+Sets the combined analogue and digital gain. When the sensor driver can provide the requested gain, only uses analogue gain. When analogue gain reaches the maximum value, the ISP applies digital gain. Accepts a numeric value.
+
+For a list of analogue gain limits, for official cameras, see the xref:../accessories/camera.adoc#hardware-specification[camera hardware documentation].
+
+Sometimes, digital gain can exceed 1.0 even when the analogue gain limit is not exceeded. This can occur in the following situations:
+
+* Either of the colour gains drops below 1.0, which will cause the digital gain to settle to 1.0/min(red_gain,blue_gain). This keeps the total digital gain applied to any colour channel above 1.0 to avoid discolouration artefacts.
+* Slight variances during Automatic Exposure/Gain Control (AEC/AGC) changes.
-Sets the metering mode of the AEC/AGC algorithm. This may one of the following values
+==== `metering`
-* `centre` - centre weighted metering (which is the default)
+Default value: `centre`
+
+Sets the metering mode of the Automatic Exposure/Gain Control (AEC/AGC) algorithm. Accepts the following values:
+
+* `centre` - centre weighted metering
* `spot` - spot metering
* `average` - average or whole frame metering
-* `custom` - custom metering mode which would have to be defined in the camera tuning file.
+* `custom` - custom metering mode defined in the camera tuning file
-For more information on defining a custom metering mode, and also on how to adjust the region weights in the existing metering modes, please refer to the https://datasheets.raspberrypi.com/camera/raspberry-pi-camera-guide.pdf[Tuning guide for the Raspberry Pi cameras and libcamera].
+For more information on defining a custom metering mode, and adjusting region weights in existing metering modes, see the https://datasheets.raspberrypi.com/camera/raspberry-pi-camera-guide.pdf[Tuning guide for the Raspberry Pi cameras and libcamera].
-Example: `rpicam-still -o test.jpg --metering spot`
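+
+For example, to capture a still image using spot metering:
+
+[source,console]
+----
+$ rpicam-still -o test.jpg --metering spot
+----
+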
-
-----
- --exposure Set the exposure profile
-----
+==== `exposure`
-The exposure profile may be either `normal`, `sport` or `long`. Changing the exposure profile should not affect the overall exposure of an image, but the `sport` mode will tend to prefer shorter exposure times and larger gains to achieve the same net result.
+Sets the exposure profile. Changing the exposure profile should not affect the image exposure. Instead, different modes adjust gain settings to achieve the same net result. Accepts the following values:
-Exposure profiles can be edited in the camera tuning file. Please refer to the https://datasheets.raspberrypi.com/camera/raspberry-pi-camera-guide.pdf[Tuning guide for the Raspberry Pi cameras and libcamera] for more information.
+* `sport`: short exposure, larger gains
+* `normal`: normal exposure, normal gains
+* `long`: long exposure, smaller gains
-Example: `rpicam-still -o test.jpg --exposure sport`
+You can edit exposure profiles using tuning files. For more information, see the https://datasheets.raspberrypi.com/camera/raspberry-pi-camera-guide.pdf[Tuning guide for the Raspberry Pi cameras and libcamera].
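+
+For example, to capture a still image with the `sport` exposure profile:
+
+[source,console]
+----
+$ rpicam-still -o test.jpg --exposure sport
+----
+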
-----
- --awb Set the AWB mode
-----
+==== `awb`
-This option sets the AWB algorithm into the named AWB mode. Valid modes are:
+Sets the Auto White Balance (AWB) mode. Accepts the following values:
|===
-| Mode name | Colour temperature
+| Mode name | Colour temperature range
-| auto
+| `auto`
| 2500K to 8000K
-| incandescent
+| `incandescent`
| 2500K to 3000K
-| tungsten
+| `tungsten`
| 3000K to 3500K
-| fluorescent
+| `fluorescent`
| 4000K to 4700K
-| indoor
+| `indoor`
| 3000K to 5000K
-| daylight
+| `daylight`
| 5500K to 6500K
-| cloudy
+| `cloudy`
| 7000K to 8500K
-| custom
-| A custom range would have to be defined in the camera tuning file.
+| `custom`
+| A custom range defined in the tuning file.
|===
-There is no mode that turns the AWB off, instead fixed colour gains should be specified with the `--awbgains` option.
+These values are only approximate; the actual values can vary according to the camera tuning.
-Note that these values are only approximate, the values could vary according to the camera tuning.
+No mode fully disables AWB. Instead, you can fix colour gains with xref:camera_software.adoc#awbgains[`awbgains`].
-For more information on AWB modes and how to define a custom one, please refer to the https://datasheets.raspberrypi.com/camera/raspberry-pi-camera-guide.pdf[Tuning guide for the Raspberry Pi cameras and libcamera].
+For more information on AWB modes, including how to define a custom one, see the https://datasheets.raspberrypi.com/camera/raspberry-pi-camera-guide.pdf[Tuning guide for the Raspberry Pi cameras and libcamera].
-Example: `rpicam-still -o test.jpg --awb tungsten`
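+
+For example, to capture a still image with the `tungsten` AWB mode:
+
+[source,console]
+----
+$ rpicam-still -o test.jpg --awb tungsten
+----
+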
+==== `awbgains`
-----
- --awbgains Set fixed colour gains
-----
+Sets a fixed red and blue gain value to use instead of the Auto White Balance (AWB) algorithm. Setting non-zero values for both gains disables AWB. Accepts comma-separated numeric input in the following format: `<red_gain>,<blue_gain>`
-This option accepts a red and a blue gain value and uses them directly in place of running the AWB algorithm. Setting non-zero values here has the effect of disabling the AWB calculation.
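+
+For example, to capture a still image with fixed red and blue gains, disabling AWB:
+
+[source,console]
+----
+$ rpicam-still -o test.jpg --awbgains 1.5,2.0
+----
+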
+==== `denoise`
-Example: `rpicam-still -o test.jpg --awbgains 1.5,2.0`
+Default value: `auto`
-----
- --denoise Set the denoising mode
-----
+Sets the denoising mode. Accepts the following values:
-The following denoise modes are supported:
+* `auto`: Enables standard spatial denoise. Uses extra-fast colour denoise for video, and high-quality colour denoise for images. Applies no extra colour denoise in the preview window.
-* `auto` - This is the default. It always enables standard spatial denoise. It uses extra fast colour denoise for video, and high quality colour denoise for stills capture. Preview does not enable any extra colour denoise at all.
+* `off`: Disables spatial and colour denoise.
-* `off` - Disables spatial and colour denoise.
+* `cdn_off`: Disables colour denoise.
-* `cdn_off` - Disables colour denoise.
+* `cdn_fast`: Uses fast colour denoise.
-* `cdn_fast` - Uses fast colour denoise.
+* `cdn_hq`: Uses high-quality colour denoise. Not appropriate for video/viewfinder due to reduced throughput.
-* `cdn_hq` - Uses high quality colour denoise. Not appropriate for video/viewfinder due to reduced throughput.
+Even fast colour denoise can lower framerates. High quality colour denoise _significantly_ lowers framerates.
-Note that even the use of fast colour denoise can result in lower framerates. The high quality colour denoise will normally result in much lower framerates.
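+
+For example, to record video with colour denoise disabled:
+
+[source,console]
+----
+$ rpicam-vid -o test.h264 --denoise cdn_off
+----
+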
+==== `tuning-file`
-Example: `rpicam-vid -o test.h264 --denoise cdn_off`
+Specifies the camera tuning file. The tuning file allows you to control many aspects of image processing, including the Automatic Exposure/Gain Control (AEC/AGC), Auto White Balance (AWB), colour shading correction, colour processing, denoising and more. Accepts a tuning file path as input.
-----
- --tuning-file Specify the camera tuning to use
-----
+For more information about tuning files, see xref:camera_software.adoc#tuning-files[Tuning Files].
-This identifies the name of the JSON format tuning file that should be used. The tuning file covers many aspects of the image processing, including the AEC/AGC, AWB, colour shading correction, colour processing, denoising and so forth.
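+
+For example, to preview using a custom tuning file (the path shown is illustrative):
+
+[source,console]
+----
+$ rpicam-hello --tuning-file ~/my-camera-tuning.json
+----
+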
+==== `autofocus-mode`
-For more information on the camera tuning file, please consult the https://datasheets.raspberrypi.com/camera/raspberry-pi-camera-guide.pdf[Tuning guide for the Raspberry Pi cameras and libcamera].
+Default value: `default`
-Example: `rpicam-hello --tuning-file ~/my-camera-tuning.json`
+Specifies the autofocus mode. Accepts the following values:
-----
- --autofocus-mode Specify the autofocus mode
-----
+* `default`: puts the camera into continuous autofocus mode unless xref:camera_software.adoc#lens-position[`lens-position`] or xref:camera_software.adoc#autofocus-on-capture[`autofocus-on-capture`] override the mode to manual
+* `manual`: does not move the lens at all unless manually configured with xref:camera_software.adoc#lens-position[`lens-position`]
+* `auto`: only moves the lens for an autofocus sweep when the camera starts or just before capture if xref:camera_software.adoc#autofocus-on-capture[`autofocus-on-capture`] is also used
+* `continuous`: adjusts the lens position automatically as the scene changes
-Specifies the autofocus mode to use, which may be one of
+This option is only supported for certain camera modules.
-* `default` (also the default if the option is omitted) - normally puts the camera into continuous autofocus mode, except if either `--lens-position` or `--autofocus-on-capture` is given, in which case manual mode is chosen instead
-* `manual` - do not move the lens at all, but it can be set with the `--lens-position` option
-* `auto` - does not move the lens except for an autofocus sweep when the camera starts (and for `rpicam-still`, just before capture if `--autofocus-on-capture` is given)
-* `continuous` - adjusts the lens position automatically as the scene changes.
+==== `autofocus-range`
-This option is only supported for certain camera modules (such as the _Raspberry Pi Camera Module 3_).
+Default value: `normal`
-----
- --autofocus-range Specify the autofocus range
-----
+Specifies the autofocus range. Accepts the following values:
-Specifies the autofocus range, which may be one of
+* `normal`: focuses from reasonably close to infinity
+* `macro`: focuses only on close objects, including the closest focal distances supported by the camera
+* `full`: focus on the entire range, from the very closest objects to infinity
-* `normal` (the default) - focuses from reasonably close to infinity
-* `macro` - focuses only on close objects, including the closest focal distances supported by the camera
-* `full` - will focus on the entire range, from the very closest objects to infinity.
+This option is only supported for certain camera modules.
-This option is only supported for certain camera modules (such as the _Raspberry Pi Camera Module 3_).
+==== `autofocus-speed`
-----
- --autofocus-speed Specify the autofocus speed
-----
+Default value: `normal`
-Specifies the autofocus speed, which may be one of
+Specifies the autofocus speed. Accepts the following values:
-* `normal` (the default) - the lens position will change at the normal speed
-* `fast` - the lens position may change more quickly.
+* `normal`: changes the lens position at normal speed
+* `fast`: changes the lens position quickly
-This option is only supported for certain camera modules (such as the _Raspberry Pi Camera Module 3_).
+This option is only supported for certain camera modules.
-----
- --autofocus-window Specify the autofocus window
-----
+==== `autofocus-window`
-Specifies the autofocus window, in the form `x,y,width,height` where the coordinates are given as a proportion of the entire image. For example, `--autofocus-window 0.25,0.25,0.5,0.5` would choose a window that is half the size of the output image in each dimension, and centred in the middle.
+Specifies the autofocus window within the full field of the sensor. Accepts four decimal values, _ranged 0 to 1_, in the following format: `<x>,<y>,<w>,<h>`. Each value represents a proportion of the available width or height, as a decimal between 0 and 1.
-The default value causes the algorithm to use the middle third of the output image in both dimensions (so 1/9 of the total image area).
+These values define the following proportions:
-This option is only supported for certain camera modules (such as the _Raspberry Pi Camera Module 3_).
+* `<x>`: X coordinates to skip before applying autofocus
+* `<y>`: Y coordinates to skip before applying autofocus
+* `<w>`: autofocus area width
+* `<h>`: autofocus area height
-----
- --lens-position Set the lens to a given position
-----
+The default value uses the middle third of the output image in both dimensions (1/9 of the total image area).
-Moves the lens to a fixed focal distance, normally given in dioptres (units of 1 / _distance in metres_). We have
+Examples:
-* 0.0 will move the lens to the "infinity" position
-* Any other `number`: move the lens to the 1 / `number` position, so the value 2 would focus at approximately 0.5m
-* `default` - move the lens to a default position which corresponds to the hyperfocal position of the lens.
+* `rpicam-hello --autofocus-window 0.25,0.25,0.5,0.5` selects a window half the width and half the height of the image, centred in the middle (a quarter of the total image area).
+* `rpicam-hello --autofocus-window 0,0,0.25,0.25` selects a window a quarter of the width and a quarter of the height of the image, in the top left corner (one sixteenth of the total image area).
-It should be noted that lenses can only be expected to be calibrated approximately, and there may well be variation between different camera modules.
+This option is only supported for certain camera modules.
-This option is only supported for certain camera modules (such as the _Raspberry Pi Camera Module 3_).
+==== `lens-position`
+Default value: `default`
-==== Output File Options
+Moves the lens to a fixed focal distance, normally given in dioptres (units of 1 / _distance in metres_). Accepts the following spectrum of values:
-----
- --output, -o Output file name
-----
+* `0.0`: moves the lens to the "infinity" position
+* Any other `number`: moves the lens to the 1 / `number` position. For example, the value `2.0` would focus at approximately 0.5m
+* `default`: moves the lens to a default position corresponding to the hyperfocal position of the lens
+
+Lens calibration is imperfect, so different camera modules of the same model may vary.
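+
+For example, to focus at approximately 0.5m (a dioptre value of 2.0, since 1 / 0.5 = 2.0):
+
+[source,console]
+----
+$ rpicam-still -o test.jpg --lens-position 2.0
+----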
+
+==== `verbose`
+
+Alias: `-v`
+
+Default value: `1`
+
+Sets the verbosity level. Accepts the following values:
+
+* `0`: no output
+* `1`: normal output
+* `2`: verbose output
-`--output` sets the name of the output file to which the output image or video is written. Besides regular file names, this may take the following special values:
+=== Output file options
-* `-` - write to stdout
-* `udp://` - a string starting with this is taken as a network address for streaming
-* `tcp://` - a string starting with this is taken as a network address for streaming
-* a string containing a `%d` directive is taken as a file name where the format directive is replaced with a count that increments for each file that is opened. Standard C format directive modifiers are permitted.
+==== `output`
+
+Alias: `-o`
+
+Sets the name of the file used to record images or video. Besides plaintext file names, accepts the following special values:
+
+* `-`: write to stdout.
+* `udp://` (prefix): a network address for UDP streaming.
+* `tcp://` (prefix): a network address for TCP streaming.
+* Include the `%d` directive in the file name to replace the directive with a count that increments for each opened file. This directive supports standard C format directive modifiers.
Examples:
-`rpicam-vid -t 100000 --segment 10000 -o chunk%04d.h264` records a 100 second file in 10 second segments, where each file is named `chunk.h264` but with the inclusion of an incrementing counter. Note that `%04d` writes the count to a string, but padded up to a total width of at least 4 characters by adding leading zeroes.
+* `rpicam-vid -t 100000 --segment 10000 -o chunk%04d.h264` records a 100 second file in 10 second segments, where each file includes an incrementing four-digit counter padded with leading zeros: e.g. `chunk0001.h264`, `chunk0002.h264`, etc.
-`rpicam-vid -t 0 --inline -o udp://192.168.1.13:5000` stream H.264 video to network address 192.168.1.13 on port 5000.
+* `rpicam-vid -t 0 --inline -o udp://192.168.1.13:5000` streams H.264 video to network address 192.168.1.13 using UDP on port 5000.
-----
- --wrap Wrap output file counter at
-----
+==== `wrap`
-When outputting to files with an incrementing counter (e.g. `%d` in the output file name), wrap the counter back to zero when it reaches this value.
+Sets a maximum value for the counter used by the xref:camera_software.adoc#output[`output`] `%d` directive. The counter resets to zero after reaching this value. Accepts a numeric value.
-Example: `rpicam-vid -t 0 --codec mjpeg --segment 1 --wrap 100 -o image%d.jpg`
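+
+For example, to record an endless series of JPEG segments, reusing the file names `image0.jpg` to `image99.jpg`:
+
+[source,console]
+----
+$ rpicam-vid -t 0 --codec mjpeg --segment 1 --wrap 100 -o image%d.jpg
+----
+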
+==== `flush`
-----
- --flush Flush output files immediately
-----
+Flushes output files to disk as soon as a frame finishes writing, instead of waiting for the system to handle it. Does not accept a value.
-`--flush` causes output files to be flushed to disk as soon as every frame is written, rather than waiting for the system to do it.
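+
+For example, to record 10 seconds of video, flushing each frame to disk as soon as it is written:
+
+[source,console]
+----
+$ rpicam-vid -t 10000 --flush -o test.h264
+----
+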
+==== `post-process-file`
-Example: `rpicam-vid -t 10000 --flush -o test.h264`
+Specifies a JSON file that configures the post-processing applied by the imaging pipeline. This applies to camera images _before_ they reach the application. This works similarly to the legacy `raspicam` "image effects". Accepts a file path as input.
-==== Post Processing Options
+Post-processing is a large topic and admits the use of third-party software like OpenCV and TensorFlowLite to analyse and manipulate images. For more information, see xref:camera_software.adoc#post-processing-with-rpicam-apps[post-processing].
-The `--post-process-file` option specifies a JSON file that configures the post-processing that the imaging pipeline applies to camera images before they reach the application. It can be thought of as a replacement for the legacy `raspicam` "image effects".
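+
+For example, with a suitably configured `negate.json`, the following applies a negate effect to the image:
+
+[source,console]
+----
+$ rpicam-hello --post-process-file negate.json
+----
+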
+==== `buffer-count`
-Post-processing is a large topic and admits the use of 3rd party software like OpenCV and TensorFlowLite to analyse and manipulate images. For more information, please refer to the section on xref:camera_software.adoc#post-processing[post-processing].
+Sets the number of buffers to allocate for still image capture or video recording. The default value of `0` lets each application choose a reasonable number for its own use case (1 for still image capture, 6 for video recording). Increasing this number can sometimes reduce frame drops, particularly at higher framerates. Accepts a numeric value.
-Example: `rpicam-hello --post-process-file negate.json`
+==== `viewfinder-buffer-count`
-This might apply a "negate" effect to an image, if the file `negate.json` is appropriately configured.
+Like the `buffer-count` option, but applies when running in preview mode (that is, in `rpicam-hello`, or during the preview phase of `rpicam-still` rather than capture).
diff --git a/documentation/asciidoc/computers/camera/rpicam_options_detect.adoc b/documentation/asciidoc/computers/camera/rpicam_options_detect.adoc
new file mode 100644
index 0000000000..298116505c
--- /dev/null
+++ b/documentation/asciidoc/computers/camera/rpicam_options_detect.adoc
@@ -0,0 +1,15 @@
+=== Detection options
+
+The command line options specified in this section apply only to object detection using `rpicam-detect`.
+
+To pass one of the following options to `rpicam-detect`, prefix the option name with `--`. If the option requires a value, pass the value immediately after the option name, separated by a single space. If the value contains a space, surround the value in quotes.
+
+Some options have shorthand aliases, for example `-h` instead of `--help`. Use these shorthand aliases instead of the full option name to save space and time at the expense of readability.
+
+==== `object`
+
+Detects objects with the given name, sourced from the model's label file. Accepts an object name as plaintext input.
+
+==== `gap`
+
+Waits at least this many frames between captures. Accepts numeric values.
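+
+For example, the following sketch (the object label and output pattern are illustrative) captures a numbered still whenever a cat is detected, waiting at least 30 frames between captures:
+
+[source,console]
+----
+$ rpicam-detect -t 0 -o cat%04d.jpg --object cat --gap 30
+----
+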
diff --git a/documentation/asciidoc/computers/camera/rpicam_options_libav.adoc b/documentation/asciidoc/computers/camera/rpicam_options_libav.adoc
new file mode 100644
index 0000000000..3b1f2ce199
--- /dev/null
+++ b/documentation/asciidoc/computers/camera/rpicam_options_libav.adoc
@@ -0,0 +1,65 @@
+=== `libav` options
+
+The command line options specified in this section apply only to the `libav` video backend.
+
+To enable the `libav` backend, pass the value `libav` to the xref:camera_software.adoc#codec[`codec`] option.
+
+To pass one of the following options to an application, prefix the option name with `--`. If the option requires a value, pass the value immediately after the option name, separated by a single space. If the value contains a space, surround the value in quotes.
+
+Some options have shorthand aliases, for example `-h` instead of `--help`. Use these shorthand aliases instead of the full option name to save space and time at the expense of readability.
+
+==== `libav-format`
+
+Sets the `libav` output format. Accepts the following values:
+
+* `mkv` encoding
+* `mp4` encoding
+* `avi` encoding
+* `h264` streaming
+* `mpegts` streaming
+
+If you do not provide this option, the file extension passed to the xref:camera_software.adoc#output[`output`] option determines the file format.
+
+==== `libav-audio`
+
+Enables audio recording. When enabled, you must also specify an xref:camera_software.adoc#audio-codec[`audio-codec`]. Does not accept a value.
+
+==== `audio-codec`
+
+Default value: `aac`
+
+Selects an audio codec for output. For a list of available codecs, run `ffmpeg -codecs`.
+
+==== `audio-bitrate`
+
+Sets the bitrate for audio encoding in bits per second. Accepts numeric input.
+
+Example: `rpicam-vid --codec libav -o test.mp4 --audio-codec mp2 --audio-bitrate 16384` (records audio at 16 kilobits/sec with the mp2 codec)
+
+==== `audio-samplerate`
+
+Default value: `0`
+
+Sets the audio sampling rate in Hz. Accepts numeric input. `0` uses the input sample rate.
+
+==== `audio-device`
+
+Selects an ALSA input device to use for audio recording. For a list of available devices, run the following command:
+
+[source,console]
+----
+$ pactl list | grep -A2 'Source #' | grep 'Name: '
+----
+
+You should see output similar to the following:
+
+----
+Name: alsa_output.platform-bcm2835_audio.analog-stereo.monitor
+Name: alsa_output.platform-fef00700.hdmi.hdmi-stereo.monitor
+Name: alsa_output.usb-GN_Netcom_A_S_Jabra_EVOLVE_LINK_000736B1214E0A-00.analog-stereo.monitor
+Name: alsa_input.usb-GN_Netcom_A_S_Jabra_EVOLVE_LINK_000736B1214E0A-00.mono-fallback
+----
+
+==== `av-sync`
+
+Shifts the audio sample timestamp by a value in microseconds. Accepts positive and negative numeric values.
diff --git a/documentation/asciidoc/computers/camera/rpicam_options_still.adoc b/documentation/asciidoc/computers/camera/rpicam_options_still.adoc
index fddb156251..4e20880dc7 100644
--- a/documentation/asciidoc/computers/camera/rpicam_options_still.adoc
+++ b/documentation/asciidoc/computers/camera/rpicam_options_still.adoc
@@ -1,148 +1,126 @@
-=== Still Command Line Options
+=== Image options
-----
- --quality, -q JPEG quality
-----
+The command line options specified in this section apply only to still image output.
-Set the JPEG quality. 100 is maximum quality and 93 is the default. Only applies when saving JPEG files.
+To pass one of the following options to an application, prefix the option name with `--`. If the option requires a value, pass the value immediately after the option name, separated by a single space. If the value contains a space, surround the value in quotes.
-Example: `rpicam-jpeg -o test.jpg -q 80`
+Some options have shorthand aliases, for example `-h` instead of `--help`. Use these shorthand aliases instead of the full option name to save space and time at the expense of readability.
-----
- --exif, -x Add extra EXIF tags
-----
+==== `quality`
-The given extra EXIF tags are saved in the JPEG file. Only applies when saving JPEG files.
+Alias: `-q`
-EXIF is supported using the `libexif` library and so there are some associated limitations. In particular, `libexif` seems to recognise a number of tags but without knowing the correct format for them. The software will currently treat these (incorrectly, in many cases) as ASCII, but will print a warning to the terminal. As we come across these they can be added to the table of known exceptions in the software.
+Default value: `93`
-Clearly the application needs to supply EXIF tags that contain specific camera data (like the exposure time). But for other tags that have nothing to do with the camera, a reasonable workaround would simply be to add them _post facto_, using something like `exiftool`.
+Sets the JPEG quality. Accepts a value between `1` and `100`.
-Example: `rpicam-still -o test.jpg --exif IDO0.Artist=Someone`
+==== `exif`
+
+Saves extra EXIF tags in the JPEG output file. Only applies to JPEG output. Because of limitations in the `libexif` library, many tags are currently (incorrectly) formatted as ASCII and print a warning in the terminal.
-----
- --timelapse Time interval between timelapse captures
-----
+This option is necessary to add certain EXIF tags related to camera settings. You can add tags unrelated to camera settings to the output JPEG after recording with https://exiftool.org/[ExifTool].
+
+Example: `rpicam-still -o test.jpg --exif IDO0.Artist=Someone`
-This puts `rpicam-still` into timelapse mode where it runs according to the timeout (`--timeout` or `-t`) that has been set, and for that period will capture repeated images at the interval specified here. (`rpicam-still` only.)
+==== `timelapse`
-Example: `rpicam-still -t 100000 -o test%d.jpg --timelapse 10000` captures an image every 10s for about 100s.
+Records images at the specified interval. Accepts an interval in milliseconds. Combine this setting with xref:camera_software.adoc#timeout[`timeout`] to capture repeated images over time.
-----
- --framestart The starting value for the frame counter
-----
+You can specify separate filenames for each output file using string formatting, e.g. `--output test%d.jpg`.
-When writing counter values into the output file name, this specifies the starting value for the counter.
+Example: `rpicam-still -t 100000 -o test%d.jpg --timelapse 10000` captures an image every 10 seconds for 100 seconds.
-Example: `rpicam-still -t 100000 -o test%d.jpg --timelapse 10000 --framestart 1` captures an image every 10s for about 100s, starting at 1 rather than 0. (`rpicam-still` only.)
+==== `framestart`
-----
- --datetime Use date format for the output file names
-----
+Configures a starting value for the frame counter accessed in output file names as `%d`. Accepts an integer starting value.
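+
+For example, to start the frame counter at `1` instead of `0` during a timelapse:
+
+[source,console]
+----
+$ rpicam-still -t 100000 -o test%d.jpg --timelapse 10000 --framestart 1
+----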
-Use the current date and time to construct the output file name, in the form MMDDhhmmss.jpg, where MM = 2-digit month number, DD = 2-digit day number, hh = 2-digit 24-hour hour number, mm = 2-digit minute number, ss = 2-digit second number. (`rpicam-still` only.)
+==== `datetime`
-Example: `rpicam-still --datetime`
+Uses the current date and time in the output file name, in the form `MMDDhhmmss.jpg`:
-----
- --timestamp Use system timestamps for the output file names
-----
+* `MM` = 2-digit month number
+* `DD` = 2-digit day number
+* `hh` = 2-digit 24-hour hour number
+* `mm` = 2-digit minute number
+* `ss` = 2-digit second number
-Uses the current system timestamp (the number of seconds since the start of 1970) as the output file name. (`rpicam-still` only.)
+Does not accept a value.
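+
+Example:
+
+[source,console]
+----
+$ rpicam-still --datetime
+----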
-Example: `rpicam-still --timestamp`
+==== `timestamp`
-----
- --restart Set the JPEG restart interval
-----
+Uses the current system https://en.wikipedia.org/wiki/Unix_time[Unix time] as the output file name. Does not accept a value.
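+
+Example:
+
+[source,console]
+----
+$ rpicam-still --timestamp
+----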
-Sets the JPEG restart interval to the given value. Default is zero.
+==== `restart`
-Example: `rpicam-still -o test.jpg --restart 20`
+Default value: `0`
-----
- --keypress, -k Capture image when Enter pressed
-----
+Configures the restart marker interval for JPEG output. JPEG restart markers can help limit the impact of corruption on JPEG images, and additionally enable the use of multi-threaded JPEG encoding and decoding. Accepts an integer value.
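+
+For example, to set the restart marker interval to `20`:
+
+[source,console]
+----
+$ rpicam-still -o test.jpg --restart 20
+----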
-This switches `rpicam-still` into keypress mode. It will capture a still image either when the timeout expires or the Enter key is pressed in the terminal window. Typing `x` and Enter causes `rpicam-still` to quit without capturing.
+==== `immediate`
-Example: `rpicam-still -t 0 -o test.jpg -k`
+Captures the image immediately when the application runs.
-----
- --signal, -s Capture image when SIGUSR1 received
-----
+==== `keypress`
-This switches `rpicam-still` into signal mode. It will capture a still image either when the timeout expires or a SIGUSR1 is received. SIGUSR2 will cause `rpicam-still` to quit without capturing.
+Alias: `-k`
-Example:
+Captures an image when the xref:camera_software.adoc#timeout[`timeout`] expires or on press of the *Enter* key, whichever comes first. Press the `x` key, then *Enter* to exit without capturing. Does not accept a value.
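+
+For example, to wait indefinitely for a keypress before capturing:
+
+[source,console]
+----
+$ rpicam-still -t 0 -o test.jpg -k
+----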
-`rpicam-still -t 0 -o test.jpg -s &`
+==== `signal`
-then
+Captures an image when the xref:camera_software.adoc#timeout[`timeout`] expires or when `SIGUSR1` is received. Use `SIGUSR2` to exit without capturing. Does not accept a value.
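+
+For example, to run `rpicam-still` in the background and then trigger a capture with a signal:
+
+[source,console]
+----
+$ rpicam-still -t 0 -o test.jpg -s &
+$ kill -SIGUSR1 $!
+----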
-`kill -SIGUSR1 $!`
+==== `thumb`
-----
- --thumb Set thumbnail parameters or none
-----
+Default value: `320:240:70`
-Sets the dimensions and quality parameter of the associated thumbnail image. The defaults are size 320x240 and quality 70.
+Configures the dimensions and quality of the thumbnail with the following format: `<width:height:quality>` (or `none`, which omits the thumbnail).
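+
+For example, to save a 640x480 thumbnail at quality `80`:
+
+[source,console]
+----
+$ rpicam-still -o test.jpg --thumb 640:480:80
+----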
-Example: `rpicam-still -o test.jpg --thumb 640:480:80`
+==== `encoding`
-The value `none` may be given, in which case no thumbnail is saved in the image at all.
+Alias: `-e`
-----
- --encoding, -e Set the still image codec
-----
+Default value: `jpg`
-Select the still image encoding to be used. Valid encoders are:
+Sets the encoder to use for image output. Accepts the following values:
-* `jpg` - JPEG (the default)
-* `png` - PNG format
-* `bmp` - BMP format
+* `jpg` - JPEG
+* `png` - PNG
+* `bmp` - BMP
* `rgb` - binary dump of uncompressed RGB pixels
-* `yuv420` - binary dump of uncompressed YUV420 pixels.
+* `yuv420` - binary dump of uncompressed YUV420 pixels
-Note that this option determines the encoding and that the extension of the output file name is ignored for this purpose. However, for the `--datetime` and `--timestamp` options, the file extension is taken from the encoder name listed above. (`rpicam-still` only.)
+This option always determines the encoding, overriding the extension passed to xref:camera_software.adoc#output[`output`].
-Example: `rpicam-still -e png -o test.png`
+When using the xref:camera_software.adoc#datetime[`datetime`] and xref:camera_software.adoc#timestamp[`timestamp`] options, this option determines the output file extension.
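+
+For example, to save a PNG image:
+
+[source,console]
+----
+$ rpicam-still -e png -o test.png
+----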
-----
- --raw, -r Save raw file
-----
+==== `raw`
-Save a raw Bayer file in DNG format alongside the usual output image. The file name is given by replacing the output file name extension by `.dng`. These are standard DNG files, and can be processed with standard tools like _dcraw_ or _RawTherapee_, among others. (`rpicam-still` only.)
+Alias: `-r`
-The image data in the raw file is exactly what came out of the sensor, with no processing whatsoever either by the ISP or anything else. The EXIF data saved in the file, among other things, includes:
+Saves a raw Bayer file in DNG format in addition to the output image. Replaces the output file name extension with `.dng`. You can process these standard DNG files with tools like _dcraw_ or _RawTherapee_. Does not accept a value.
+
+The image data in the raw file is exactly what came out of the sensor, with no processing from the ISP or anything else. The EXIF data saved in the file, among other things, includes:
* exposure time
* analogue gain (the ISO tag is 100 times the analogue gain used)
* white balance gains (which are the reciprocals of the "as shot neutral" values)
-* the colour matrix used by the ISP.
-
-----
- --latest Make symbolic link to latest file saved
-----
-
-This causes `rpicam-still` to make a symbolic link to the most recently saved file, thereby making it easier to identify. (`rpicam-still` only.)
+* the colour matrix used by the ISP
-Example: `rpicam-still -t 100000 --timelapse 10000 -o test%d.jpg --latest latest.jpg`
+==== `latest`
-----
- --autofocus-on-capture Whether to run an autofocus cycle before capture
-----
+Creates a symbolic link to the most recently saved file. Accepts a symbolic link name as input.
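+
+For example, to keep `latest.jpg` pointing at the most recent capture during a timelapse:
+
+[source,console]
+----
+$ rpicam-still -t 100000 --timelapse 10000 -o test%d.jpg --latest latest.jpg
+----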
-If set, this will cause an autofocus cycle to be run just before the image is captured.
+==== `autofocus-on-capture`
-If `--autofocus-mode` is not specified, or was set to `default` or `manual`, this will be the only autofocus cycle.
+If set, runs an autofocus cycle _just before_ capturing an image. Interacts with the following xref:camera_software.adoc#autofocus-mode[`autofocus-mode`] values:
-If `--autofocus-mode` was set to `auto`, there will be an additional autofocus cycle at the start of the preview window.
+* `default` or `manual`: only runs the capture-time autofocus cycle.
-If `--autofocus-mode` was set to `continuous`, this option will be ignored.
+* `auto`: runs an additional autofocus cycle when the preview window loads.
-You can also use `--autofocus-on-capture 1` in place of `--autofocus-on-capture`, and `--autofocus-on-capture 0` as an alternative to omitting the parameter entirely.
+* `continuous`: ignores this option, instead continually focusing throughout the preview.
-Example: `rpicam-still --autofocus-on-capture -o test.jpg`
+Does not require a value, but you can pass `1` to enable and `0` to disable. Not passing a value is equivalent to passing `1`.
-This option is only supported for certain camera modules (such as the _Raspberry Pi Camera Module 3_).
+Only supported by some camera modules (such as the _Raspberry Pi Camera Module 3_).
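+
+Example:
+
+[source,console]
+----
+$ rpicam-still --autofocus-on-capture -o test.jpg
+----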
diff --git a/documentation/asciidoc/computers/camera/rpicam_options_vid.adoc b/documentation/asciidoc/computers/camera/rpicam_options_vid.adoc
index 26e739523f..00ac1a2589 100644
--- a/documentation/asciidoc/computers/camera/rpicam_options_vid.adoc
+++ b/documentation/asciidoc/computers/camera/rpicam_options_vid.adoc
@@ -1,138 +1,141 @@
-=== Video Command Line Options
+=== Video options
-----
- --quality, -q JPEG quality
-----
+The command line options specified in this section apply only to video output.
-Set the JPEG quality. 100 is maximum quality and 50 is the default. Only applies when saving in MJPEG format.
+To pass one of the following options to an application, prefix the option name with `--`. If the option requires a value, pass the value immediately after the option name, separated by a single space. If the value contains a space, surround the value in quotes.
-Example: `rpicam-vid --codec mjpeg -o test.mjpeg -q 80`
+Some options have shorthand aliases, for example `-h` instead of `--help`. Use these shorthand aliases instead of the full option name to save space and time at the expense of readability.
-----
- --bitrate, -b H.264 bitrate
-----
+==== `quality`
+
+Alias: `-q`
+
+Default value: `50`
+
+Accepts an MJPEG quality level between `1` and `100`. Only applies to videos encoded in the MJPEG format.
+
+==== `bitrate`
+
+Alias: `-b`
+
+Controls the target bitrate used by the H.264 encoder in bits per second. Only applies to videos encoded in the H.264 format. Impacts the size of the output video.
-Set the target bitrate for the H.264 encoder, in _bits per second_. Only applies when encoding in H.264 format.
Example: `rpicam-vid -b 10000000 --width 1920 --height 1080 -o test.h264`
-----
- --intra, -g Intra-frame period (H.264 only)
-----
+==== `intra`
-Sets the frequency of I (Intra) frames in the H.264 bitstream, as a number of frames. The default value is 60.
+Alias: `-g`
-Example: `rpicam-vid --intra 30 --width 1920 --height 1080 -o test.h264`
+Default value: `60`
-----
- --profile H.264 profile
-----
+Sets the frequency of I-frames (intra frames) in the H.264 bitstream. Accepts a number of frames. Only applies to videos encoded in the H.264 format.
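+
+For example, to encode an I-frame every 30 frames:
+
+[source,console]
+----
+$ rpicam-vid --intra 30 --width 1920 --height 1080 -o test.h264
+----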
-Set the H.264 profile. The value may be `baseline`, `main` or `high`.
+==== `profile`
-Example: `rpicam-vid --width 1920 --height 1080 --profile main -o test.h264`
+Sets the H.264 profile. Accepts the following values:
-----
- --level H.264 level
-----
+* `baseline`
+* `main`
+* `high`
-Set the H.264 level. The value may be `4`, `4.1` or `4.2`.
+Only applies to videos encoded in the H.264 format.
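+
+For example, to record with the `main` profile:
+
+[source,console]
+----
+$ rpicam-vid --width 1920 --height 1080 --profile main -o test.h264
+----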
-Example: `rpicam-vid --width 1920 --height 1080 --level 4.1 -o test.h264`
+==== `level`
-----
- --codec Encoder to be used
-----
+Sets the H.264 level. Accepts the following values:
-This can select how the video frames are encoded. Valid options are:
+* `4`
+* `4.1`
+* `4.2`
-* h264 - use H.264 encoder (the default)
-* mjpeg - use MJPEG encoder
-* yuv420 - output uncompressed YUV420 frames.
-* libav - use the libav backend to encode audio and video (see the xref:camera_software.adoc#libav-integration-with-rpicam-vid[libav section] for further details).
+Only applies to videos encoded in the H.264 format.
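+
+For example, to record at level `4.1`:
+
+[source,console]
+----
+$ rpicam-vid --width 1920 --height 1080 --level 4.1 -o test.h264
+----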
-Examples:
+==== `codec`
-`rpicam-vid -t 10000 --codec mjpeg -o test.mjpeg`
+Sets the encoder to use for video output. Accepts the following values:
-`rpicam-vid -t 10000 --codec yuv420 -o test.data`
+* `h264` - use H.264 encoder (the default)
+* `mjpeg` - use MJPEG encoder
+* `yuv420` - output uncompressed YUV420 frames
+* `libav` - use the libav backend to encode audio and video (for more information, see xref:camera_software.adoc#libav-integration-with-rpicam-vid[`libav`])
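+
+For example, to record ten seconds of MJPEG video:
+
+[source,console]
+----
+$ rpicam-vid -t 10000 --codec mjpeg -o test.mjpeg
+----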
-----
- --keypress, -k Toggle between recording and pausing
-----
+==== `save-pts`
+
+WARNING: Raspberry Pi 5 does not support the `save-pts` option. Use `libav` to automatically generate timestamps for container formats instead.
-Pressing Enter will toggle `rpicam-vid` between recording the video stream and not recording it (i.e. discarding it). The application starts off in the recording state, unless the `--initial` option specifies otherwise. Typing `x` and Enter causes `rpicam-vid` to quit.
+Enables frame timestamp output, which allows you to convert the bitstream into a container format using a tool like `mkvmerge`. Accepts a plaintext file name for the timestamp output file.
-Example: `rpicam-vid -t 0 -o test.h264 -k`
+Example: `rpicam-vid -o test.h264 --save-pts timestamps.txt`
+You can then use the following command to generate an MKV container file from the bitstream and timestamps file:
+
+[source,console]
----
- --signal, -s Toggle between recording and pausing when SIGUSR1 received
+$ mkvmerge -o test.mkv --timecodes 0:timestamps.txt test.h264
----
-The SIGUSR1 signal will toggle `rpicam-vid` between recording the video stream and not recording it (i.e. discarding it). The application starts off in the recording state, unless the `--initial` option specifies otherwise. SIGUSR2 causes `rpicam-vid` to quit.
+==== `keypress`
-Example:
+Alias: `-k`
-`rpicam-vid -t 0 -o test.h264 -s`
+Allows the CLI to enable and disable video output using the *Enter* key. Always starts in the recording state unless specified otherwise with xref:camera_software.adoc#initial[`initial`]. Type the `x` key and press *Enter* to exit. Does not accept a value.
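+
+Example:
+
+[source,console]
+----
+$ rpicam-vid -t 0 -o test.h264 -k
+----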
-then
+==== `signal`
-`kill -SIGUSR1 $!`
+Alias: `-s`
-----
- --initial Start the application in the recording or paused state
-----
+Allows the CLI to enable and disable video output using `SIGUSR1`. Use `SIGUSR2` to exit. Always starts in the recording state unless specified otherwise with xref:camera_software.adoc#initial[`initial`]. Does not accept a value.
-The value passed may be `record` or `pause` to start the application in, respectively, the recording or the paused state. This option should be used in conjunction with either `--keypress` or `--signal` to toggle between the two states.
+==== `initial`
-Example: `rpicam-vid -t 0 -o test.h264 -k --initial pause`
+Default value: `record`
-----
- --split Split multiple recordings into separate files
-----
+Specifies whether to start the application with video output enabled or disabled. Accepts the following values:
-This option should be used in conjunction with `--keypress` or `--signal` and causes each recording session (in between the pauses) to be written to a separate file.
+* `record`: Starts with video output enabled.
+* `pause`: Starts with video output disabled.
-Example: `rpicam-vid -t 0 --keypress --split --initial pause -o test%04d.h264`
+Use this option with either xref:camera_software.adoc#keypress[`keypress`] or xref:camera_software.adoc#signal[`signal`] to toggle between the two states.
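+
+For example, to start in the paused state and toggle recording with a keypress:
+
+[source,console]
+----
+$ rpicam-vid -t 0 -o test.h264 -k --initial pause
+----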
-----
- --segment Write the video recording into multiple segments
-----
+==== `split`
-This option causes the video recording to be split across multiple files where the parameter gives the approximate duration of each file in milliseconds.
+When toggling recording with xref:camera_software.adoc#keypress[`keypress`] or xref:camera_software.adoc#signal[`signal`], writes the video output from separate recording sessions into separate files. Does not accept a value. Unless combined with xref:camera_software.adoc#output[`output`] to specify unique names for each file, each new recording session overwrites the output from the previous one.
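+
+For example, to write each recording session to a uniquely numbered file:
+
+[source,console]
+----
+$ rpicam-vid -t 0 --keypress --split --initial pause -o test%04d.h264
+----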
-One convenient little trick is to pass a very small duration parameter (namely, `--segment 1`) which will result in each frame being written to a separate output file. This makes it easy to do "burst" JPEG capture (using the MJPEG codec), or "burst" raw frame capture (using `rpicam-raw`).
+==== `segment`
-Example: `rpicam-vid -t 100000 --segment 10000 -o test%04d.h264`
+Cuts video output into multiple files of the passed duration. Accepts a duration in milliseconds. If passed a very small duration (for instance, `1`), records each frame to a separate output file to simulate burst capture.
-----
- --circular Write the video recording into a circular buffer of the given
-----
+You can specify separate filenames for each file using string formatting, e.g. `--output test%04d.h264`.
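+
+For example, to record 100 seconds of video in roughly 10 second segments:
+
+[source,console]
+----
+$ rpicam-vid -t 100000 --segment 10000 -o test%04d.h264
+----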
-The video recording is written to a circular buffer which is written to disk when the application quits. The size of the circular buffer may be given in units of megabytes, defaulting to 4MB.
+==== `circular`
-Example: `rpicam-vid -t 0 --keypress --inline --circular -o test.h264`
+Default value: `4`
-----
- --inline Write sequence header in every I frame (H.264 only)
-----
+Writes video recording into a circular buffer in memory. When the application quits, records the circular buffer to disk. Accepts an optional size in megabytes.
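+
+Example:
+
+[source,console]
+----
+$ rpicam-vid -t 0 --keypress --inline --circular -o test.h264
+----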
-This option causes the H.264 sequence headers to be written into every I (Intra) frame. This is helpful because it means a client can understand and decode the video sequence from any I frame, not just from the very beginning of the stream. It is recommended to use this option with any output type that breaks the output into pieces (`--segment`, `--split`, `--circular`), or transmits the output over a network.
+==== `inline`
-Example: `rpicam-vid -t 0 --keypress --inline --split -o test%04d.h264`
+Writes a sequence header in every I-frame (intra frame). This can help clients decode the video sequence from any point in the video, instead of just the beginning. Recommended with xref:camera_software.adoc#segment[`segment`], xref:camera_software.adoc#split[`split`], xref:camera_software.adoc#circular[`circular`], and streaming options.
-----
- --listen Wait for an incoming TCP connection
-----
+Only applies to videos encoded in the H.264 format. Does not accept a value.
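+
+For example, to combine sequence headers with split recordings:
+
+[source,console]
+----
+$ rpicam-vid -t 0 --keypress --inline --split -o test%04d.h264
+----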
-This option is provided for streaming over a network using TCP/IP. Using `--listen` will cause `rpicam-vid` to wait for an incoming client connection before starting the video encode process, which will then be forwarded to that client.
+==== `listen`
-Example: `rpicam-vid -t 0 --inline --listen -o tcp://0.0.0.0:8123`
+Waits for an incoming client connection before encoding video. Intended for network streaming over TCP/IP. Does not accept a value.
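+
+For example, to serve H.264 video to the first client that connects on port 8123:
+
+[source,console]
+----
+$ rpicam-vid -t 0 --inline --listen -o tcp://0.0.0.0:8123
+----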
-----
- --frames Record exactly this many frames
-----
+==== `frames`
+
+Records exactly the specified number of frames. Accepts a non-zero integer; any non-zero value overrides xref:camera_software.adoc#timeout[`timeout`].
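+
+For example, to record exactly 1000 frames:
+
+[source,console]
+----
+$ rpicam-vid -o test.h264 --frames 1000
+----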
+
+==== `framerate`
+
+Records video at the specified framerate. Accepts a non-zero integer.
+
+==== `low-latency`
+
+Reduces encoding latency on Raspberry Pi 5, which can benefit real-time streaming applications at the cost of slightly lower coding efficiency (for example, the encoder no longer uses B-frames or arithmetic coding). Does not accept a value.
-Exactly `` frames are recorded. Specifying a non-zero value will override any timeout.
+==== `sync`
-Example: `rpicam-vid -o test.h264 --frames 1000`
+Runs the camera in software synchronisation mode, where multiple cameras synchronise frames to the same moment in time. Accepts either `server` or `client`. For more information, see the detailed explanation of xref:camera_software.adoc#software-camera-synchronisation[how software synchronisation works].
\ No newline at end of file
diff --git a/documentation/asciidoc/computers/camera/rpicam_raw.adoc b/documentation/asciidoc/computers/camera/rpicam_raw.adoc
index ec9f55bad9..210e0e20ae 100644
--- a/documentation/asciidoc/computers/camera/rpicam_raw.adoc
+++ b/documentation/asciidoc/computers/camera/rpicam_raw.adoc
@@ -1,23 +1,26 @@
=== `rpicam-raw`
-`rpicam-raw` is like a video recording application except that it records raw Bayer frames directly from the sensor. It does not show a preview window. For a 2 second raw clip use
+`rpicam-raw` records video as raw Bayer frames directly from the sensor. It does not show a preview window. To record a two second raw clip to a file named `test.raw`, run the following command:
-[,bash]
+[source,console]
----
-rpicam-raw -t 2000 -o test.raw
+$ rpicam-raw -t 2000 -o test.raw
----
-The raw frames are dumped with no formatting information at all, one directly after another. The application prints the pixel format and image dimensions to the terminal window so that the user can know how to interpret the pixel data.
+`rpicam-raw` outputs raw frames with no formatting information at all, one directly after another. The application prints the pixel format and image dimensions to the terminal window to help the user interpret the pixel data.
-By default the raw frames are saved in a single (potentially very large) file. As we saw previously, the `--segment` option can be used conveniently to direct each to a separate file.
-[,bash]
+By default, `rpicam-raw` outputs raw frames in a single, potentially very large, file. Use the xref:camera_software.adoc#segment[`segment`] option to direct each raw frame to a separate file, using the `%05d` xref:camera_software.adoc#output[directive] to make each frame filename unique:
+
+[source,console]
----
-rpicam-raw -t 2000 --segment 1 -o test%05d.raw
+$ rpicam-raw -t 2000 --segment 1 -o test%05d.raw
----
-In good conditions (using a fast SSD) `rpicam-raw` can get close to writing 12MP HQ camera frames (18MB of data each) to disk at 10 frames per second. It writes the raw frames with no formatting in order to achieve these speeds; it has no capability to save them as DNG files (like `rpicam-still`). If you want to be sure not to drop frames you could reduce the framerate slightly using the `--framerate` option, for example
+With a fast storage device, `rpicam-raw` can write 18MB 12-megapixel HQ camera frames to disk at 10fps. `rpicam-raw` has no capability to format output frames as DNG files; for that functionality, use xref:camera_software.adoc#rpicam-still[`rpicam-still`]. To avoid dropping frames, use the xref:camera_software.adoc#framerate[`framerate`] option to lower the framerate slightly:
-[,bash]
+[source,console]
----
-rpicam-raw -t 5000 --width 4056 --height 3040 -o test.raw --framerate 8
+$ rpicam-raw -t 5000 --width 4056 --height 3040 -o test.raw --framerate 8
----
+
+For more information on the raw formats, see the xref:camera_software.adoc#mode[`mode` documentation].
diff --git a/documentation/asciidoc/computers/camera/rpicam_still.adoc b/documentation/asciidoc/computers/camera/rpicam_still.adoc
index 52e1d05b56..08ec164e0a 100644
--- a/documentation/asciidoc/computers/camera/rpicam_still.adoc
+++ b/documentation/asciidoc/computers/camera/rpicam_still.adoc
@@ -1,39 +1,44 @@
=== `rpicam-still`
-`rpicam-still` is very similar to `rpicam-jpeg` but supports more of the legacy `raspistill` options. As before, a single image can be captured with
+`rpicam-still`, like `rpicam-jpeg`, helps you capture images on Raspberry Pi devices.
+Unlike `rpicam-jpeg`, `rpicam-still` supports many options provided in the legacy `raspistill` application.
-[,bash]
+To capture a full resolution JPEG image and save it to a file named `test.jpg`, run the following command:
+
+[source,console]
----
-rpicam-still -o test.jpg
+$ rpicam-still --output test.jpg
----
==== Encoders
-`rpicam-still` allows files to be saved in a number of different formats. It supports both `png` and `bmp` encoding. It also allows files to be saved as a binary dump of RGB or YUV pixels with no encoding or file format at all. In these latter cases the application reading the files will have to understand the pixel arrangement for itself.
+`rpicam-still` can save images in multiple formats, including `png`, `bmp`, and both RGB and YUV binary pixel dumps. To read these binary dumps, any application reading the files must understand the pixel arrangement.
+
+Use the xref:camera_software.adoc#encoding[`encoding`] option to specify an output format. The file name passed to xref:camera_software.adoc#output[`output`] has no impact on the output file type.
+
+To capture a full resolution PNG image and save it to a file named `test.png`, run the following command:
-[,bash]
+[source,console]
----
-rpicam-still -e png -o test.png
-rpicam-still -e bmp -o test.bmp
-rpicam-still -e rgb -o test.data
-rpicam-still -e yuv420 -o test.data
+$ rpicam-still --encoding png --output test.png
----
-Note that the format in which the image is saved depends on the `-e` (equivalently `--encoding`) option and is _not_ selected automatically based on the output file name.
-==== Raw Image Capture
+For more information about specifying an image format, see the xref:camera_software.adoc#encoding[`encoding` option reference].
-_Raw_ images are the images produced directly by the image sensor, before any processing is applied to them either by the ISP (Image Signal Processor) or any of the CPU cores. For colour image sensors these are usually _Bayer_ format images. Note that _raw_ images are quite different from the processed but unencoded RGB or YUV images that we saw earlier.
+==== Capture raw images
-To capture a raw image use
+Raw images are the images produced directly by the image sensor, before any processing is applied to them either by the Image Signal Processor (ISP) or CPU. Colour image sensors usually use the Bayer format. Use the xref:camera_software.adoc#raw[`raw`] option to capture raw images.
-[,bash]
+To capture an image, save it to a file named `test.jpg`, and also save a raw version of the image to a file named `test.dng`, run the following command:
+
+[source,console]
----
-rpicam-still -r -o test.jpg
+$ rpicam-still --raw --output test.jpg
----
-Here, the `-r` option (also `--raw`) indicates to capture the raw image as well as the JPEG. In fact, the raw image is the exact image from which the JPEG was produced. Raw images are saved in DNG (Adobe Digital Negative) format and are compatible with many standard applications, such as _dcraw_ or _RawTherapee_. The raw image is saved to a file with the same name but the extension `.dng`, thus `test.dng` in this case.
+`rpicam-still` saves raw images in the DNG (Adobe Digital Negative) format. To determine the filename of the raw images, `rpicam-still` uses the same name as the output file, with the extension changed to `.dng`. To work with DNG images, use an application like https://en.wikipedia.org/wiki/Dcraw[Dcraw] or https://en.wikipedia.org/wiki/RawTherapee[RawTherapee].
-These DNG files contain metadata pertaining to the image capture, including black levels, white balance information and the colour matrix used by the ISP to produce the JPEG. This makes these DNG files much more convenient for later "by hand" raw conversion with some of the aforementioned tools. Using `exiftool` shows all the metadata encoded into the DNG file:
+DNG files contain metadata about the image capture, including black levels, white balance information and the colour matrix used by the ISP to produce the JPEG. Use https://exiftool.org/[ExifTool] to view DNG metadata. The following output shows typical metadata stored in a raw image captured by a Raspberry Pi using the HQ camera:
----
File Name : test.dng
@@ -79,14 +84,123 @@ Image Size : 4056x3040
Megapixels : 12.3
Shutter Speed : 1/20
----
-We note that there is only a single calibrated illuminant (the one determined by the AWB algorithm even though it gets labelled always as "D65"), and that dividing the ISO number by 100 gives the analogue gain that was being used.
-==== Very long exposures
+To find the analogue gain, divide the ISO number by 100.
+The metadata contains only a single calibrated illuminant (the one determined by the Auto White Balance (AWB) algorithm), even though it is always labelled `D65`.
+
+==== Capture long exposures
+
+To capture very long exposure images, disable the Automatic Exposure/Gain Control (AEC/AGC) and Auto White Balance (AWB). These algorithms will otherwise force the user to wait for a number of frames while they converge.
+
+To disable these algorithms, supply explicit values for gain and AWB. Because long exposures take plenty of time already, it often makes sense to skip the preview phase entirely with the xref:camera_software.adoc#immediate[`immediate`] option.
+
+To perform a 100 second exposure capture, run the following command:
+
+[source,console]
+----
+$ rpicam-still -o long_exposure.jpg --shutter 100000000 --gain 1 --awbgains 1,1 --immediate
+----
+
+To find the maximum exposure times of official Raspberry Pi cameras, see xref:../accessories/camera.adoc#hardware-specification[the camera hardware specification].
+
+==== Create a time lapse video
+
+To create a time lapse video, capture a still image at a regular interval, such as once a minute, then use an application to stitch the pictures together into a video.
+
+[tabs]
+======
+`rpicam-still` time lapse mode::
++
+To use the built-in time lapse mode of `rpicam-still`, use the xref:camera_software.adoc#timelapse[`timelapse`] option. This option accepts a value representing the period of time you want your Raspberry Pi to wait between captures, in milliseconds.
++
+First, create a directory where you can store your time lapse photos:
++
+[source,console]
+----
+$ mkdir timelapse
+----
++
+Run the following command to create a time lapse over 30 seconds, recording a photo every two seconds, saving output into `image0000.jpg` through `image0013.jpg`:
++
+[source,console]
+----
+$ rpicam-still --timeout 30000 --timelapse 2000 -o timelapse/image%04d.jpg
+----
+
+`cron`::
++
+You can also automate time lapses with `cron`. First, create the script, named `timelapse.sh`, containing the following commands. Replace the `<username>` placeholder with the name of your user account on your Raspberry Pi:
++
+[source,bash]
+----
+#!/bin/bash
+DATE=$(date +"%Y-%m-%d_%H%M")
+rpicam-still -o /home/<username>/timelapse/$DATE.jpg
+----
++
+Then, make the script executable:
++
+[source,console]
+----
+$ chmod +x timelapse.sh
+----
++
+Create the `timelapse` directory into which you'll save time lapse pictures:
++
+[source,console]
+----
+$ mkdir timelapse
+----
++
+Open your crontab for editing:
++
+[source,console]
+----
+$ crontab -e
+----
++
+Once you have the file open in an editor, add the following line to schedule an image capture every minute, replacing the `<username>` placeholder with the username of your primary user account:
++
+----
+* * * * * /home/<username>/timelapse.sh 2>&1
+----
++
+Save and exit, and you should see this message:
++
+----
+crontab: installing new crontab
+----
++
+To stop recording images for the time lapse, run `crontab -e` again and remove the above line from your crontab.
+
+======
+
+===== Stitch images together
+
+Once you have a series of time lapse photos, you probably want to combine them into a video. Use `ffmpeg` to do this on a Raspberry Pi.
+
+First, install `ffmpeg`:
+
+[source,console]
+----
+$ sudo apt install ffmpeg
+----
+
+Run the following command from the directory that contains the `timelapse` directory to convert your JPEG files into an mp4 video:
+[source,console]
+----
+$ ffmpeg -r 10 -f image2 -pattern_type glob -i 'timelapse/*.jpg' -s 1280x720 -vcodec libx264 timelapse.mp4
+----
+The command above uses the following parameters:
+* `-r 10`: sets the frame rate (Hz value) to ten frames per second in the output video
+* `-f image2`: sets `ffmpeg` to read from a list of image files specified by a pattern
+* `-pattern_type glob`: use wildcard patterns (globbing) to interpret filename input with `-i`
+* `-i 'timelapse/*.jpg'`: specifies input files to match JPG files in the `timelapse` directory
+* `-s 1280x720`: scales to 720p
+* `-vcodec libx264`: use the software x264 encoder
+* `timelapse.mp4`: the name of the output video file
+For more information about `ffmpeg` options, run `ffmpeg --help` in a terminal.
-To capture very long exposure images, we need to be careful to disable the AEC/AGC and AWB because these algorithms will otherwise force the user to wait for a number of frames while they converge. The way to disable them is to supply explicit values. Additionally, the entire preview phase of the capture can be skipped with the `--immediate` option.
-So to perform a 100 second exposure capture, use
-`rpicam-still -o long_exposure.jpg --shutter 100000000 --gain 1 --awbgains 1,1 --immediate`
-For reference, the maximum exposure times of the three official Raspberry Pi cameras can be found in xref:../accessories/camera.adoc#hardware-specification[this table].
diff --git a/documentation/asciidoc/computers/camera/rpicam_vid.adoc b/documentation/asciidoc/computers/camera/rpicam_vid.adoc
index bd71d4a9e7..e88c5b762a 100644
--- a/documentation/asciidoc/computers/camera/rpicam_vid.adoc
+++ b/documentation/asciidoc/computers/camera/rpicam_vid.adoc
@@ -1,118 +1,98 @@
=== `rpicam-vid`
-`rpicam-vid` is the video capture application. By default it uses the Raspberry Pi's hardware H.264 encoder. It will display a preview window and write the encoded bitstream to the specified output. For example, to write a 10 second video to file use
+`rpicam-vid` helps you capture video on Raspberry Pi devices. `rpicam-vid` displays a preview window and writes an encoded bitstream to the specified output. By default, this produces an unpackaged video bitstream that is not wrapped in any container format (such as an MP4 file).
-[,bash]
-----
-rpicam-vid -t 10000 -o test.h264
-----
-The resulting file can be played with `vlc` (among other applications)
-[,bash]
+NOTE: When available, `rpicam-vid` uses hardware H.264 encoding.
+
+For example, the following command writes a ten-second video to a file named `test.h264`:
+
+[source,console]
----
-vlc test.h264
+$ rpicam-vid -t 10s -o test.h264
----
-Note that this is an unpackaged video bitstream, it is not wrapped in any kind of container format (such as an mp4 file). The `--save-pts` option can be used to output frame timestamps so that the bitstream can subsequently be converted into an appropriate format using a tool like `mkvmerge`.
-`rpicam-vid -o test.h264 --save-pts timestamps.txt`
+You can play the resulting file with `ffplay` and other video players:
-and then if you want an _mkv_ file:
+[source,console]
+----
+$ ffplay test.h264
+----
-`mkvmerge -o test.mkv --timecodes 0:timestamps.txt test.h264`
+[WARNING]
+====
+Older versions of VLC could play unencapsulated H.264 files correctly, but recent versions cannot; they display only a few frames, possibly garbled. Either use a different media player, or save your files in a more widely supported container format such as MP4 (see below).
+====
-==== Encoders
+On Raspberry Pi 5, you can output to the MP4 container format directly by specifying the `mp4` file extension for your output file:
-There is support for motion JPEG, and also for uncompressed and unformatted YUV420, for example
-[,bash]
+[source,console]
----
-rpicam-vid -t 10000 --codec mjpeg -o test.mjpeg
-rpicam-vid -t 10000 --codec yuv420 -o test.data
+$ rpicam-vid -t 10s -o test.mp4
----
-In both cases the `--codec` parameter determines the output format, not the extension of the output file.
-The `--segment` parameter breaks output files up into chunks of the segment size (given in milliseconds). This is quite handy for breaking a motion JPEG stream up into individual JPEG files by specifying very short (1 millisecond) segments.
-[,bash]
+On Raspberry Pi 4, or earlier devices, you can save MP4 files using:
+
+[source,console]
----
-rpicam-vid -t 10000 --codec mjpeg --segment 1 -o test%05d.jpeg
+$ rpicam-vid -t 10s --codec libav -o test.mp4
----
-Observe that the output file name is normally only sensible if we avoid over-writing the previous file every time, such as by using a file name that includes a counter (as above). More information on output file names is available below.
-
-==== Network Streaming
-NOTE: This section describes native streaming from `rpicam-vid`. However, it is also possible to use the libav backend for network streaming. See the xref:camera_software.adoc#libav-integration-with-rpicam-vid[libav section] for further details.
+==== Encoders
-===== UDP
+`rpicam-vid` supports motion JPEG as well as both uncompressed and unformatted YUV420:
-To stream video using UDP, on the Raspberry Pi (server) use
-[,bash]
-----
-rpicam-vid -t 0 --inline -o udp://:
+[source,console]
----
-where `` is the IP address of the client, or multicast address (if appropriately configured to reach the client). On the client use (for example)
-[,bash]
+$ rpicam-vid -t 10000 --codec mjpeg -o test.mjpeg
----
-vlc udp://@: :demux=h264
-----
-or alternatively
+
+[source,console]
----
-ffplay udp://: -fflags nobuffer -flags low_delay -framedrop
+$ rpicam-vid -t 10000 --codec yuv420 -o test.data
----
-with the same `` value.
-===== TCP
+The xref:camera_software.adoc#codec[`codec`] option determines the output format, not the extension of the output file.
-Video can be streamed using TCP. To use the Raspberry Pi as a server
-[,bash]
-----
-rpicam-vid -t 0 --inline --listen -o tcp://0.0.0.0:
-----
-and on the client
-[,bash]
-----
-vlc tcp/h264://:
-----
-or alternatively
+The xref:camera_software.adoc#segment[`segment`] option breaks output files up into chunks of the segment size (given in milliseconds). This is handy for breaking a motion JPEG stream up into individual JPEG files by specifying very short (1 millisecond) segments. For example, the following command combines segments of 1 millisecond with a counter in the output file name to generate a new filename for each segment:
+
+[source,console]
----
-ffplay tcp://: -vf "setpts=N/30" -fflags nobuffer -flags low_delay -framedrop
+$ rpicam-vid -t 10000 --codec mjpeg --segment 1 -o test%05d.jpeg
----
-for a 30 frames per second stream with low latency.
-The Raspberry Pi will wait until the client connects, and then start streaming video.
+==== Capture high framerate video
-===== RTSP
+To minimise frame drops for high framerate (> 60fps) video, try the following configuration tweaks:
-vlc is useful on the Raspberry Pi for formatting an RTSP stream, though there are other RTSP servers available.
-[,bash]
-----
-rpicam-vid -t 0 --inline -o - | cvlc stream:///dev/stdin --sout '#rtp{sdp=rtsp://:8554/stream1}' :demux=h264
-----
-and this can be played with
-[,bash]
-----
-vlc rtsp://:8554/stream1
-----
-or alternatively
+* Set the https://en.wikipedia.org/wiki/Advanced_Video_Coding#Levels[H.264 target level] to 4.2 with `--level 4.2`.
+* Disable software colour denoise processing by setting the xref:camera_software.adoc#denoise[`denoise`] option to `cdn_off`.
+* Disable the display window with xref:camera_software.adoc#nopreview[`nopreview`] to free up some additional CPU cycles.
+* Set `force_turbo=1` in xref:../computers/config_txt.adoc#what-is-config-txt[`/boot/firmware/config.txt`] to ensure that the CPU clock does not throttle during video capture. For more information, see xref:config_txt.adoc#force_turbo[the `force_turbo` documentation].
+* Adjust the ISP output resolution with `--width 1280 --height 720` or something even lower to achieve your framerate target.
+* On Raspberry Pi 4, you can overclock the GPU to improve performance by adding `gpu_freq=550` or higher in `/boot/firmware/config.txt`. See xref:config_txt.adoc#overclocking[the overclocking documentation] for further details.
+
+The following command demonstrates how you might achieve 1280×720 120fps video:
+
+[source,console]
----
-ffplay rtsp://:8554/stream1 -vf "setpts=N/30" -fflags nobuffer -flags low_delay -framedrop
+$ rpicam-vid --level 4.2 --framerate 120 --width 1280 --height 720 --save-pts timestamp.pts -o video.264 -t 10000 --denoise cdn_off -n
----
-In all cases, the preview window on the server (the Raspberry Pi) can be suppressed with the `-n` (`--nopreview`) option. Note also the use of the `--inline` option which forces the stream header information to be included with every I (intra) frame. This is important so that a client can correctly understand the stream if it missed the very beginning.
+==== `libav` integration with `rpicam-vid`
-NOTE: Recent versions of VLC seem to have problems with playback of H.264 streams. We recommend using `ffplay` for playback using the above commands until these issues have been resolved.
+`rpicam-vid` can use the `ffmpeg`/`libav` codec backend to encode audio and video streams. You can either save these streams to a file or stream them over the network. `libav` uses hardware H.264 video encoding when present.
-==== High framerate capture
+To enable the `libav` backend, pass `libav` to the xref:camera_software.adoc#codec[`codec`] option:
-Using `rpicam-vid` to capture high framerate video (generally anything over 60 fps) while minimising frame drops requires a few considerations:
+[source,console]
+----
+$ rpicam-vid --codec libav --libav-format avi --libav-audio --output example.avi
+----
-1. The https://en.wikipedia.org/wiki/Advanced_Video_Coding#Levels[H.264 target level] must be set to 4.2 with the `--level 4.2` argument.
-2. Software colour denoise processing must be turned off with the `--denoise cdn_off` argument.
-3. For rates over 100 fps, disabling the display window with the `-n` option would free up some additional CPU cycles to help avoid frame drops.
-4. It is advisable to set `force_turbo=1` in xref:../computers/config_txt.adoc#what-is-config-txt[`/boot/firmware/config.txt`] to ensure the CPU clock does not get throttled during the video capture. See xref:config_txt.adoc#force_turbo[the `force_turbo` documentation] for further details.
-5. Adjust the ISP output resolution with `--width 1280 --height 720` or something even lower to achieve your framerate target.
-6. On a Pi 4, you can overclock the GPU to improve performance by adding `gpu_freq=550` or higher in `/boot/firmware/config.txt`. See xref:config_txt.adoc#overclocking[the overclocking documentation] for further details.
+==== Low latency video with the Pi 5
-An example command for 1280x720 120fps video encode would be:
+Raspberry Pi 5 uses software video encoders. These generally output frames with longer latency than the hardware encoders on earlier devices, which can sometimes be an issue for real-time streaming applications.
-[,bash]
-----
-rpicam-vid --level 4.2 --framerate 120 --width 1280 --height 720 --save-pts timestamp.pts -o video.264 -t 10000 --denoise cdn_off -n
-----
\ No newline at end of file
+In this case, add the `--low-latency` option to the `rpicam-vid` command. This alters certain encoder options to output each encoded frame more quickly.
+
+The downside is slightly reduced coding efficiency and slightly less efficient use of the processor's multiple cores. As a result, the maximum framerate that can be encoded may be slightly reduced, though it will still easily achieve 1080p30.
diff --git a/documentation/asciidoc/computers/camera/streaming.adoc b/documentation/asciidoc/computers/camera/streaming.adoc
new file mode 100644
index 0000000000..ffcf9a6569
--- /dev/null
+++ b/documentation/asciidoc/computers/camera/streaming.adoc
@@ -0,0 +1,206 @@
+== Stream video over a network with `rpicam-apps`
+
+This section describes how to stream video over a network using `rpicam-vid`. Whilst it's possible to stream very simple formats without using `libav`, for most applications we recommend using the xref:camera_software.adoc#libav-integration-with-rpicam-vid[`libav` backend].
+
+=== UDP
+
+To stream video over UDP using a Raspberry Pi as a server, use the following command, replacing the `<ip-addr>` placeholder with the IP address of the client or multicast address and replacing the `<port>` placeholder with the port you would like to use for streaming:
+
+[source,console]
+----
+$ rpicam-vid -t 0 -n --inline -o udp://<ip-addr>:<port>
+----
+
+To view video streamed over UDP using a Raspberry Pi as a client, use the following command, replacing the `<port>` placeholder with the port you would like to stream from:
+
+[source,console]
+----
+$ ffplay udp://@:<port> -fflags nobuffer -flags low_delay -framedrop
+----
+As noted previously, `vlc` no longer handles unencapsulated H.264 streams.
+
+In fact, support for unencapsulated H.264 can generally be quite poor, so it is often better to send an MPEG-2 Transport Stream instead. Using `libav`, this can be accomplished with:
+
+[source,console]
+----
+$ rpicam-vid -t 0 -n --codec libav --libav-format mpegts -o udp://<ip-addr>:<port>
+----
+
+In this case, we can also play the stream successfully with `vlc`:
+
+[source,console]
+----
+$ vlc udp://@:<port>
+----
+
+=== TCP
+
+You can also stream video over TCP. As before, we can send an unencapsulated H.264 stream over the network. To use a Raspberry Pi as a server:
+
+[source,console]
+----
+$ rpicam-vid -t 0 -n --inline --listen -o tcp://0.0.0.0:<port>
+----
+
+To view video streamed over TCP using a Raspberry Pi as a client, assuming the server is running at 30 frames per second, use the following command:
+
+[source,console]
+----
+$ ffplay tcp://<ip-addr-of-server>:<port> -vf "setpts=N/30" -fflags nobuffer -flags low_delay -framedrop
+----
+
+But as with the UDP examples, it is often preferable to send an MPEG-2 Transport Stream as this is generally better supported. To do this, use:
+
+[source,console]
+----
+$ rpicam-vid -t 0 -n --codec libav --libav-format mpegts -o tcp://0.0.0.0:<port>?listen=1
+----
+
+We can now play this back using a variety of media players, including `vlc`:
+
+[source,console]
+----
+$ vlc tcp://<ip-addr-of-server>:<port>
+----
+
+=== RTSP
+
+We can use VLC as an RTSP server; however, we must send it an MPEG-2 Transport Stream, since it no longer understands unencapsulated H.264:
+
+[source,console]
+----
+$ rpicam-vid -t 0 -n --codec libav --libav-format mpegts -o - | cvlc stream:///dev/stdin --sout '#rtp{sdp=rtsp://:8554/stream1}'
+----
+
+To view video streamed over RTSP using a Raspberry Pi as a client, use the following command:
+
+[source,console]
+----
+$ ffplay rtsp://<ip-addr-of-server>:8554/stream1 -fflags nobuffer -flags low_delay -framedrop
+----
+
+Alternatively, use the following command on a client to stream using VLC:
+
+[source,console]
+----
+$ vlc rtsp://<ip-addr-of-server>:8554/stream1
+----
+
+If you want to see a preview window on the server, just drop the `-n` option (see xref:camera_software.adoc#nopreview[`nopreview`]).
+
+=== `libav` and Audio
+
+We have already been using `libav` as the backend for network streaming. `libav` allows us to add an audio stream, so long as we're using a format - like the MPEG-2 Transport Stream - that permits audio data.
+
+We can take one of our previous commands, like the one for streaming an MPEG-2 Transport Stream over TCP, and simply add the `--libav-audio` option:
+
+[source,console]
+----
+$ rpicam-vid -t 0 --codec libav --libav-format mpegts --libav-audio -o "tcp://0.0.0.0:<port>?listen=1"
+----
+
+You can stream over UDP with a similar command:
+
+[source,console]
+----
+$ rpicam-vid -t 0 --codec libav --libav-format mpegts --libav-audio -o "udp://<ip-addr>:<port>"
+----
+
+=== GStreamer
+
+https://gstreamer.freedesktop.org/[GStreamer] is a Linux framework for reading, processing and playing multimedia files. We can also use it in conjunction with `rpicam-vid` for network streaming.
+
+This setup uses `rpicam-vid` to output an H.264 bitstream to stdout, though as we've done previously, we're going to encapsulate it in an MPEG-2 Transport Stream for better downstream compatibility.
+
+Then, we use the GStreamer `fdsrc` element to receive the bitstream, and extra GStreamer elements to send it over the network. On the server, run the following command to start the stream, replacing the `<ip-addr>` placeholder with the IP address of the client or multicast address and replacing the `<port>` placeholder with the port you would like to use for streaming:
+
+[source,console]
+----
+$ rpicam-vid -t 0 -n --codec libav --libav-format mpegts -o - | gst-launch-1.0 fdsrc fd=0 ! udpsink host=<ip-addr> port=<port>
+----
+
+You could of course use any suitable media player (such as `vlc`) as the client; a survey of the best GStreamer playback pipelines is beyond the scope of this document. However, the following pipeline (with the obvious substitutions) works on a Raspberry Pi 4 or earlier device:
+
+[source,console]
+----
+$ gst-launch-1.0 udpsrc address=<ip-addr> port=<port> ! tsparse ! tsdemux ! h264parse ! queue ! v4l2h264dec ! autovideosink
+----
+
+For a Pi 5, replace `v4l2h264dec` with `avdec_h264`.
+
+TIP: To test this configuration, run the server and client commands in separate terminals on the same device, using `localhost` as the address.
+
+==== `libcamerasrc` GStreamer element
+
+`libcamera` provides a `libcamerasrc` GStreamer element which can be used directly instead of `rpicam-vid`. To use this element, run the following command on the server, replacing the `<ip-addr>` placeholder with the IP address of the client or multicast address and replacing the `<port>` placeholder with the port you would like to use for streaming. On a Pi 4 or earlier device, use:
+
+[source,console]
+----
+$ gst-launch-1.0 libcamerasrc ! capsfilter caps=video/x-raw,width=640,height=360,format=NV12,interlace-mode=progressive ! v4l2h264enc extra-controls="controls,repeat_sequence_header=1" ! 'video/x-h264,level=(string)4' ! h264parse ! mpegtsmux ! udpsink host=<ip-addr> port=<port>
+----
+On a Pi 5, replace `v4l2h264enc extra-controls="controls,repeat_sequence_header=1"` with `x264enc speed-preset=1 threads=1`.
+
+On the client we could use the same playback pipeline as we did just above, or other streaming media players.
+
+=== WebRTC
+
+Streaming over WebRTC (for example, to web browsers) is best accomplished using third party software. https://github.com/bluenviron/mediamtx[MediaMTX], for example, includes native Raspberry Pi camera support which makes it easy to use.
+
+To install it, download the latest version from the https://github.com/bluenviron/mediamtx/releases[releases] page. Raspberry Pi OS 64-bit users will want the "linux_arm64v8" compressed tar file (ending `.tar.gz`). Unpack it and you will get a `mediamtx` executable and a configuration file called `mediamtx.yml`.
+
+It's worth backing up the `mediamtx.yml` file because it documents many Raspberry Pi camera options that you may want to investigate later.
+
+To stream the camera, replace the contents of `mediamtx.yml` with:
+----
+paths:
+ cam:
+ source: rpiCamera
+----
+and start the `mediamtx` executable. In a browser, enter `http://<ip-addr>:8889/cam` into the address bar, replacing `<ip-addr>` with the IP address of your Raspberry Pi.
+
+If you want MediaMTX to acquire the camera only when the stream is requested, add the following line to the previous `mediamtx.yml`:
+----
+ sourceOnDemand: yes
+----
+Consult the original `mediamtx.yml` for additional configuration parameters that let you select the image size, the camera mode, the bitrate and so on - just search for `rpi`.
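As a sketch, a configuration that also sets the resolution and bitrate might look like the following; the `rpiCameraWidth`, `rpiCameraHeight`, and `rpiCameraBitrate` option names are taken from the sample `mediamtx.yml`, so verify them against your copy before relying on them:

```yaml
paths:
  cam:
    source: rpiCamera
    # Capture at 1080p and limit the encoded stream to roughly 1.5 Mbit/s
    rpiCameraWidth: 1920
    rpiCameraHeight: 1080
    rpiCameraBitrate: 1500000
```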
+
+==== Customised image streams with WebRTC
+
+MediaMTX is great if you want to stream just the camera images. But what if you want to add some extra information or overlay, or do some extra processing on the images?
+
+Before starting, ensure that you've built a version of `rpicam-apps` that includes OpenCV support. Check it by running
+
+[source,console]
+----
+$ rpicam-hello --post-process-file rpicam-apps/assets/annotate_cv.json
+----
+and looking for the overlaid text information at the top of the image.
+
+Next, paste the following into your `mediamtx.yml` file:
+----
+paths:
+ cam:
+ source: udp://127.0.0.1:1234
+----
+
+Now, start `mediamtx` and then, if you're using a Pi 5, in a new terminal window, enter:
+
+[source,console]
+----
+$ rpicam-vid -t 0 -n --codec libav --libav-video-codec-opts "profile=baseline" --libav-format mpegts -o udp://127.0.0.1:1234?pkt_size=1316 --post-process-file rpicam-apps/assets/annotate_cv.json
+----
+(On a Pi 4 or earlier device, leave out the `--libav-video-codec-opts "profile=baseline"` part of the command.)
+
+On another computer, you can now visit the same address as before, namely `http://<ip-addr>:8889/cam`.
+
+The reason for specifying "baseline" profile on a Pi 5 is that MediaMTX doesn't support B frames, so we need to stop the encoder from producing them. On earlier devices, with hardware encoders, B frames are never generated so there is no issue. On a Pi 5 you could alternatively remove this option and replace it with `--low-latency` which will also prevent B frames, and produce a (slightly less well compressed) stream with reduced latency.
+
+[NOTE]
+====
+If you notice occasional pauses in the video stream, the UDP receive buffers on the Raspberry Pi (which pass data from `rpicam-vid` to MediaMTX) may be too small. To increase them permanently, add
+----
+net.core.rmem_default=1000000
+net.core.rmem_max=1000000
+----
+to your `/etc/sysctl.conf` file (and reboot or run `sudo sysctl -p`).
+====
\ No newline at end of file
diff --git a/documentation/asciidoc/computers/camera/timelapse.adoc b/documentation/asciidoc/computers/camera/timelapse.adoc
deleted file mode 100644
index 6393a3df4f..0000000000
--- a/documentation/asciidoc/computers/camera/timelapse.adoc
+++ /dev/null
@@ -1,92 +0,0 @@
-== Application Notes
-
-=== Creating Timelapse Video
-
-To create a time-lapse video, you simply configure the Raspberry Pi to take a picture at a regular interval, such as once a minute, then use an application to stitch the pictures together into a video.
-
-==== Using `rpicam-still` Timelapse Mode
-
-`rpicam-still` has a built in time-lapse mode, using the `--timelapse` command line switch. The value that follows the switch is the time between shots in milliseconds:
-
-----
-rpicam-still -t 30000 --timelapse 2000 -o image%04d.jpg
-----
-
-[NOTE]
-====
-The `%04d` in the output filename: this indicates the point in the filename where you want a frame count number to appear. So, for example, the command above will produce a capture every two seconds (2000ms), over a total period of 30 seconds (30000ms), named image0001.jpg, image0002.jpg, and so on, through to image0015.jpg.
-
-The `%04d` indicates a four-digit number, with leading zeros added to make up the required number of digits. So, for example, `%08d` would result in an eight-digit number. You can miss out the `0` if you don't want leading zeros.
-
-If a timelapse value of 0 is entered, the application will take pictures as fast as possible. Note that there's an minimum enforced pause of approximately 30 milliseconds between captures to ensure that exposure calculations can be made.
-====
-
-==== Automating using `cron` Jobs
-
-A good way to automate taking a picture at a regular interval is running a script with `cron`. First create the script that we'll be using with your editor of choice, replacing the `` placeholder below with the name of the user you created during first boot:
-
-----
-#!/bin/bash
-DATE=$(date +"%Y-%m-%d_%H%M")
-rpicam-still -o /home//camera/$DATE.jpg
-----
-
-and save it as `camera.sh`. You'll need to make the script executable:
-
-----
-$ chmod +x camera.sh
-----
-
-and also create the `camera` directory into which you'll be saving the pictures:
-
-----
-$ mkdir camera
-----
-
-Now open the cron table for editing:
-
-----
-$ crontab -e
-----
-
-This will either ask which editor you would like to use, or open in your default editor. Once you have the file open in an editor, add the following line to schedule taking a picture every minute, replacing the `` placeholder with the username of your primary user account:
-
-----
-* * * * * /home//camera.sh 2>&1
-----
-
-Save and exit and you should see the message:
-
-----
-crontab: installing new crontab
-----
-
-Make sure that you use e.g. `%04d` to ensure that each image is written to a new file: if you don't, then each new image will overwrite the previous file.
-
-==== Stitching Images Together
-
-Now you'll need to stitch the photos together into a video. You can do this on the Raspberry Pi using `ffmpeg` but the processing will be slow. You may prefer to transfer the image files to your desktop computer or laptop and produce the video there.
-
-First you will need to install `ffmpeg` if it's not already installed.
-
-----
-sudo apt install ffmpeg
-----
-
-Now you can use the `ffmpeg` tool to convert your JPEG files into an mp4 video:
-
-----
-ffmpeg -r 10 -f image2 -pattern_type glob -i 'image*.jpg' -s 1280x720 -vcodec libx264 timelapse.mp4
-----
-
-On a Raspberry Pi 3, this can encode a little more than two frames per second. The performance of other Raspberry Pi models will vary. The parameters used are:
-
-* `-r 10` Set frame rate (Hz value) to ten frames per second in the output video.
-* `-f image2` Set ffmpeg to read from a list of image files specified by a pattern.
-* `-pattern_type glob` When importing the image files, use wildcard patterns (globbing) to interpret the filename input by `-i`, in this case `image*.jpg`, where `*` would be the image number.
-* `-i 'image*.jpg'` The input file specification (to match the files produced during the capture).
-* `-s 1280x720` Scale to 720p. You can also use 1920x1080, or lower resolutions, depending on your requirements.
-* `-vcodec libx264` Use the software x264 encoder.
-* `timelapse.mp4` The name of the output video file.
-
-`ffmpeg` has a comprehensive parameter set for varying encoding options and other settings. These can be listed using `ffmpeg --help`.
diff --git a/documentation/asciidoc/computers/camera/troubleshooting.adoc b/documentation/asciidoc/computers/camera/troubleshooting.adoc
new file mode 100644
index 0000000000..4c94ce12f8
--- /dev/null
+++ b/documentation/asciidoc/computers/camera/troubleshooting.adoc
@@ -0,0 +1,16 @@
+== Troubleshooting
+
+If your Camera Module doesn't work as you expect, try some of the following fixes:
+
+* On Raspberry Pi 3 and earlier devices running Raspberry Pi OS _Bullseye_ or earlier:
+** To enable hardware-accelerated camera previews, enable *Glamor*. To enable Glamor, enter `sudo raspi-config` in a terminal, select `Advanced Options` > `Glamor` > `Yes`. Then reboot your Raspberry Pi with `sudo reboot`.
+** If you see an error related to the display driver, add `dtoverlay=vc4-fkms-v3d` or `dtoverlay=vc4-kms-v3d` to `/boot/config.txt`. Then reboot your Raspberry Pi with `sudo reboot`.
+* On Raspberry Pi 3 and earlier, the graphics hardware can only support images up to 2048×2048 pixels, which places a limit on the camera images that can be resized into the preview window. As a result, video encoding of images larger than 2048 pixels wide produces corrupted or missing preview images.
+* On Raspberry Pi 4, the graphics hardware can only support images up to 4096×4096 pixels, which places a limit on the camera images that can be resized into the preview window. As a result, video encoding of images larger than 4096 pixels wide produces corrupted or missing preview images.
+* The preview window may show display tearing in a desktop environment. This is a known, unfixable issue.
+* Check that the FFC (Flat Flexible Cable) is firmly seated, fully inserted, and that the contacts face the correct direction. The FFC should be evenly inserted, not angled.
+* If you use a connector between the camera and your Raspberry Pi, check that the ports on the connector are firmly seated, fully inserted, and that the contacts face the correct direction.
+* Check to make sure that the FFC (Flat Flexible Cable) is attached to the CSI (Camera Serial Interface), _not_ the DSI (Display Serial Interface). The connector fits into either port, but only the CSI port powers and controls the camera. Look for the `CSI` label printed on the board near the port.
+* xref:os.adoc#update-software[Update to the latest software.]
+* Try a different power supply. The Camera Module adds about 200-250mA to the power requirements of your Raspberry Pi. If your power supply is low quality, your Raspberry Pi may not be able to power the Camera module.
+* If you've checked all the above issues and your Camera Module still doesn't work as you expect, try posting on our forums for more help.
diff --git a/documentation/asciidoc/computers/camera/v4l2.adoc b/documentation/asciidoc/computers/camera/v4l2.adoc
index bcaafc10c2..7cc2ceabcc 100644
--- a/documentation/asciidoc/computers/camera/v4l2.adoc
+++ b/documentation/asciidoc/computers/camera/v4l2.adoc
@@ -1,44 +1,44 @@
-== V4L2 Drivers
+== V4L2 drivers
-V4L2 drivers provide a standard Linux interface for accessing camera and codec features. They are loaded automatically when the system is started, though in some non-standard situations you may need to xref:camera_software.adoc#if-you-do-need-to-alter-the-configuration[load camera drivers explicitly].
+V4L2 drivers provide a standard Linux interface for accessing camera and codec features. Normally, Linux loads drivers automatically during boot. But in some situations you may need to xref:camera_software.adoc#configuration[load camera drivers explicitly].
=== Device nodes when using `libcamera`
[cols="1,^3"]
|===
-| /dev/videoX | Default Action
+| /dev/videoX | Default action
-| video0
-| Unicam driver for the first CSI-2 receiver.
+| `video0`
+| Unicam driver for the first CSI-2 receiver
-| video1
-| Unicam driver for the second CSI-2 receiver.
+| `video1`
+| Unicam driver for the second CSI-2 receiver
-| video10
-| Video decode.
+| `video10`
+| Video decode
-| video11
-| Video encode.
+| `video11`
+| Video encode
-| video12
-| Simple ISP. Can perform conversion and resizing between RGB/YUV formats, and also Bayer to RGB/YUV conversion.
+| `video12`
+| Simple ISP, can perform conversion and resizing between RGB/YUV formats in addition to Bayer to RGB/YUV conversion
-| video13
-| Input to fully programmable ISP.
+| `video13`
+| Input to fully programmable ISP
-| video14
-| High resolution output from fully programmable ISP.
+| `video14`
+| High resolution output from fully programmable ISP
-| video15
-| Low result output from fully programmable ISP.
+| `video15`
+| Low resolution output from fully programmable ISP
-| video16
-| Image statistics from fully programmable ISP.
+| `video16`
+| Image statistics from fully programmable ISP
-| video19
-| HEVC Decode
+| `video19`
+| HEVC decode
|===
-=== Using the Driver
+=== Use the V4L2 drivers
-Please see the https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/v4l2.html[V4L2 documentation] for details on using this driver.
+For more information on how to use the V4L2 drivers, see the https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/v4l2.html[V4L2 documentation].
diff --git a/documentation/asciidoc/computers/camera/webcams.adoc b/documentation/asciidoc/computers/camera/webcams.adoc
new file mode 100644
index 0000000000..dbfe0c8e4c
--- /dev/null
+++ b/documentation/asciidoc/computers/camera/webcams.adoc
@@ -0,0 +1,169 @@
+== Use a USB webcam
+
+Most Raspberry Pi devices have dedicated ports for camera modules. Camera modules are high-quality, highly configurable cameras popular with Raspberry Pi users.
+
+However, for many purposes a USB webcam has everything you need to record pictures and videos from your Raspberry Pi. This section explains how to use a USB webcam with your Raspberry Pi.
+
+=== Install dependencies
+
+First, install the `fswebcam` package:
+
+[source,console]
+----
+$ sudo apt install fswebcam
+----
+
+Next, add your username to the `video` group; otherwise, you may see 'permission denied' errors:
+
+[source,console]
+----
+$ sudo usermod -a -G video <username>
+----
+
+To check that the user has been added to the group correctly, use the `groups` command.
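For example, the following sketch lists the group memberships for the current user; note that `usermod` changes only take effect in new login sessions, so log out and back in first:

```shell
# List group memberships for the current user;
# "video" should appear in the output once the change takes effect
groups
```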
+
+=== Take a photo
+
+Run the following command to take a picture with the webcam and save it to a file named `image.jpg`:
+
+[source,console]
+----
+$ fswebcam image.jpg
+----
+
+You should see output similar to the following:
+
+----
+--- Opening /dev/video0...
+Trying source module v4l2...
+/dev/video0 opened.
+No input was specified, using the first.
+Adjusting resolution from 384x288 to 352x288.
+--- Capturing frame...
+Corrupt JPEG data: 2 extraneous bytes before marker 0xd4
+Captured frame in 0.00 seconds.
+--- Processing captured image...
+Writing JPEG image to 'image.jpg'.
+----
+
+.By default, `fswebcam` uses a low resolution and adds a banner displaying a timestamp.
+image::images/webcam-image.jpg[By default, `fswebcam` uses a low resolution and adds a banner displaying a timestamp]
+
+To specify a different resolution for the captured image, use the `-r` flag, passing a width and height as two numbers separated by an `x`:
+
+[source,console]
+----
+$ fswebcam -r 1280x720 image2.jpg
+----
+
+You should see output similar to the following:
+
+----
+--- Opening /dev/video0...
+Trying source module v4l2...
+/dev/video0 opened.
+No input was specified, using the first.
+--- Capturing frame...
+Corrupt JPEG data: 1 extraneous bytes before marker 0xd5
+Captured frame in 0.00 seconds.
+--- Processing captured image...
+Writing JPEG image to 'image2.jpg'.
+----
+
+.Specify a resolution to capture a higher quality image.
+image::images/webcam-image-high-resolution.jpg[Specify a resolution to capture a higher quality image]
+
+==== Remove the banner
+
+To remove the banner from the captured image, use the `--no-banner` flag:
+
+[source,console]
+----
+$ fswebcam --no-banner image3.jpg
+----
+
+You should see output similar to the following:
+
+----
+--- Opening /dev/video0...
+Trying source module v4l2...
+/dev/video0 opened.
+No input was specified, using the first.
+--- Capturing frame...
+Corrupt JPEG data: 2 extraneous bytes before marker 0xd6
+Captured frame in 0.00 seconds.
+--- Processing captured image...
+Disabling banner.
+Writing JPEG image to 'image3.jpg'.
+----
+
+.Specify `--no-banner` to save the image without the timestamp banner.
+image::images/webcam-image-no-banner.jpg[Specify `--no-banner` to save the image without the timestamp banner]
+
+=== Automate image capture
+
+Unlike xref:camera_software.adoc#rpicam-apps[`rpicam-apps`], `fswebcam` has no built-in functionality to substitute timestamps and numbers into output image names. Such substitution is useful when capturing multiple images, since manually editing the file name every time you record an image is tedious. Instead, use a Bash script to implement this functionality yourself.
+
+Create a new file named `webcam.sh` in your home folder. Add the following example code, which saves each image to a file with a name containing the year, month, day, hour, minute, and second:
+
+[,bash]
+----
+#!/bin/bash
+
+DATE=$(date +"%Y-%m-%d_%H-%M-%S")
+
+fswebcam -r 1280x720 --no-banner "$DATE.jpg"
+----
+
+Then, make the bash script executable by running the following command:
+
+[source,console]
+----
+$ chmod +x webcam.sh
+----
+
+Run the script with the following command to capture an image and save it to a file with a timestamp for a name, similar to `2024-05-10_12-06-33.jpg`:
+
+[source,console]
+----
+$ ./webcam.sh
+----
+
+You should see output similar to the following:
+
+----
+--- Opening /dev/video0...
+Trying source module v4l2...
+/dev/video0 opened.
+No input was specified, using the first.
+--- Capturing frame...
+Corrupt JPEG data: 2 extraneous bytes before marker 0xd6
+Captured frame in 0.00 seconds.
+--- Processing captured image...
+Disabling banner.
+Writing JPEG image to '2024-05-10_12-06-33.jpg'.
+----
+
+=== Capture a time lapse
+
+Use `cron` to schedule photo capture at a given interval. With the right interval, such as once a minute, you can capture a time lapse.
+
+First, open the cron table for editing:
+
+[source,console]
+----
+$ crontab -e
+----
+
+Once you have the file open in an editor, add the following line to the schedule to take a picture every minute, replacing `<username>` with your username:
+
+[,bash]
+----
+* * * * * /home/<username>/webcam.sh 2>&1
+----
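For reference, the five fields before the command set the schedule: minute, hour, day of month, month, and day of week. As a sketch of a different schedule, the following hypothetical entry would capture every ten minutes, during working hours, on weekdays only:

```
# min   hour  day-of-month  month  day-of-week  command
*/10    9-17  *             *      1-5          /home/<username>/webcam.sh 2>&1
```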
+
+Save and exit, and you should see the following message:
+
+----
+crontab: installing new crontab
+----
diff --git a/documentation/asciidoc/computers/camera_software.adoc b/documentation/asciidoc/computers/camera_software.adoc
index 1d0581a995..a234811a7e 100644
--- a/documentation/asciidoc/computers/camera_software.adoc
+++ b/documentation/asciidoc/computers/camera_software.adoc
@@ -2,8 +2,6 @@ include::camera/camera_usage.adoc[]
include::camera/rpicam_apps_intro.adoc[]
-include::camera/rpicam_apps_getting_started.adoc[]
-
include::camera/rpicam_hello.adoc[]
include::camera/rpicam_jpeg.adoc[]
@@ -12,19 +10,27 @@ include::camera/rpicam_still.adoc[]
include::camera/rpicam_vid.adoc[]
-include::camera/rpicam_apps_libav.adoc[]
-
include::camera/rpicam_raw.adoc[]
include::camera/rpicam_detect.adoc[]
+include::camera/rpicam_configuration.adoc[]
+
+include::camera/rpicam_apps_multicam.adoc[]
+
+include::camera/rpicam_apps_packages.adoc[]
+
+include::camera/streaming.adoc[]
+
include::camera/rpicam_options_common.adoc[]
include::camera/rpicam_options_still.adoc[]
include::camera/rpicam_options_vid.adoc[]
-include::camera/libcamera_differences.adoc[]
+include::camera/rpicam_options_libav.adoc[]
+
+include::camera/rpicam_options_detect.adoc[]
include::camera/rpicam_apps_post_processing.adoc[]
@@ -34,28 +40,22 @@ include::camera/rpicam_apps_post_processing_tflite.adoc[]
include::camera/rpicam_apps_post_processing_writing.adoc[]
-include::camera/rpicam_apps_multicam.adoc[]
-
-include::camera/rpicam_apps_packages.adoc[]
-
include::camera/rpicam_apps_building.adoc[]
include::camera/rpicam_apps_writing.adoc[]
-include::camera/libcamera_python.adoc[]
-
-include::camera/libcamera_3rd_party_tuning.adoc[]
+include::camera/qt.adoc[]
-include::camera/libcamera_known_issues.adoc[]
+include::camera/libcamera_python.adoc[]
-include::camera/rpicam_apps_getting_help.adoc[]
+include::camera/webcams.adoc[]
-include::camera/timelapse.adoc[]
+include::camera/v4l2.adoc[]
-include::camera/gstreamer.adoc[]
+include::camera/csi-2-usage.adoc[]
-include::camera/qt.adoc[]
+include::camera/libcamera_differences.adoc[]
-include::camera/v4l2.adoc[]
+include::camera/troubleshooting.adoc[]
-include::camera/csi-2-usage.adoc[]
+include::camera/rpicam_apps_getting_help.adoc[]
diff --git a/documentation/asciidoc/computers/compute-module.adoc b/documentation/asciidoc/computers/compute-module.adoc
index 05c090516d..97810c8bc8 100644
--- a/documentation/asciidoc/computers/compute-module.adoc
+++ b/documentation/asciidoc/computers/compute-module.adoc
@@ -1,15 +1,13 @@
-include::compute-module/datasheet.adoc[]
-
-include::compute-module/designfiles.adoc[]
+include::compute-module/introduction.adoc[]
include::compute-module/cm-emmc-flashing.adoc[]
+include::compute-module/cm-bootloader.adoc[]
+
include::compute-module/cm-peri-sw-guide.adoc[]
include::compute-module/cmio-camera.adoc[]
include::compute-module/cmio-display.adoc[]
-
-
-
+include::compute-module/datasheet.adoc[]
diff --git a/documentation/asciidoc/computers/compute-module/cm-bootloader.adoc b/documentation/asciidoc/computers/compute-module/cm-bootloader.adoc
new file mode 100644
index 0000000000..aea936e1a3
--- /dev/null
+++ b/documentation/asciidoc/computers/compute-module/cm-bootloader.adoc
@@ -0,0 +1,55 @@
+== Compute Module EEPROM bootloader
+
+Since Compute Module 4, Compute Modules use an EEPROM bootloader. This bootloader lives in a small segment of on-board storage instead of the boot partition. As a result, it requires different procedures to update. Before using a Compute Module with an EEPROM bootloader in production, always follow these best practices:
+
+* Select a specific bootloader release. Verify that every Compute Module you use has that release. The version in the `usbboot` repo is always a recent stable release.
+* Configure the boot device by xref:raspberry-pi.adoc#raspberry-pi-bootloader-configuration[setting the `BOOT_ORDER`].
+* Enable hardware write-protection on the bootloader EEPROM to ensure that the bootloader can't be modified on inaccessible products (such as remote or embedded devices).
+
+=== Flash Compute Module bootloader EEPROM
+
+To flash the bootloader EEPROM:
+
+. Set up the hardware as you would when xref:../computers/compute-module.adoc#flash-compute-module-emmc[flashing the eMMC], but ensure `EEPROM_nWP` is _not_ pulled low.
+. Run the following command to write `recovery/pieeprom.bin` to the bootloader EEPROM:
++
+[source,console]
+----
+$ ./rpiboot -d recovery
+----
+. Once complete, `EEPROM_nWP` may be pulled low again.
+
+=== Flash storage devices other than SD cards
+
+The Linux-based https://github.com/raspberrypi/usbboot/blob/master/mass-storage-gadget/README.md[`mass-storage-gadget`] supports flashing of NVMe, eMMC and USB block devices. `mass-storage-gadget` writes devices faster than the firmware-based `rpiboot` mechanism, and also provides a UART console to the device for debugging.
+
+`usbboot` also includes a number of https://github.com/raspberrypi/usbboot/blob/master/Readme.md#compute-module-4-extensions[extensions] that enable you to interact with the EEPROM bootloader on a Compute Module.
+
+=== Update the Compute Module bootloader
+
+On Compute Modules with an EEPROM bootloader, the ROM never runs `recovery.bin` from SD/eMMC. These Compute Modules disable the `rpi-eeprom-update` service by default, because eMMC is not removable and an invalid `recovery.bin` file could prevent the system from booting.
+
+You can override this behaviour with `self-update` mode. In `self-update` mode, you can update the bootloader from USB MSD or network boot.
+
+WARNING: `self-update` mode does not update the bootloader atomically. If a power failure occurs during an EEPROM update, you could corrupt the EEPROM.
+
+=== Modify the bootloader configuration
+
+To modify the Compute Module EEPROM bootloader configuration:
+
+. Navigate to the `usbboot/recovery` directory.
+. If you require a specific bootloader release, replace `pieeprom.original.bin` with the equivalent from your bootloader release.
+. Edit the default `boot.conf` bootloader configuration file to define a xref:../computers/raspberry-pi.adoc#BOOT_ORDER[`BOOT_ORDER`]:
+ * For network boot, use `BOOT_ORDER=0xf2`.
+ * For SD/eMMC boot, use `BOOT_ORDER=0xf1`.
+ * For USB boot failing over to eMMC, use `BOOT_ORDER=0xf15`.
+ * For NVMe boot, use `BOOT_ORDER=0xf6`.
+. Run `./update-pieeprom.sh` to generate a new EEPROM image file, `pieeprom.bin`.
+. If you require EEPROM write-protection, add `eeprom_write_protect=1` to `/boot/firmware/config.txt`.
+ * Once enabled in software, you can lock hardware write-protection by pulling the `EEPROM_nWP` pin low.
+. Run the following command to write the updated `pieeprom.bin` image to EEPROM:
++
+[source,console]
+----
+$ ../rpiboot -d .
+----
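As a point of reference for step 3, a minimal `boot.conf` might look like the following. This is a sketch: only `BOOT_ORDER` comes from the steps above, and the other properties are shown for illustration only:

```
[all]
BOOT_UART=0
POWER_OFF_ON_HALT=0
# Try USB mass storage first, then fall back to eMMC.
BOOT_ORDER=0xf15
```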
diff --git a/documentation/asciidoc/computers/compute-module/cm-emmc-flashing.adoc b/documentation/asciidoc/computers/compute-module/cm-emmc-flashing.adoc
index e9bfbc33a0..664dd97c0d 100644
--- a/documentation/asciidoc/computers/compute-module/cm-emmc-flashing.adoc
+++ b/documentation/asciidoc/computers/compute-module/cm-emmc-flashing.adoc
@@ -1,144 +1,164 @@
-== Flashing the Compute Module eMMC
+[[flash-compute-module-emmc]]
+== Flash an image to a Compute Module
-[.whitepaper, title="Using the Compute Module Provisioner", subtitle="", link=https://pip.raspberrypi.com/categories/685-whitepapers-app-notes/documents/RP-003468-WP/Using-the-Compute-Module-Provisioner.pdf]
-****
-The CM Provisioner is a web application designed to make programming a large number of Raspberry Pi Compute Module (CM) devices much easier and quicker. It is simple to install and simple to use.
+TIP: To flash the same image to multiple Compute Modules, use the https://github.com/raspberrypi/rpi-sb-provisioner[Raspberry Pi Secure Boot Provisioner]. To customise an OS image to flash onto those devices, use https://github.com/RPi-Distro/pi-gen[pi-gen].
-It provides an interface to a database of kernel images that can be uploaded, along with the ability to use scripts to customise various parts of the installation during the flashing process. Label printing and firmware updating is also supported.
-****
+[[flashing-the-compute-module-emmc]]
-The Compute Module has an on-board eMMC device connected to the primary SD card interface. This guide explains how to write data to the eMMC storage using a Compute Module IO board.
+The Compute Module has an on-board eMMC device connected to the primary SD card interface. This guide explains how to flash (write) an operating system image to the eMMC storage of a single Compute Module.
-Please also read the section in the xref:compute-module.adoc#datasheets-and-schematics[Compute Module Datasheets]
+**Lite** variants of Compute Modules do not have on-board eMMC. Instead, follow the procedure to flash a storage device for other Raspberry Pi devices at xref:../computers/getting-started.adoc#installing-the-operating-system[Install an operating system].
-IMPORTANT: For mass provisioning of CM3, CM3+ and CM4 the https://github.com/raspberrypi/cmprovision[Raspberry Pi Compute Module Provisioning System] is recommended.
+=== Prerequisites
-=== Steps to Flash the eMMC
+To flash the Compute Module eMMC, you need the following:
-To flash the Compute Module eMMC, you either need a Linux system (a Raspberry Pi is recommended, or Ubuntu on a PC) or a Windows system (Windows 10 is recommended). For BCM2837 (CM3), a bug which affected the Mac has been fixed, so this will also work.
+* Another computer, referred to in this guide as the *host device*. You can use Linux (we recommend Raspberry Pi OS or Ubuntu), Windows 11, or macOS.
+* The Compute Module IO Board xref:compute-module.adoc#io-board-compatibility[that corresponds to your Compute Module model].
+* A micro USB cable, or a USB-C cable for the Compute Module 5 IO Board and later.
-NOTE: There is a bug in the BCM2835 (CM1) bootloader which returns a slightly incorrect USB packet to the host. Most USB hosts seem to ignore this benign bug and work fine; we do, however, see some USB ports that don't work due to this bug. We don't quite understand why some ports fail, as it doesn't seem to be correlated with whether they are USB2 or USB3 (we have seen both types working), but it's likely to be specific to the host controller and driver. This bug has been fixed in BCM2837.
+TIP: In some cases, USB hubs can prevent the host device from recognising the Compute Module. If your host device does not recognise the Compute Module, try connecting the Compute Module directly to the host device. For more diagnostic tips, see https://github.com/raspberrypi/usbboot?tab=readme-ov-file#troubleshooting[the usbboot troubleshooting guide].
-=== Setting up the CMIO board
+=== Set up the IO Board
-==== Compute Module 4
+To begin, physically set up your IO Board. This includes connecting the Compute Module and host device to the IO Board.
-Ensure the Compute Module is fitted correctly installed on the IO board. It should lie flat on the IO board.
+[tabs]
+======
+Compute Module 5 IO Board::
++
+To set up the Compute Module 5 IO Board:
++
+. Connect the Compute Module to the IO board. When connected, the Compute Module should lie flat.
+. Fit `nRPI_BOOT` to J2 (`disable eMMC Boot`) on the IO board jumper.
+. Connect a cable from USB-C slave port J11 on the IO board to the host device.
-* Make sure that `nRPI_BOOT` which is on J2 (`disable eMMC Boot`) on the IO board jumper is fitted
-* Use a micro USB cable to connect the micro USB slave port J11 on IO board to the host device.
-* Do not power up yet.
+Compute Module 4 IO Board::
++
+To set up the Compute Module 4 IO Board:
++
+. Connect the Compute Module to the IO board. When connected, the Compute Module should lie flat.
+. Fit `nRPI_BOOT` to J2 (`disable eMMC Boot`) on the IO board jumper.
+. Connect a cable from micro USB slave port J11 on the IO board to the host device.
-==== Compute Module 1 and 3
+Compute Module IO Board::
++
+To set up the Compute Module IO Board:
++
+. Connect the Compute Module to the IO board. When connected, the Compute Module should lie parallel to the board, with the engagement clips firmly clicked into place.
+. Set J4 (`USB SLAVE BOOT ENABLE`) to 1-2 (`USB BOOT ENABLED`).
+. Connect a cable from micro USB slave port J15 on the IO board to the host device.
+======
-Ensure the Compute Module itself is correctly installed on the IO board. It should lie parallel with the board, with the engagement clips clicked into place.
+=== Set up the host device
-* Make sure that J4 (USB SLAVE BOOT ENABLE) is set to the 'EN' position.
-* Use a micro USB cable to connect the micro USB slave port J15 on IO board to the host device.
-* Do not power up yet.
+Next, let's set up software on the host device.
-==== For Windows Users
+TIP: For a host device, we recommend a Raspberry Pi 4 or newer running 64-bit Raspberry Pi OS.
-Under Windows, an installer is available to install the required drivers and boot tool automatically. Alternatively, a user can compile and run it using Cygwin and/or install the drivers manually.
-
-==== Windows Installer
-
-For those who just want to enable the Compute Module eMMC as a mass storage device under Windows, the stand-alone installer is the recommended option. This installer has been tested on Windows 10 64-bit.
-
-Please ensure you are not writing to any USB devices whilst the installer is running.
-
-. Download and run the https://github.com/raspberrypi/usbboot/raw/master/win32/rpiboot_setup.exe[Windows installer] to install the drivers and boot tool.
-. Plug your host PC USB into the USB SLAVE port, making sure you have setup the board as described above.
-. Apply power to the board; Windows should now find the hardware and install the driver.
-. Once the driver installation is complete, run the `RPiBoot.exe` tool that was previously installed.
-. After a few seconds, the Compute Module eMMC will pop up under Windows as a disk (USB mass storage device).
-
-==== Building `rpiboot` on your host system.
-
-Instructions for building and running the latest release of `rpiboot` are documented in the https://github.com/raspberrypi/usbboot/blob/master/Readme.md#building[usbboot readme] on Github.
+[tabs]
+======
+Linux::
++
+To set up software on a Linux host device:
++
+. Run the following command to install `rpiboot` (or, alternatively, https://github.com/raspberrypi/usbboot[build `rpiboot` from source]):
++
+[source,console]
+----
+$ sudo apt install rpiboot
+----
+. Connect the IO Board to power.
+. Then, run `rpiboot`:
++
+[source,console]
+----
+$ sudo rpiboot
+----
+. After a few seconds, the Compute Module should appear as a mass storage device. Check the `/dev/` directory, likely `/dev/sda` or `/dev/sdb`, for the device. Alternatively, run `lsblk` and search for a device with a storage capacity that matches the capacity of your Compute Module.
+
+macOS::
++
+To set up software on a macOS host device:
++
+. First, https://github.com/raspberrypi/usbboot?tab=readme-ov-file#macos[build `rpiboot` from source].
+. Connect the IO Board to power.
+. Then, run the `rpiboot` executable with the following command:
++
+[source,console]
+----
+$ rpiboot -d mass-storage-gadget64
+----
+. When the command finishes running, you should see a message stating "The disk you inserted was not readable by this computer." Click **Ignore**. Your Compute Module should now appear as a mass storage device.
-==== Writing to the eMMC (Windows)
+Windows::
++
+To set up software on a Windows 11 host device:
++
+. Download the https://github.com/raspberrypi/usbboot/raw/master/win32/rpiboot_setup.exe[Windows installer] or https://github.com/raspberrypi/usbboot[build `rpiboot` from source].
+. Double-click on the installer to run it. This installs the drivers and boot tool. Do not close any driver installation windows which appear during the installation process.
+. Reboot the host device.
+. Connect the IO Board to power. Windows should discover the hardware and configure the required drivers.
+. On CM4 and later devices, select **Raspberry Pi - Mass Storage Gadget - 64-bit** from the start menu. After a few seconds, the Compute Module eMMC or NVMe will appear as USB mass storage devices. This also provides a debug console as a serial port gadget.
+. On CM3 and older devices, select **rpiboot**. Double-click on `RPiBoot.exe` to run it. After a few seconds, the Compute Module eMMC should appear as a USB mass storage device.
-After `rpiboot` completes, a new USB mass storage drive will appear in Windows. We recommend using https://www.raspberrypi.com/software/[Raspberry Pi Imager] to write images to the drive.
+======
-Make sure J4 (USB SLAVE BOOT ENABLE) / J2 (nRPI_BOOT) is set to the disabled position and/or nothing is plugged into the USB slave port. Power cycling the IO board should now result in the Compute Module booting from eMMC.
-==== Writing to the eMMC (Linux)
+=== Flash the eMMC
-After `rpiboot` completes, you will see a new device appear; this is commonly `/dev/sda` on a Raspberry Pi but it could be another location such as `/dev/sdb`, so check in `/dev/` or run `lsblk` before running `rpiboot` so you can see what changes.
+You can use xref:../computers/getting-started.adoc#raspberry-pi-imager[Raspberry Pi Imager] to flash an operating system image to a Compute Module.
-You now need to write a raw OS image (such as https://www.raspberrypi.com/software/operating-systems/#raspberry-pi-os-32-bit[Raspberry Pi OS]) to the device. Note the following command may take some time to complete, depending on the size of the image: (Change `/dev/sdX` to the appropriate device.)
+Alternatively, use `dd` to write a raw OS image (such as xref:../computers/os.adoc#introduction[Raspberry Pi OS]) to your Compute Module. Run the following command, replacing `/dev/sdX` with the path to the mass storage device representation of your Compute Module and `raw_os_image.img` with the path to your raw OS image:
-[,bash]
+[source,console]
----
-sudo dd if=raw_os_image_of_your_choice.img of=/dev/sdX bs=4MiB
+$ sudo dd if=raw_os_image.img of=/dev/sdX bs=4MiB
----
-Once the image has been written, unplug and re-plug the USB; you should see two partitions appear (for Raspberry Pi OS) in `/dev`. In total, you should see something similar to this:
+Once the image has been written, disconnect and reconnect the Compute Module. You should now see two partitions (for Raspberry Pi OS):
-[,bash]
+[source,console]
----
/dev/sdX <- Device
/dev/sdX1 <- First partition (FAT)
/dev/sdX2 <- Second partition (Linux filesystem)
----
-The `/dev/sdX1` and `/dev/sdX2` partitions can now be mounted normally.
+You can mount the `/dev/sdX1` and `/dev/sdX2` partitions normally.
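Before booting, you can confirm that the write succeeded by comparing checksums of the image and the data read back from the device. The sketch below demonstrates the technique with a temporary file standing in for `/dev/sdX` (both paths are placeholders, not part of the original instructions):

```shell
# Verify a raw write by hashing the source image and the same number of
# bytes read back from the target. A temp file stands in for /dev/sdX here.
IMG=$(mktemp); DEV=$(mktemp)
head -c 1048576 /dev/urandom > "$IMG"       # stand-in for raw_os_image.img
dd if="$IMG" of="$DEV" bs=4MiB status=none  # same dd invocation as above
BYTES=$(stat -c%s "$IMG")
SRC=$(sha256sum < "$IMG" | awk '{print $1}')
DST=$(head -c "$BYTES" "$DEV" | sha256sum | awk '{print $1}')
[ "$SRC" = "$DST" ] && echo "write verified"
rm -f "$IMG" "$DEV"
```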
-Make sure J4 (USB SLAVE BOOT ENABLE) / J2 (nRPI_BOOT) is set to the disabled position and/or nothing is plugged into the USB slave port. Power cycling the IO board should now result in the Compute Module booting from eMMC.
+=== Boot from eMMC
-[[cm4bootloader]]
-=== Compute Module 4 Bootloader
+[tabs]
+======
+Compute Module 5 IO Board::
++
+Disconnect `nRPI_BOOT` from J2 (`disable eMMC Boot`) on the IO board jumper.
-The default bootloader configuration on CM4 is designed to support bringup and development on a https://www.raspberrypi.com/products/compute-module-4-io-board/[Compute Module 4 IO board] and the software version flashed at manufacture may be older than the latest release. For final products please consider:-
+Compute Module 4 IO Board::
++
+Disconnect `nRPI_BOOT` from J2 (`disable eMMC Boot`) on the IO board jumper.
-* Selecting and verifying a specific bootloader release. The version in the `usbboot` repo is always a recent stable release.
-* Configuring the boot device (e.g. network boot). See `BOOT_ORDER` section in the xref:raspberry-pi.adoc#raspberry-pi-bootloader-configuration[bootloader configuration] guide.
-* Enabling hardware write protection on the bootloader EEPROM to ensure that the bootloader can't be modified on remote/inaccessible products.
+Compute Module IO Board::
++
+Set J4 (`USB SLAVE BOOT ENABLE`) to 2-3 (`USB BOOT DISABLED`).
+======
-N.B. The Compute Module 4 ROM never runs `recovery.bin` from SD/EMMC and the `rpi-eeprom-update` service is not enabled by default. This is necessary because the EMMC is not removable and an invalid `recovery.bin` file would prevent the system from booting. This can be overridden and used with `self-update` mode where the bootloader can be updated from USB MSD or Network boot. However, `self-update` mode is not an atomic update and therefore not safe in the event of a power failure whilst the EEPROM was being updated.
+==== Boot
-==== Flashing NVMe / other storage devices.
-The new Linux-based https://github.com/raspberrypi/usbboot/blob/master/mass-storage-gadget/README.md[mass-storage gadget] supports flashing of NVMe, EMMC and USB block devices.
-This is normally faster than using the `rpiboot` firmware driver and also provides a UART console to the device for easier debug.
+Disconnect the USB slave port. Power-cycle the IO board to boot the Compute Module from the new image you just wrote to eMMC.
-See also: https://github.com/raspberrypi/usbboot/blob/master/Readme.md#compute-module-4-extensions[CM4 rpiboot extensions]
+=== Known issues
-==== Modifying the bootloader configuration
-
-To modify the CM4 bootloader configuration:-
-
-* cd `usbboot/recovery`
-* Replace `pieeprom.original.bin` if a specific bootloader release is required.
-* Edit the default `boot.conf` bootloader configuration file. Typically, at least the BOOT_ORDER must be updated:-
- ** For network boot `BOOT_ORDER=0xf2`
- ** For SD/EMMC boot `BOOT_ORDER=0xf1`
- ** For USB boot failing over to EMMC `BOOT_ORDER=0xf15`
-* Run `./update-pieeprom.sh` to update the EEPROM image `pieeprom.bin` image file.
-* If EEPROM write protection is required then edit `config.txt` and add `eeprom_write_protect=1`. Hardware write-protection must be enabled via software and then locked by pulling the `EEPROM_nWP` pin low.
-* Run `../rpiboot -d .` to update the bootloader using the updated EEPROM image `pieeprom.bin`
-
-The pieeprom.bin file is now ready to be flashed to the Compute Module 4.
-
-==== Flashing the bootloader EEPROM - Compute Module 4
-
-To flash the bootloader EEPROM follow the same hardware setup as for flashing the EMMC but also ensure EEPROM_nWP is NOT pulled low. Once complete `EEPROM_nWP` may be pulled low again.
-
-[,bash]
+* A small percentage of CM3 devices may experience problems booting. We have traced these back to the method used to create the FAT32 partition; we believe the problem is due to a difference in timing between the CPU and eMMC. If you have trouble booting your CM3, create the partitions manually with the following commands:
++
+[source,console]
----
-# Writes recovery/pieeprom.bin to the bootloader EEPROM.
-./rpiboot -d recovery
-----
-
-=== Troubleshooting
-
-For a small percentage of Raspberry Pi Compute Module 3s, booting problems have been reported. We have traced these back to the method used to create the FAT32 partition; we believe the problem is due to a difference in timing between the BCM2835/6/7 and the newer eMMC devices. The following method of creating the partition is a reliable solution in our hands.
-
-[,bash]
-----
-sudo parted /dev/
+$ sudo parted /dev/<device>
(parted) mkpart primary fat32 4MiB 64MiB
(parted) q
-sudo mkfs.vfat -F32 /dev/
-sudo cp -r /*
+$ sudo mkfs.vfat -F32 /dev/<partition>
+$ sudo cp -r <source_files>/* <mount_point>
----
+
+* The CM1 bootloader returns a slightly incorrect USB packet to the host. Most USB hosts ignore it, but some USB ports don't work because of this bug. This bug was fixed in CM3.
diff --git a/documentation/asciidoc/computers/compute-module/cm-peri-sw-guide.adoc b/documentation/asciidoc/computers/compute-module/cm-peri-sw-guide.adoc
index 04af89a0d6..cb1beac887 100644
--- a/documentation/asciidoc/computers/compute-module/cm-peri-sw-guide.adoc
+++ b/documentation/asciidoc/computers/compute-module/cm-peri-sw-guide.adoc
@@ -1,48 +1,47 @@
-== Attaching and Enabling Peripherals
+== Wire peripherals
-NOTE: Unless explicitly stated otherwise, these instructions will work identically on Compute Module 1 and Compute Module 3 and their CMIO board(s).
+This guide helps developers wire up peripherals to the Compute Module pins, and explains how to enable these peripherals in software.
-This guide is designed to help developers using the Compute Module 1 (and Compute Module 3) get to grips with how to wire up peripherals to the Compute Module pins, and how to make changes to the software to enable these peripherals to work correctly.
+Most of the pins of the SoC, including the GPIO, two CSI camera interfaces, two DSI display interfaces, and HDMI are available for wiring. You can usually leave unused pins disconnected.
-The Compute Module 1 (CM1) and Compute Module 3 (CM3) contain the Raspberry Pi BCM2835 (or BCM2837 for CM3) system on a chip (SoC) or 'processor', memory, and eMMC. The eMMC is similar to an SD card but is soldered onto the board. Unlike SD cards, the eMMC is specifically designed to be used as a disk and has extra features that make it more reliable in this use case. Most of the pins of the SoC (GPIO, two CSI camera interfaces, two DSI display interfaces, HDMI etc) are freely available and can be wired up as the user sees fit (or, if unused, can usually be left unconnected). The Compute Module is a DDR2 SODIMM form-factor-compatible module, so any DDR2 SODIMM socket should be able to be used
+Compute Modules that come in the DDR2 SODIMM form factor are physically compatible with any DDR2 SODIMM socket. However, the pinout is **not** the same as SODIMM memory modules.
-NOTE: The pinout is NOT the same as an actual SODIMM memory module.
+To use a Compute Module, a user must design a motherboard that:
-To use the Compute Module, a user needs to design a (relatively simple) 'motherboard' which can provide power to the Compute Module (3.3V and 1.8V at minimum), and which connects the pins to the required peripherals for the user's application.
+* provides power to the Compute Module (3.3V and 1.8V at minimum)
+* connects the pins to the required peripherals for the user's application
-Raspberry Pi provides a minimal motherboard for the Compute Module (called the Compute Module IO Board, or CMIO Board) which powers the module, brings out the GPIO to pin headers, and brings the camera and display interfaces out to FFC connectors. It also provides HDMI, USB, and an 'ACT' LED, as well as the ability to program the eMMC of a module via USB from a PC or Raspberry Pi.
+This guide first explains the boot process and how Device Tree describes attached hardware.
-This guide first explains the boot process and how Device Tree is used to describe attached hardware; these are essential things to understand when designing with the Compute Module. It then provides a worked example of attaching an I2C and an SPI peripheral to a CMIO (or CMIO V3 for CM3) Board and creating the Device Tree files necessary to make both peripherals work under Linux, starting from a vanilla Raspberry Pi OS image.
+Then, we'll explain how to attach an I2C and an SPI peripheral to an IO Board. Finally, we'll create the Device Tree files necessary to use both peripherals with Raspberry Pi OS.
=== BCM283x GPIOs
-BCM283x has three banks of General-Purpose Input/Output (GPIO) pins: 28 pins on Bank 0, 18 pins on Bank 1, and 8 pins on Bank 2, making 54 pins in total. These pins can be used as true GPIO pins, i.e. software can set them as inputs or outputs, read and/or set state, and use them as interrupts. They also can be set to 'alternate functions' such as I2C, SPI, I2S, UART, SD card, and others.
+BCM283x has three banks of general-purpose input/output (GPIO) pins: 28 pins on Bank 0, 18 pins on Bank 1, and 8 pins on Bank 2, for a total of 54 pins. These pins can be used as true GPIO pins: software can set them as inputs or outputs, read and/or set state, and use them as interrupts. They can also run alternate functions such as I2C, SPI, I2S, UART, SD card, and others.
-On a Compute Module, both Bank 0 and Bank 1 are free to use. Bank 2 is used for eMMC and HDMI hot plug detect and ACT LED / USB boot control.
+You can use Bank 0 or Bank 1 on any Compute Module. Don't use Bank 2: it is used for eMMC, HDMI hot plug detect, and ACT LED/USB boot control.
-It is useful on a running system to look at the state of each of the GPIO pins (what function they are set to, and the voltage level at the pin) so that you can see if the system is set up as expected. This is particularly helpful if you want to see if a Device Tree is working as expected, or to get a look at the pin states during hardware debug.
+Use `pinctrl` to check the voltage and function of the GPIO pins to see if your Device Tree is working as expected.
-Raspberry Pi provides the `pinctrl` package which is a tool for hacking and debugging GPIO.
+=== BCM283x boot process
-=== BCM283x Boot Process
+BCM283x devices have a VideoCore GPU and Arm CPU cores. The GPU consists of a DSP processor and hardware accelerators for imaging, video encode and decode, 3D graphics, and image compositing.
-BCM283x devices consist of a VideoCore GPU and ARM CPU cores. The GPU is in fact a system consisting of a DSP processor and hardware accelerators for imaging, video encode and decode, 3D graphics, and image compositing.
+In BCM283x devices, the DSP core in the GPU boots first. It handles setup before booting up the main Arm processors.
-In BCM283x devices, it is the DSP core in the GPU that boots first. It is responsible for general setup and housekeeping before booting up the main ARM processor(s).
+Raspberry Pi BCM283x devices have a three-stage boot process:
-The BCM283x devices as used on Raspberry Pi and Compute Module boards have a three-stage boot process:
-
-. The GPU DSP comes out of reset and executes code from a small internal ROM (the boot ROM). The sole purpose of this code is to load a second stage boot loader via one of the external interfaces. On a Raspberry Pi or Compute Module, this code first looks for a second stage boot loader on the SD card (eMMC); it expects this to be called `bootcode.bin` and to be on the first partition (which must be FAT32). If no SD card is found or `bootcode.bin` is not found, the Boot ROM sits and waits in 'USB boot' mode, waiting for a host to give it a second stage boot loader via the USB interface.
-. The second stage boot loader (`bootcode.bin` on the sdcard or `usbbootcode.bin` for usb boot) is responsible for setting up the LPDDR2 SDRAM interface and various other critical system functions and then loading and executing the main GPU firmware (called `start.elf`, again on the primary SD card partition).
-. `start.elf` takes over and is responsible for further system setup and booting up the ARM processor subsystem, and contains the firmware that runs on the various parts of the GPU. It first reads `dt-blob.bin` to determine initial GPIO pin states and GPU-specific interfaces and clocks, then parses `config.txt`. It then loads an ARM device tree file (e.g. `bcm2708-rpi-cm.dtb` for a Compute Module 1) and any device tree overlays specified in `config.txt` before starting the ARM subsystem and passing the device tree data to the booting Linux kernel.
+* The GPU DSP comes out of reset and executes code from the small internal boot ROM. The sole purpose of this code is to load a second-stage boot loader via one of the external interfaces. It first looks for a file named `bootcode.bin` on the boot partition of the boot device. If it finds no boot device or no `bootcode.bin`, the boot ROM waits in USB boot mode for a host to provide a second-stage boot loader (`usbbootcode.bin`).
+* The second-stage boot loader is responsible for setting up the LPDDR2 SDRAM interface and other critical system functions. Once set up, the second-stage boot loader loads and executes the main GPU firmware (`start.elf`).
+* `start.elf` handles additional system setup and boots up the Arm processor subsystem. It contains the GPU firmware. The GPU firmware first reads `dt-blob.bin` to determine initial GPIO pin states and GPU-specific interfaces and clocks, then parses `config.txt`. It then loads a model-specific Arm device tree file and any Device Tree overlays specified in `config.txt` before starting the Arm subsystem and passing the Device Tree data to the booting Linux kernel.
=== Device Tree
-http://www.devicetree.org/[Device Tree] is a special way of encoding all the information about the hardware attached to a system (and consequently required drivers).
+xref:configuration.adoc#device-trees-overlays-and-parameters[Linux Device Tree for Raspberry Pi] encodes information about hardware attached to a system as well as the drivers used to communicate with that hardware.
-On a Raspberry Pi or Compute Module there are several files in the first FAT partition of the SD/eMMC that are binary 'Device Tree' files. These binary files (usually with extension `.dtb`) are compiled from human-readable text descriptions (usually files with extension `.dts`) by the Device Tree compiler.
+The boot partition contains several binary Device Tree (`.dtb`) files. The Device Tree compiler creates these binary files using human-readable Device Tree descriptions (`.dts`).
-On a standard Raspberry Pi OS image in the first (FAT) partition you will find two different types of device tree files, one is used by the GPU only and the rest are standard ARM device tree files for each of the BCM283x based Raspberry Pi products:
+The boot partition contains two different types of Device Tree files. One is used by the GPU only; the rest are standard Arm Device Tree files for each of the BCM283x-based Raspberry Pi products:
* `dt-blob.bin` (used by the GPU)
* `bcm2708-rpi-b.dtb` (Used for Raspberry Pi 1 Models A and B)
@@ -52,180 +51,185 @@ On a standard Raspberry Pi OS image in the first (FAT) partition you will find t
* `bcm2708-rpi-cm.dtb` (Used for Raspberry Pi Compute Module 1)
* `bcm2710-rpi-cm3.dtb` (Used for Raspberry Pi Compute Module 3)
-NOTE: `dt-blob.bin` by default does not exist as there is a 'default' version compiled into `start.elf`, but for Compute Module projects it will often be necessary to provide a `dt-blob.bin` (which overrides the default built-in file).
+During boot, the user can specify a specific Arm Device Tree to use via the `device_tree` parameter in `config.txt`. For example, the line `device_tree=mydt.dtb` in `config.txt` specifies an Arm Device Tree in a file named `mydt.dtb`.
+
+You can create a full Device Tree for a Compute Module product, but we recommend using **overlays** instead. Overlays add descriptions of non-board-specific hardware to the base Device Tree. This includes GPIO pins used and their function, as well as the devices attached, so that the correct drivers can be loaded. The bootloader merges overlays with the base Device Tree before passing the Device Tree to the Linux kernel. Occasionally the base Device Tree changes, usually in a way that will not break overlays.
+
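The overlay merge can be illustrated with a simplified sketch. The following Python snippet uses plain dictionaries to stand in for Device Tree nodes; real overlays are applied per fragment against labelled target nodes, which this illustration omits:

```python
# Simplified illustration of overlay merging: overlay properties override
# or extend the base tree, node by node. This sketch shows only the
# recursive merge, not fragment/label resolution.
def merge(base, overlay):
    merged = dict(base)
    for key, value in overlay.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge(merged[key], value)  # recurse into child nodes
        else:
            merged[key] = value                      # overlay property wins
    return merged

# A disabled SPI controller in the base tree, enabled by an overlay that
# also adds a child device (names here are illustrative):
base = {"spi@7e204000": {"status": "disabled"}}
overlay = {"spi@7e204000": {"status": "okay",
                            "ethernet@0": {"compatible": "microchip,enc28j60"}}}
print(merge(base, overlay)["spi@7e204000"]["status"])  # → okay
```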
+Use the `dtoverlay` parameter in `config.txt` to load Device Tree overlays. Raspberry Pi OS assumes that all overlays are located in the `/overlays` directory and use the suffix `-overlay.dtb`. For example, the line `dtoverlay=myoverlay` loads the overlay `/overlays/myoverlay-overlay.dtb`.
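Under that naming convention, the mapping from a `dtoverlay=` line to the file the firmware loads can be sketched as follows (the `overlay_path` helper is a hypothetical illustration, not firmware code):

```python
# Resolve a config.txt dtoverlay line to an overlay file path, following
# the /overlays directory and "-overlay.dtb" suffix convention above.
def overlay_path(config_line):
    prefix = "dtoverlay="
    if not config_line.startswith(prefix):
        raise ValueError("not a dtoverlay line")
    name = config_line[len(prefix):].strip()
    return f"/overlays/{name}-overlay.dtb"

print(overlay_path("dtoverlay=myoverlay"))  # → /overlays/myoverlay-overlay.dtb
```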
-NOTE: `dt-blob.bin` is in compiled device tree format, but is only read by the GPU firmware to set up functions exclusive to the GPU - see below.
+To wire peripherals to a Compute Module, describe all hardware attached to the Bank 0 and Bank 1 GPIOs in an overlay. This allows you to use standard Raspberry Pi OS images, since the overlay is merged into the standard base Device Tree. Alternatively, you can define a custom Device Tree for your application, but you won't be able to use standard Raspberry Pi OS images. Instead, you must create a modified Raspberry Pi OS image that includes your custom Device Tree for every OS update you wish to distribute. If the base Device Tree changes, you might need to update your customised Device Tree.
-* A guide to xref:configuration.adoc#change-the-default-pin-configuration[creating `dt-blob.bin`].
-* A guide to the xref:configuration.adoc#device-trees-overlays-and-parameters[Linux Device Tree for Raspberry Pi].
+=== `dt-blob.bin`
-During boot, the user can specify a specific ARM device tree to use via the `device_tree` parameter in `config.txt`, for example adding the line `device_tree=mydt.dtb` to `config.txt` where `mydt.dtb` is the dtb file to load instead of one of the standard ARM dtb files. While a user can create a full device tree for their Compute Module product, the recommended way to add hardware is to use overlays (see next section).
+When `start.elf` runs, it first reads `dt-blob.bin`. This is a special form of Device Tree blob which tells the GPU how to set up the GPIO pin states.
-In addition to loading an ARM dtb, `start.elf` supports loading additional Device Tree 'overlays' via the `dtoverlay` parameter in `config.txt`, for example adding as many `dtoverlay=myoverlay` lines as required as overlays to `config.txt`, noting that overlays live in `/overlays` and are suffixed `-overlay.dtb` e.g. `/overlays/myoverlay-overlay.dtb`. Overlays are merged with the base dtb file before the data is passed to the Linux kernel when it starts.
+`dt-blob.bin` contains information about GPIOs and peripherals controlled by the GPU rather than by Linux running on the Arm cores. For example, the GPU manages Camera Modules: it needs exclusive access to an I2C interface and a couple of control pins to talk to a Camera Module.
-Overlays are used to add data to the base dtb that (nominally) describes non-board-specific hardware. This includes GPIO pins used and their function, as well as the device(s) attached, so that the correct drivers can be loaded. The convention is that on a Raspberry Pi, all hardware attached to the Bank0 GPIOs (the GPIO header) should be described using an overlay. On a Compute Module all hardware attached to the Bank0 and Bank1 GPIOs should be described in an overlay file. You don't have to follow these conventions: you can roll all the information into one single dtb file, as previously described, replacing `bcm2708-rpi-cm.dtb`. However, following the conventions means that you can use a 'standard' Raspberry Pi OS release, with its standard base dtb and all the product-specific information contained in a separate overlay. Occasionally the base dtb might change - usually in a way that will not break overlays - which is why using an overlay is suggested.
+On most Raspberry Pi models, I2C0 is reserved for exclusive GPU use. `dt-blob.bin` defines the GPIO pins used for I2C0.
-=== dt-blob.bin
+By default, `dt-blob.bin` does not exist. Instead, `start.elf` includes a built-in version of the file. Many Compute Module projects provide a custom `dt-blob.bin` which overrides the default built-in file.
-When `start.elf` runs, it first reads something called `dt-blob.bin`. This is a special form of Device Tree blob which tells the GPU how to (initially) set up the GPIO pin states, and also any information about GPIOs/peripherals that are controlled (owned) by the GPU, rather than being used via Linux on the ARM. For example, the Raspberry Pi Camera peripheral is managed by the GPU, and the GPU needs exclusive access to an I2C interface to talk to it, as well as a couple of control pins. I2C0 on most Raspberry Pi Boards and Compute Modules is nominally reserved for exclusive GPU use. The information on which GPIO pins the GPU should use for I2C0, and to control the camera functions, comes from `dt-blob.bin`.
+`dt-blob.bin` specifies:
-NOTE: The `start.elf` firmware has a xref:configuration.adoc#change-the-default-pin-configuration['built-in' default] `dt-blob.bin` which is used if no `dt-blob.bin` is found on the root of the first FAT partition. Most Compute Module projects will want to provide their own custom `dt-blob.bin`. Note that `dt-blob.bin` specifies which pin is for HDMI hot plug detect, although this should never change on Compute Module. It can also be used to set up a GPIO as a GPCLK output, and specify an ACT LED that the GPU can use while booting. Other functions may be added in future.
+* the pin used for HDMI hot plug detect
+* GPIO pins used as a GPCLK output
+* an ACT LED that the GPU can use while booting
-https://datasheets.raspberrypi.com/cm/minimal-cm-dt-blob.dts[minimal-cm-dt-blob.dts] is an example `.dts` device tree file that sets up the HDMI hot plug detect and ACT LED and sets all other GPIOs to be inputs with default pulls.
+https://datasheets.raspberrypi.com/cm/minimal-cm-dt-blob.dts[`minimal-cm-dt-blob.dts`] is an example `.dts` Device Tree file. It sets up HDMI hot plug detection and an ACT LED, and sets all other GPIOs as inputs with default pulls.
-To compile the `minimal-cm-dt-blob.dts` to `dt-blob.bin` use the Device Tree Compiler `dtc`:
+To compile `minimal-cm-dt-blob.dts` to `dt-blob.bin`, use the xref:configuration.adoc#device-trees-overlays-and-parameters[Device Tree compiler] `dtc`.
+To install `dtc` on a Raspberry Pi, run the following command:
+[source,console]
----
-dtc -I dts -O dtb -o dt-blob.bin minimal-cm-dt-blob.dts
+$ sudo apt install device-tree-compiler
----
-=== ARM Linux Device Tree
+Then, run the following command to compile `minimal-cm-dt-blob.dts` into `dt-blob.bin`:
-After `start.elf` has read `dt-blob.bin` and set up the initial pin states and clocks, it reads xref:config_txt.adoc[`config.txt`] which contains many other options for system setup.
+[source,console]
+----
+$ dtc -I dts -O dtb -o dt-blob.bin minimal-cm-dt-blob.dts
+----
-After reading `config.txt` another device tree file specific to the board the hardware is running on is read: this is `bcm2708-rpi-cm.dtb` for a Compute Module 1, or `bcm2710-rpi-cm.dtb` for Compute Module 3. This file is a standard ARM Linux device tree file, which details how hardware is attached to the processor: what peripheral devices exist in the SoC and where, which GPIOs are used, what functions those GPIOs have, and what physical devices are connected. This file will set up the GPIOs appropriately, overwriting the pin state set up in `dt-blob.bin` if it is different. It will also try to load driver(s) for the specific device(s).
+For more information, see our xref:configuration.adoc#change-the-default-pin-configuration[guide to creating `dt-blob.bin`].
-Although the `bcm2708-rpi-cm.dtb` file can be used to load all attached devices, the recommendation for Compute Module users is to leave this file alone. Instead, use the one supplied in the standard Raspberry Pi OS software image, and add devices using a custom 'overlay' file as previously described. The `bcm2708-rpi-cm.dtb` file contains (disabled) entries for the various peripherals (I2C, SPI, I2S etc.) and no GPIO pin definitions, apart from the eMMC/SD Card peripheral which has GPIO defs and is enabled, because it is always on the same pins. The idea is that the separate overlay file will enable the required interfaces, describe the pins used, and also describe the required drivers. The `start.elf` firmware will read and merge the `bcm2708-rpi-cm.dtb` with the overlay data before giving the merged device tree to the Linux kernel as it boots up.
+=== Arm Linux Device Tree
-=== Device Tree Source and Compilation
+After `start.elf` reads `dt-blob.bin` and sets up the initial pin states and clocks, it reads xref:config_txt.adoc[`config.txt`], which contains many other options for system setup.
-The Raspberry Pi OS image provides compiled dtb files, but where are the source dts files? They live in the Raspberry Pi Linux kernel branch, on https://github.com/raspberrypi/linux[GitHub]. Look in the `arch/arm/boot/dts` folder.
+After reading `config.txt`, `start.elf` reads a model-specific Device Tree file. For instance, Compute Module 3 uses `bcm2710-rpi-cm3.dtb`. This file is a standard Arm Linux Device Tree file that details hardware attached to the processor. It enumerates:
-Some default overlay dts files live in `arch/arm/boot/dts/overlays`. Corresponding overlays for standard hardware that can be attached to a *Raspberry Pi* in the Raspberry Pi OS image are on the FAT partition in the `/overlays` directory. Note that these assume certain pins on BANK0, as they are for use on a Raspberry Pi. In general, use the source of these standard overlays as a guide to creating your own, unless you are using the same GPIO pins as you would be using if the hardware was plugged into the GPIO header of a Raspberry Pi.
+* what and where peripheral devices exist
+* which GPIOs are used
+* what functions those GPIOs have
+* what physical devices are connected
-Compiling these dts files to dtb files requires an up-to-date version of the xref:configuration.adoc#device-trees-overlays-and-parameters[Device Tree compiler] `dtc`. The way to install an appropriate version on Raspberry Pi is to run:
+This file sets up the GPIOs by overwriting the pin state in `dt-blob.bin` if it is different. It will also try to load drivers for the specific devices.
-----
-sudo apt install device-tree-compiler
-----
+The model-specific Device Tree file contains disabled entries for the various peripherals (I2C, SPI, I2S, and so on). It contains no GPIO pin definitions, apart from the eMMC/SD card peripheral, which is enabled and has GPIO definitions because it always uses the same pins.
-If you are building your own kernel then the build host also gets a version in `scripts/dtc`. You can arrange for your overlays to be built automatically by adding them to `Makefile` in `arch/arm/boot/dts/overlays`, and using the 'dtbs' make target.
+=== Device Tree source and compilation
-=== Device Tree Debugging
+The Raspberry Pi OS image provides compiled `dtb` files, but the source `dts` files live in the https://github.com/raspberrypi/linux/tree/rpi-6.6.y/arch/arm/boot/dts/broadcom[Raspberry Pi Linux kernel branch]. Look for `rpi` in the file names.
-When the Linux kernel is booted on the ARM core(s), the GPU provides it with a fully assembled device tree, assembled from the base dts and any overlays. This full tree is available via the Linux proc interface in `/proc/device-tree`, where nodes become directories and properties become files.
+Default overlay `dts` files live at https://github.com/raspberrypi/linux/tree/rpi-6.6.y/arch/arm/boot/dts/overlays[`arch/arm/boot/dts/overlays`]. These overlay files are a good starting point for creating your own overlays. To compile these `dts` files to `dtb` files, use the xref:configuration.adoc#device-trees-overlays-and-parameters[Device Tree compiler] `dtc`.
-You can use `dtc` to write this out as a human readable dts file for debugging. You can see the fully assembled device tree, which is often very useful:
+When building your own kernel, the build provides its own version of the Device Tree compiler in `scripts/dtc`. To build your overlays automatically, add them to the `dtbs` make target in `arch/arm/boot/dts/overlays/Makefile`.
-----
-dtc -I fs -O dts -o proc-dt.dts /proc/device-tree
-----
+=== Device Tree debugging
-As previously explained in the GPIO section, it is also very useful to use `pinctrl` to look at the setup of the GPIO pins to check that they are as you expect. If something seems to be going awry, useful information can also be found by dumping the GPU log messages:
+When booting the Linux kernel, the GPU provides a fully assembled Device Tree created using the base `dts` and any overlays. This full tree is available via the Linux `proc` interface in `/proc/device-tree`. Nodes become directories and properties become files.
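Because nodes are directories and properties are files, you can explore the tree with ordinary file tools. The following Python sketch builds a small mock tree in a temporary directory, since `/proc/device-tree` only exists on a running Raspberry Pi; the node name is illustrative, and string properties are NUL-terminated bytes:

```python
import os
import tempfile

# Build a tiny mock of /proc/device-tree: nodes are directories and
# properties are files.
root = tempfile.mkdtemp()
node = os.path.join(root, "soc", "i2c@7e804000")
os.makedirs(node)
with open(os.path.join(node, "status"), "wb") as f:
    f.write(b"okay\x00")  # string properties are NUL-terminated

# Read a property the same way you would on a real system.
with open(os.path.join(node, "status"), "rb") as f:
    status = f.read().rstrip(b"\x00").decode()
print(status)  # → okay
```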
+You can use `dtc` to write this out as a human-readable `dts` file for debugging. To see the fully assembled Device Tree, run the following command:
+
+[source,console]
----
-sudo vclog --msg
+$ dtc -I fs -O dts -o proc-dt.dts /proc/device-tree
----
-You can include more diagnostics in the output by adding `dtdebug=1` to `config.txt`.
-
-=== Examples
+`pinctrl` provides the status of the GPIO pins. If something seems to be going awry, try dumping the GPU log messages:
-NOTE: Please use the https://forums.raspberrypi.com/viewforum.php?f=107[Device Tree subforum] on the Raspberry Pi forums to ask Device Tree related questions.
+[source,console]
+----
+$ sudo vclog --msg
+----
-For these simple examples I used a CMIO board with peripherals attached via jumper wires.
+TIP: To include even more diagnostics in the output, add `dtdebug=1` to `config.txt`.
-For each of the examples we assume a CM1+CMIO or CM3+CMIO3 board with a clean install of the latest Raspberry Pi OS Lite version on the Compute Module.
+Use the https://forums.raspberrypi.com/viewforum.php?f=107[Device Tree Raspberry Pi forum] to ask Device Tree-related questions or report an issue.
-The examples here require internet connectivity, so a USB hub plus keyboard plus wireless LAN or Ethernet dongle plugged into the CMIO USB port is recommended.
+=== Examples
-Please post any issues, bugs or questions on the Raspberry Pi https://forums.raspberrypi.com/viewforum.php?f=107[Device Tree subforum].
+The following examples use an IO Board with peripherals attached via jumper wires. We assume a CM1+CMIO or CM3+CMIO3, running a clean install of Raspberry Pi OS Lite. The examples here require internet connectivity, so we recommend a USB hub, keyboard, and wireless LAN or Ethernet dongle plugged into the IO Board USB port.
-[discrete]
-=== Example 1 - attaching an I2C RTC to BANK1 pins
+==== Attach an I2C RTC to Bank 1 pins
-In this simple example we wire an NXP PCF8523 real time clock (RTC) to the CMIO board BANK1 GPIO pins: 3V3, GND, I2C1_SDA on GPIO44 and I2C1_SCL on GPIO45.
+In this example, we wire an NXP PCF8523 real time clock (RTC) to the IO Board Bank 1 GPIO pins: 3V3, GND, I2C1_SDA on GPIO44 and I2C1_SCL on GPIO45.
-Download https://datasheets.raspberrypi.com/cm/minimal-cm-dt-blob.dts[minimal-cm-dt-blob.dts] and copy it to the SD card FAT partition, located in `/boot/firmware/` when the Compute Module has booted.
+Download https://datasheets.raspberrypi.com/cm/minimal-cm-dt-blob.dts[`minimal-cm-dt-blob.dts`] and copy it to the boot partition in `/boot/firmware/`.
Edit `minimal-cm-dt-blob.dts` and change the pin states of GPIO44 and 45 to be I2C1 with pull-ups:
+[source,console]
----
-sudo nano /boot/firmware/minimal-cm-dt-blob.dts
+$ sudo nano /boot/firmware/minimal-cm-dt-blob.dts
----
-Change lines:
+Replace the following lines:
+[source,kotlin]
----
pin@p44 { function = "input"; termination = "pull_down"; }; // DEFAULT STATE WAS INPUT NO PULL
pin@p45 { function = "input"; termination = "pull_down"; }; // DEFAULT STATE WAS INPUT NO PULL
----
-to:
+with the following pull-up definitions:
+[source,kotlin]
----
pin@p44 { function = "i2c1"; termination = "pull_up"; }; // SDA1
pin@p45 { function = "i2c1"; termination = "pull_up"; }; // SCL1
----
-NOTE: We could use this `dt-blob.dts` with no changes The Linux Device Tree will (re)configure these pins during Linux kernel boot when the specific drivers are loaded, so it is up to you whether you modify `dt-blob.dts`. I like to configure `dt-blob.dts` to what I expect the final GPIOs to be, as they are then set to their final state as soon as possible during the GPU boot stage, but this is not strictly necessary. You may find that in some cases you do need pins to be configured at GPU boot time, so they are in a specific state when Linux drivers are loaded. For example, a reset line may need to be held in the correct orientation.
+We could use this `dt-blob.dts` with no changes, because the Linux Device Tree re-configures these pins during Linux kernel boot when the specific drivers load. However, if you configure `dt-blob.dts`, the GPIOs reach their final state as soon as possible during the GPU boot stage. In some cases, pins must be configured at GPU boot time so they are in a specific state when Linux drivers are loaded. For example, a reset line may need to be held in the correct orientation.
-Compile `dt-blob.bin`:
+Run the following command to compile `dt-blob.bin`:
+[source,console]
----
-sudo dtc -I dts -O dtb -o /boot/firmware/dt-blob.bin /boot/firmware/minimal-cm-dt-blob.dts
+$ sudo dtc -I dts -O dtb -o /boot/firmware/dt-blob.bin /boot/firmware/minimal-cm-dt-blob.dts
----
-Grab https://datasheets.raspberrypi.com/cm/example1-overlay.dts[example1-overlay.dts], put it in `/boot/firmware/`, then compile it:
+Download https://datasheets.raspberrypi.com/cm/example1-overlay.dts[`example1-overlay.dts`], copy it to the boot partition in `/boot/firmware/`, then compile it with the following command:
+[source,console]
----
-sudo dtc -@ -I dts -O dtb -o /boot/firmware/overlays/example1.dtbo /boot/firmware/example1-overlay.dts
+$ sudo dtc -@ -I dts -O dtb -o /boot/firmware/overlays/example1.dtbo /boot/firmware/example1-overlay.dts
----
-NOTE: The '-@' in the `dtc` command line. This is necessary if you are compiling dts files with external references, as overlays tend to be.
+The `-@` flag is required when compiling `dts` files with external references, which overlays typically contain.
-Edit xref:../computers/config_txt.adoc#what-is-config-txt[`/boot/firmware/config.txt`] and add the line:
+Add the following line to xref:../computers/config_txt.adoc#what-is-config-txt[`/boot/firmware/config.txt`]:
+[source,ini]
----
dtoverlay=example1
----
-Now save and reboot.
+Finally, reboot with `sudo reboot`.
-Once rebooted, you should see an rtc0 entry in /dev. Running:
+Once rebooted, you should see an `rtc0` entry in `/dev`. Run the following command to view the hardware clock time:
+[source,console]
----
-sudo hwclock
+$ sudo hwclock
----
-will return with the hardware clock time, and not an error.
+==== Attach an ENC28J60 SPI Ethernet controller on Bank 0
-[discrete]
-=== Example 2 - Attaching an ENC28J60 SPI Ethernet Controller on BANK0
+In this example, we use an overlay already defined in `/boot/firmware/overlays` to add an ENC28J60 SPI Ethernet controller to Bank 0. The Ethernet controller uses SPI pins CE0, MISO, MOSI and SCLK (GPIO8 through GPIO11 respectively) and GPIO25 for a falling-edge interrupt, in addition to GND and 3.3V.
-In this example we use one of the already available overlays in `/boot/firmware/overlays` to add an ENC28J60 SPI Ethernet controller to BANK0. The Ethernet controller is connected to SPI pins CE0, MISO, MOSI and SCLK (GPIO8-11 respectively), as well as GPIO25 for a falling edge interrupt, and of course GND and 3V3.
-
-In this example we won't change `dt-blob.bin`, although of course you can if you wish. We should see that Linux Device Tree correctly sets up the pins.
-
-Edit `/boot/firmware/config.txt` and add the following line:
+In this example, we won't change `dt-blob.bin`. Instead, add the following line to `/boot/firmware/config.txt`:
+[source,ini]
----
dtoverlay=enc28j60
----
-Now save and reboot.
-
-Once rebooted you should see, as before, an rtc0 entry in /dev. Running:
-
-----
-sudo hwclock
-----
-
-will return with the hardware clock time, and not an error.
+Reboot with `sudo reboot`.
-You should also have Ethernet connectivity:
+If you now run `ifconfig`, you should see an additional `eth` entry for the ENC28J60 NIC. You should also have Ethernet connectivity. Run the following command to test your connectivity:
+[source,console]
----
-ping 8.8.8.8
+$ ping 8.8.8.8
----
-should work.
-
-finally running:
+Run the following command to show GPIO functions; GPIO8-11 should now provide ALT0 (SPI) functions:
+[source,console]
----
-pinctrl
+$ pinctrl
----
-should show that GPIO8-11 have changed to ALT0 (SPI) functions.
-
diff --git a/documentation/asciidoc/computers/compute-module/cmio-camera.adoc b/documentation/asciidoc/computers/compute-module/cmio-camera.adoc
index 2e5352ec2b..a29dbbd82b 100644
--- a/documentation/asciidoc/computers/compute-module/cmio-camera.adoc
+++ b/documentation/asciidoc/computers/compute-module/cmio-camera.adoc
@@ -1,26 +1,25 @@
-== Attach a Raspberry Pi Camera Module
+== Attach a Camera Module
The Compute Module has two CSI-2 camera interfaces: CAM1 and CAM0. This section explains how to connect one or two Raspberry Pi Cameras to a Compute Module using the CAM1 and CAM0 interfaces with a Compute Module I/O Board.
-IMPORTANT: Camera modules are not hot-pluggable. *Always* power down your board before connecting or disconnecting a camera module.
-
=== Update your system
-Before configuring a camera, ensure your system runs the latest available software:
+Before configuring a camera, xref:../computers/raspberry-pi.adoc#update-the-bootloader-configuration[ensure that your Raspberry Pi firmware is up-to-date]:
+[source,console]
----
-sudo apt update
-sudo apt full-upgrade
+$ sudo apt update
+$ sudo apt full-upgrade
----
=== Connect one camera
To connect a single camera to a Compute Module, complete the following steps:
-. Power the Compute Module down.
+. Disconnect the Compute Module from power.
. Connect the Camera Module to the CAM1 port using a RPI-CAMERA board or a Raspberry Pi Zero camera cable.
+
-image::images/CMIO-Cam-Adapter.jpg[Connecting the adapter board]
+image::images/CMIO-Cam-Adapter.jpg[alt="Connecting the adapter board", width="60%"]
. _(CM1, CM3, CM3+, and CM4S only)_: Connect the following GPIO pins with jumper cables:
* `0` to `CD1_SDA`
@@ -28,31 +27,35 @@ image::images/CMIO-Cam-Adapter.jpg[Connecting the adapter board]
* `2` to `CAM1_I01`
* `3` to `CAM1_I00`
+
-image::images/CMIO-Cam-GPIO.jpg[GPIO connection for a single camera]
+image::images/CMIO-Cam-GPIO.jpg[alt="GPIO connection for a single camera", width="60%"]
+. Reconnect the Compute Module to power.
. Remove (or comment out with the prefix `#`) the following lines, if they exist, in `/boot/firmware/config.txt`:
+
+[source,ini]
----
camera_auto_detect=1
----
+
+[source,ini]
----
dtparam=i2c_arm=on
----
-NOTE: If your Compute Module includes onboard EMMC storage, you can boot, edit the boot configuration, then reboot to load the configuration changes.
. _(CM1, CM3, CM3+, and CM4S only)_: Add the following directive to `/boot/firmware/config.txt` to accommodate the swapped GPIO pin assignment on the I/O board:
+
+[source,ini]
----
dtoverlay=cm-swap-i2c0
----
. _(CM1, CM3, CM3+, and CM4S only)_: Add the following directive to `/boot/firmware/config.txt` to assign GPIO 3 as the CAM1 regulator:
+
+[source,ini]
----
dtparam=cam1_reg
-----
+----
. Add the appropriate directive to `/boot/firmware/config.txt` to manually configure the driver for your camera model:
+
@@ -61,28 +64,29 @@ dtparam=cam1_reg
| camera model
| directive
-| v1 camera
-| `dtoverlay=ov5647,cam1`
+| v1 camera
+| `dtoverlay=ov5647`
| v2 camera
-| `dtoverlay=imx219,cam1`
+| `dtoverlay=imx219`
| v3 camera
-| `dtoverlay=imx708,cam1`
+| `dtoverlay=imx708`
| HQ camera
-| `dtoverlay=imx477,cam1`
+| `dtoverlay=imx477`
| GS camera
-| `dtoverlay=imx296,cam1`
+| `dtoverlay=imx296`
|===
-. Power the Compute Module on.
+. Reboot your Compute Module with `sudo reboot`.
. Run the following command to check the list of detected cameras:
+
+[source,console]
----
-rpicam-hello --list
+$ rpicam-hello --list
----
You should see your camera model, referred to by the driver directive in the table above, in the output.
@@ -90,29 +94,31 @@ You should see your camera model, referred to by the driver directive in the tab
To connect two cameras to a Compute Module, complete the following steps:
-. Follow the single camera quickstart above.
-. Power the Compute Module down.
+. Follow the single camera instructions above.
+. Disconnect the Compute Module from power.
. Connect the Camera Module to the CAM0 port using a RPI-CAMERA board or a Raspberry Pi Zero camera cable.
+
-image::images/CMIO-Cam-Adapter.jpg[Connect the adapter board]
+image::images/CMIO-Cam-Adapter.jpg[alt="Connect the adapter board", width="60%"]
. _(CM1, CM3, CM3+, and CM4S only)_: Connect the following GPIO pins with jumper cables:
* `28` to `CD0_SDA`
* `29` to `CD0_SCL`
* `30` to `CAM0_I01`
* `31` to `CAM0_I00`
+
-image:images/CMIO-Cam-GPIO2.jpg[GPIO connection with additional camera]
+image:images/CMIO-Cam-GPIO2.jpg[alt="GPIO connection with additional camera", width="60%"]
-. _(CM4 only)_: Connect the J6 GPIO pins with two vertical-orientation jumpers.
+. _(CM4 and CM5 only)_: Connect the J6 GPIO pins with two vertical-orientation jumpers.
+
-image:images/j6_vertical.jpg[Connect the J6 GPIO pins in vertical orientation]
+image:images/j6_vertical.jpg[alt="Connect the J6 GPIO pins in vertical orientation", width="60%"]
+
+. Reconnect the Compute Module to power.
. _(CM1, CM3, CM3+, and CM4S only)_: Add the following directive to `/boot/firmware/config.txt` to assign GPIO 31 as the CAM0 regulator:
+
+[source,ini]
----
dtparam=cam0_reg
----
-NOTE: If your Compute Module includes onboard EMMC storage, you can boot, edit the boot configuration, then reboot to load the configuration changes.
. Add the appropriate directive to `/boot/firmware/config.txt` to manually configure the driver for your camera model:
+
@@ -121,7 +127,7 @@ NOTE: If your Compute Module includes onboard EMMC storage, you can boot, edit t
| camera model
| directive
-| v1 camera
+| v1 camera
| `dtoverlay=ov5647,cam0`
| v2 camera
@@ -137,17 +143,17 @@ NOTE: If your Compute Module includes onboard EMMC storage, you can boot, edit t
| `dtoverlay=imx296,cam0`
|===
-. Power the Compute Module on.
+. Reboot your Compute Module with `sudo reboot`.
. Run the following command to check the list of detected cameras:
+
+[source,console]
----
-rpicam-hello --list
+$ rpicam-hello --list
----
+
You should see both camera models, referred to by the driver directives in the table above, in the output.
-
=== Software
Raspberry Pi OS includes the `libcamera` library to help you take images with your Raspberry Pi.
@@ -156,8 +162,9 @@ Raspberry Pi OS includes the `libcamera` library to help you take images with yo
Use the following command to immediately take a picture and save it to a file in PNG encoding using the `MMDDhhmmss` date format as a filename:
+[source,console]
----
-rpicam-still --datetime -e png
+$ rpicam-still --datetime -e png
----
Use the `-t` option to add a delay in milliseconds.
@@ -165,10 +172,11 @@ Use the `--width` and `--height` options to specify a width and height for the i
==== Take a video
-Use the following command to immediately start recording a 10 second long video and save it to a file with the h264 codec named `video.h264`:
+Use the following command to immediately start recording a ten-second long video and save it to a file with the h264 codec named `video.h264`:
+[source,console]
----
-rpicam-vid -t 10000 -o video.h264
+$ rpicam-vid -t 10000 -o video.h264
----
==== Specify which camera to use
@@ -176,6 +184,7 @@ rpicam-vid -t 10000 -o video.h264
By default, `libcamera` always uses the camera with index `0` in the `--list-cameras` list.
To specify a camera option, get an index value for each camera from the following command:
+[source,console]
----
$ rpicam-hello --list-cameras
Available cameras
@@ -199,14 +208,16 @@ In the above output:
To use the HQ camera, pass its index (`0`) to the `--camera` `libcamera` option:
+[source,console]
----
-rpicam-hello --camera 0
+$ rpicam-hello --camera 0
----
To use the v3 camera, pass its index (`1`) to the `--camera` `libcamera` option:
+[source,console]
----
-rpicam-hello --camera 1
+$ rpicam-hello --camera 1
----
@@ -231,6 +242,7 @@ By default, the supplied camera drivers assume that CAM1 uses `i2c-10` and CAM0
To connect a camera to the CM1, CM3, CM3+ and CM4S I/O Board, add the following directive to `/boot/firmware/config.txt` to accommodate the swapped pin assignment:
+[source,ini]
----
dtoverlay=cm-swap-i2c0
----
@@ -265,9 +277,9 @@ Alternative boards may use other pin assignments. Check the documentation for yo
For camera shutdown, Device Tree uses the pins assigned by the `cam1_reg` and `cam0_reg` overlays.
-The CM4 IO Board provides a single GPIO pin for both aliases, so both cameras share the same regulator.
+The CM4 IO board provides a single GPIO pin for both aliases, so both cameras share the same regulator.
-The CM1, CM3, CM3+, and CM4S I/O Board provides no GPIO pin for `cam1_reg` and `cam0_reg`, so the regulators are disabled on those boards. However, you can enable them with the following directives in `/boot/firmware/config.txt`:
+The CM1, CM3, CM3+, and CM4S I/O boards provide no GPIO pin for `cam1_reg` and `cam0_reg`, so the regulators are disabled on those boards. However, you can enable them with the following directives in `/boot/firmware/config.txt`:
* `dtparam=cam1_reg`
* `dtparam=cam0_reg`
diff --git a/documentation/asciidoc/computers/compute-module/cmio-display.adoc b/documentation/asciidoc/computers/compute-module/cmio-display.adoc
index 421662eedd..747eb41bf2 100644
--- a/documentation/asciidoc/computers/compute-module/cmio-display.adoc
+++ b/documentation/asciidoc/computers/compute-module/cmio-display.adoc
@@ -1,125 +1,83 @@
-== Attaching the Official 7-inch Display
+== Attaching the Touch Display LCD panel
-NOTE: These instructions are intended for advanced users, if anything is unclear please use the https://forums.raspberrypi.com/viewforum.php?f=98[Raspberry Pi Compute Module forums] for technical help.
+Update your system software and firmware to the latest version before starting.
+Compute Modules mostly use the same process, but sometimes physical differences force changes for a particular model.
-Please ensure your system software is updated before starting. Largely speaking the approach taken for Compute Modules 1, 3, and 4 is the same, but there are minor differences in physical setup required. It will be indicated where a step applies only to a specific platform.
+=== Connect a display to DISP1/DSI1
-WARNING: The Raspberry Pi Zero camera cable cannot be used as an alternative to the RPI-DISPLAY adaptor, because its wiring is different.
+NOTE: The Raspberry Pi Zero camera cable cannot be used as an alternative to the RPI-DISPLAY adapter. The two cables have distinct wiring.
-WARNING: Please note that the display is *not* designed to be hot pluggable. It (and camera modules) should always be connected or disconnected with the power off.
+To connect a display to DISP1/DSI1:
-=== Quickstart Guide (Display Only)
-
-Connecting to DISP1
-
-. Connect the display to the DISP1 port on the Compute Module IO board through the 22W to 15W display adaptor.
-. (CM1 and CM3 only) Connect these pins together with jumper wires:
+. Disconnect the Compute Module from power.
+. Connect the display to the DISP1/DSI1 port on the Compute Module IO board through the 22W to 15W display adapter.
+. _(CM1, CM3, CM3+, and CM4S only)_: Connect the following GPIO pins with jumper cables:
+ * `0` to `CD1_SDA`
+ * `1` to `CD1_SCL`
+. _(CM5 only)_: On the Compute Module 5 IO board, add the appropriate jumpers to J6, as indicated on the silkscreen.
+. Reconnect the Compute Module to power.
+. Add the following line to xref:../computers/config_txt.adoc#what-is-config-txt[`/boot/firmware/config.txt`]:
+
+[source,ini]
----
- GPIO0 - CD1_SDA
- GPIO1 - CD1_SCL
+dtoverlay=vc4-kms-dsi-7inch
----
+. Reboot your Compute Module with `sudo reboot`. Your device should detect and begin displaying output to your display.
-. Power up the Compute Module and run:
-+
-`+sudo wget https://datasheets.raspberrypi.com/cmio/dt-blob-disp1-only.bin -O /boot/firmware/dt-blob.bin+`
+=== Connect a display to DISP0/DSI0
-. Reboot for the `dt-blob.bin` file to be read.
+To connect a display to DISP0/DSI0 on CM1, CM3 and CM4 IO boards:
+. Connect the display to the DISP0/DSI0 port on the Compute Module IO board through the 22W to 15W display adapter.
+. _(CM1, CM3, CM3+, and CM4S only)_: Connect the following GPIO pins with jumper cables:
+ * `28` to `CD0_SDA`
+ * `29` to `CD0_SCL`
-Connecting to DISP0
+. _(CM4 only)_: On the Compute Module 4 IO board, add the appropriate jumpers to J6, as indicated on the silkscreen.
-. Connect the display to the DISP0 port on the Compute Module IO board through the 22W to 15W display adaptor.
-. (CM1 and CM3 only) Connect these pins together with jumper wires:
+. Reconnect the Compute Module to power.
+. Add the following line to `/boot/firmware/config.txt`:
+
+[source,ini]
----
- GPIO28 - CD0_SDA
- GPIO29 - CD0_SCL
+dtoverlay=vc4-kms-dsi-7inch
----
+. Reboot your Compute Module with `sudo reboot`. Your device should detect and begin displaying output to your display.
-. Power up the Compute Module and run:
-+
-`+sudo wget https://datasheets.raspberrypi.com/cmio/dt-blob-disp0-only.bin -O /boot/firmware/dt-blob.bin+`
-
-. Reboot for the `dt-blob.bin` file to be read.
+=== Disable touchscreen
-=== Quickstart Guide (Display and Cameras)
+The touchscreen requires no additional configuration. Connect it to your Compute Module, and both the touchscreen element and display should work once successfully detected.
-==== To enable the display and one camera:*
+To disable the touchscreen element, but still use the display, add the following line to `/boot/firmware/config.txt`:
-. Connect the display to the DISP1 port on the Compute Module IO board through the 22W to 15W display adaptor, called RPI-DISPLAY.
-. Connect the Camera Module to the CAM1 port on the Compute Module IO board through the 22W to 15W adaptor called RPI-CAMERA. Alternatively, the Raspberry Pi Zero camera cable can be used.
-. (CM1 and CM3 only) Connect these pins together with jumper wires:
-+
+[source,ini]
----
- GPIO0 - CD1_SDA
- GPIO1 - CD1_SCL
- GPIO2 - CAM1_IO1
- GPIO3 - CAM1_IO0
+disable_touchscreen=1
----
-+
-image:images/CMIO-Cam-Disp-GPIO.jpg[GPIO connection for a single display and Camera Modules]
- (Please note this image needs to be updated to have the extra jumper leads removed and use the standard wiring (2&3 not 4&5))
-
-. Power up the Compute Module and run:
-+
-`+sudo wget https://datasheets.raspberrypi.com/cmio/dt-blob-disp1-cam1.bin -O /boot/firmware/dt-blob.bin+`
-. Reboot for the `dt-blob.bin` file to be read.
+=== Disable display
-==== To enable the display and both cameras:*
+To entirely ignore the display when connected, add the following line to `/boot/firmware/config.txt`:
-. Follow the steps for connecting the display and one camera above.
-. Connect the Camera Module to the CAM0 port on the Compute Module IO board through the 22W to 15W adaptor called RPI-CAMERA. Alternatively, the Raspberry Pi Zero camera cable can be used.
-. (CM1 and CM3 only) Add links:
-+
+[source,ini]
----
- GPIO28 - CD0_SDA
- GPIO29 - CD0_SCL
- GPIO30 - CAM0_IO1
- GPIO31 - CAM0_IO0
+ignore_lcd=1
----
-. (CM4 only) Add jumpers to J6.
-. Power up the Compute Module and run:
-+
-`+sudo wget https://datasheets.raspberrypi.com/cmio/dt-blob-disp1-cam2.bin -O /boot/firmware/dt-blob.bin+`
-
-. Reboot for the `dt-blob.bin` file to be read.
-+
-image:images/CMIO-Cam-Disp-Example.jpg[Camera Preview on the 7 inch display]
- (Please note this image needs to be updated to show two Camera Modules and the standard wiring)
-
-=== Software Support
-
-There is no additional configuration required to enable the touchscreen. The touch interface should work out of the box once the screen is successfully detected.
-
-If you wish to disable the touchscreen element and only use the display side, you can add the command `disable_touchscreen=1` to xref:../computers/config_txt.adoc#what-is-config-txt[`/boot/firmware/config.txt`] to do so.
-
-To make the firmware to ignore the display even if connected, then add `ignore_lcd=1` to `/boot/firmware/config.txt`.
+== Attaching the Touch Display 2 LCD panel
-=== Firmware Configuration
+Touch Display 2 is a 720×1280 7-inch LCD display designed specifically for Raspberry Pi devices (see the https://www.raspberrypi.com/products/touch-display-2/[product page]). It connects in the same way as the original Touch Display, but the software setup on Compute Modules differs slightly because it uses a different display driver. See xref:../accessories/touch-display-2.adoc[Touch Display 2] for connection details.
-The firmware looks at the dt-blob.bin file for the relevant configuration to use
-for the screen. It looks at the pin_number@ defines for
+Edit the `/boot/firmware/config.txt` file and add the following directive to enable Touch Display 2 on DISP1/DSI1. You will also need to add jumpers to J6, as indicated on the silkscreen.
+
+[source,ini]
----
-DISPLAY_I2C_PORT
-DISPLAY_SDA
-DISPLAY_SCL
-DISPLAY_DSI_PORT
+dtoverlay=vc4-kms-dsi-ili9881-7inch
----
-The I2C port, SDA and SCL pin numbers are self explanatory. DISPLAY_DSI_PORT
-selects between DSI1 (the default) and DSI0.
+To use DISP0/DSI0, use the following:
-Once all the required changes have been made to the `dts` file, it needs to be compiled and placed on the boot partition of the device.
-
-Instructions for doing this can be found on the xref:configuration.adoc#change-the-default-pin-configuration[Pin Configuration] page.
-
-==== Sources
-
-* https://datasheets.raspberrypi.com/cmio/dt-blob-disp1-only.dts[dt-blob-disp1-only.dts]
-* https://datasheets.raspberrypi.com/cmio/dt-blob-disp1-cam1.dts[dt-blob-disp1-cam1.dts]
-* https://datasheets.raspberrypi.com/cmio/dt-blob-disp1-cam2.dts[dt-blob-disp1-cam2.dts]
-* https://datasheets.raspberrypi.com/cmio/dt-blob-disp0-only.dts[dt-blob-disp0-only.dts] (Uses wiring as for CAM0)
+[source,ini]
+----
+dtoverlay=vc4-kms-dsi-ili9881-7inch,dsi0
+----
diff --git a/documentation/asciidoc/computers/compute-module/datasheet.adoc b/documentation/asciidoc/computers/compute-module/datasheet.adoc
index 250678ce4c..11d52ccb82 100644
--- a/documentation/asciidoc/computers/compute-module/datasheet.adoc
+++ b/documentation/asciidoc/computers/compute-module/datasheet.adoc
@@ -1,57 +1,84 @@
-== Datasheets and Schematics
+== Specifications
-=== Compute Module 4
+=== Compute Module 5 datasheet
-The latest version of the Compute Module is the Compute Module 4 (CM4). It is the recommended Compute Module for all current and future development.
+To learn more about Compute Module 5 (CM5) and its corresponding IO Board, see the following documents:
-* https://datasheets.raspberrypi.com/cm4/cm4-datasheet.pdf[Compute Module 4 Datasheet]
-* https://datasheets.raspberrypi.com/cm4io/cm4io-datasheet.pdf[Compute Module 4 IO Board Datasheet]
+* https://datasheets.raspberrypi.com/cm5/cm5-datasheet.pdf[CM5 datasheet]
+* https://rpltd.co/cm5-design-files[CM5 design files]
-NOTE: Schematics are not available for the Compute Module 4, but are available for the IO board. Schematics for the CMIO4 board are included in the datasheet.
+=== Compute Module 5 IO Board datasheet
-There is also a KiCad PCB design set available:
+Design data for the Compute Module 5 IO Board (CM5IO) can be found in its datasheet:
-* https://datasheets.raspberrypi.com/cm4io/CM4IO-KiCAD.zip[Compute Module 4 IO Board KiCad files]
+* https://datasheets.raspberrypi.com/cm5/cm5io-datasheet.pdf[CM5IO datasheet]
+* https://rpltd.co/cm5io-design-files[CM5IO design files]
-[.whitepaper, title="Configuring the Compute Module 4", subtitle="", link=https://pip.raspberrypi.com/categories/685-whitepapers-app-notes/documents/RP-003470-WP/Configuring-the-Compute-Module-4.pdf]
+=== Compute Module 4 datasheet
+
+To learn more about Compute Module 4 (CM4) and its corresponding IO Board, see the following documents:
+
+* https://datasheets.raspberrypi.com/cm4/cm4-datasheet.pdf[CM4 datasheet]
+
+[.whitepaper, title="Configure the Compute Module 4", subtitle="", link=https://pip.raspberrypi.com/categories/685-whitepapers-app-notes/documents/RP-003470-WP/Configuring-the-Compute-Module-4.pdf]
****
-The Raspberry Pi Compute Module 4 (CM 4) is available in a number of different hardware configurations. Sometimes it may be necessary to disable some of these features when they are not required.
+The Compute Module 4 is available in a number of different hardware configurations. Some use cases call for disabling features that aren't required.
-This document describes how to disable various hardware interfaces, in both hardware and software, and how to reduce the amount of memory used by the Linux operating system (OS).
+This document describes how to disable various hardware and software interfaces.
****
-=== Older Products
+=== Compute Module 4 IO Board datasheet
-Raspberry Pi CM1, CM3 and CM3L are supported products with an End-of-Life (EOL) date no earlier than January 2026. The Compute Module 3+ offers improved thermal performance, and a wider range of Flash memory options.
+Design data for the Compute Module 4 IO Board (CM4IO) can be found in its datasheet:
-* https://datasheets.raspberrypi.com/cm/cm1-and-cm3-datasheet.pdf[Compute Module 1 and Compute Module 3]
+* https://datasheets.raspberrypi.com/cm4io/cm4io-datasheet.pdf[CM4IO datasheet]
-Raspberry Pi CM3+ and CM3+ Lite are supported prodicts with an End-of-Life (EOL) date no earlier than January 2026.
+We also provide a KiCad PCB design set for the CM4 IO Board:
-* https://datasheets.raspberrypi.com/cm/cm3-plus-datasheet.pdf[Compute Module 3+]
+* https://datasheets.raspberrypi.com/cm4io/CM4IO-KiCAD.zip[CM4IO KiCad files]
-Schematics for the Compute Module 1, 3 and 3L
+=== Compute Module 4S datasheet
-* https://datasheets.raspberrypi.com/cm/cm1-schematics.pdf[CM1 Rev 1.1]
-* https://datasheets.raspberrypi.com/cm/cm3-schematics.pdf[CM3 and CM3L Rev 1.0]
+Compute Module 4S (CM4S) offers the internals of CM4 in the DDR2-SODIMM form factor of CM1, CM3, and CM3+. To learn more about CM4S, see the following documents:
-Schematics for the Compute Module IO board (CMIO):
+* https://datasheets.raspberrypi.com/cm4s/cm4s-datasheet.pdf[CM4S datasheet]
-* https://datasheets.raspberrypi.com/cmio/cmio-schematics.pdf[CMIO Rev 3.0] (Supports CM1, CM3, CM3L, CM3+ and CM3+L)
+=== Compute Module 3+ datasheet
-Schematics for the Compute Module camera/display adapter board (CMCDA):
+Compute Module 3+ (CM3+) is a supported product with an end-of-life (EOL) date no earlier than January 2028. To learn more about CM3+ and its corresponding IO Board, see the following documents:
-* https://datasheets.raspberrypi.com/cmcda/cmcda-schematics.pdf[CMCDA Rev 1.1]
+* https://datasheets.raspberrypi.com/cm/cm3-plus-datasheet.pdf[CM3+ datasheet]
-[.whitepaper, title="Transitioning from CM3 to CM4", subtitle="", link=https://pip.raspberrypi.com/categories/685-whitepapers-app-notes/documents/RP-003469-WP/Transitioning-from-CM3-to-CM4.pdf]
-****
-This whitepaper is for those who wish to move from using a Raspberry Pi Compute Module (CM) 1 or 3 to a Raspberry Pi CM 4.
+=== Compute Module 1 and Compute Module 3 datasheet
-From a software perspective, the move from Raspberry Pi CM 1/3 to Raspberry Pi CM 4 is relatively painless, as Raspberry Pi OS should work on all platforms.
+Raspberry Pi Compute Module 1 (CM1) and Compute Module 3 (CM3) are supported products with an end-of-life (EOL) date no earlier than January 2026. To learn more about CM1 and CM3, see the following documents:
+
+* https://datasheets.raspberrypi.com/cm/cm1-and-cm3-datasheet.pdf[CM1 and CM3 datasheet]
+* https://datasheets.raspberrypi.com/cm/cm1-schematics.pdf[Schematics for CM1]
+* https://datasheets.raspberrypi.com/cm/cm3-schematics.pdf[Schematics for CM3]
+
+[.whitepaper, title="Transition from Compute Module 1 or Compute Module 3 to Compute Module 4", subtitle="", link=https://pip.raspberrypi.com/categories/685-whitepapers-app-notes/documents/RP-003469-WP/Transitioning-from-CM3-to-CM4.pdf]
+****
+This white paper helps developers migrate from Compute Module 1 or Compute Module 3 to Compute Module 4.
****
-==== Under Voltage Detection
+=== Compute Module IO Board schematics
+
+The Compute Module IO Board (CMIO) provides a variety of interfaces for CM1, CM3, CM3+, and CM4S. The Compute Module IO Board comes in two variants: Version 1 and Version 3. Version 1 is only compatible with CM1. Version 3 is compatible with CM1, CM3, CM3+, and CM4S. Compute Module IO Board Version 3 is sometimes written as the shorthand CMIO3. To learn more about CMIO1 and CMIO3, see the following documents:
+
+* https://datasheets.raspberrypi.com/cmio/cmio-schematics.pdf[Schematics for CMIO]
+* https://datasheets.raspberrypi.com/cmio/RPi-CMIO-R1P2.zip[Design documents for CMIO Version 1.2 (CMIO/CMIO1)]
+* https://datasheets.raspberrypi.com/cmio/RPi-CMIO-R3P0.zip[Design documents for CMIO Version 3.0 (CMIO3)]
+
+=== Compute Module Camera/Display Adapter Board schematics
+
+The Compute Module Camera/Display Adapter Board (CMCDA) provides camera and display interfaces for Compute Modules. To learn more about the CMCDA, see the following documents:
+
+* https://datasheets.raspberrypi.com/cmcda/cmcda-schematics.pdf[Schematics for the CMCDA]
+* https://datasheets.raspberrypi.com/cmcda/RPi-CMCDA-1P1.zip[Design documents for CMCDA Version 1.1]
+
+=== Under-voltage detection
-Schematic for an under-voltage detection circuit, as used in older models of Raspberry Pi:
+The following schematic describes an under-voltage detection circuit, as used in older models of Raspberry Pi:
image::images/under_voltage_detect.png[Under-voltage detect]
diff --git a/documentation/asciidoc/computers/compute-module/designfiles.adoc b/documentation/asciidoc/computers/compute-module/designfiles.adoc
deleted file mode 100644
index c70485415c..0000000000
--- a/documentation/asciidoc/computers/compute-module/designfiles.adoc
+++ /dev/null
@@ -1,22 +0,0 @@
-== Design Files for CMIO Boards
-
-[discrete]
-=== Compute Module IO board for CM4
-
-Design data for the Compute Module 4 IO board can be found in its datasheet:
-
-* https://datasheets.raspberrypi.com/cm4io/cm4io-datasheet.pdf[Compute Module 4 IO Board datasheet]
-
-There is also a KiCad PCB design set available:
-
-* https://datasheets.raspberrypi.com/cm4io/CM4IO-KiCAD.zip[Compute Module 4 IO Board KiCad files]
-
-[discrete]
-=== Older Products
-
-* https://datasheets.raspberrypi.com/cmio/RPi-CMIO-R1P2.zip[CMIO Rev 1.2]
-* https://datasheets.raspberrypi.com/cmio/RPi-CMIO-R3P0.zip[CMIO Rev 3.0]
-
-Design data for the Compute Module camera/display adapter board (CMCDA):
-
-* https://datasheets.raspberrypi.com/cmcda/RPi-CMCDA-1P1.zip[CMCDA Rev 1.1]
diff --git a/documentation/asciidoc/computers/compute-module/images/CMIO-Cam-Disp-Example.jpg b/documentation/asciidoc/computers/compute-module/images/CMIO-Cam-Disp-Example.jpg
deleted file mode 100644
index c7c8a60c2c..0000000000
Binary files a/documentation/asciidoc/computers/compute-module/images/CMIO-Cam-Disp-Example.jpg and /dev/null differ
diff --git a/documentation/asciidoc/computers/compute-module/images/CMIO-Cam-Disp-GPIO.jpg b/documentation/asciidoc/computers/compute-module/images/CMIO-Cam-Disp-GPIO.jpg
deleted file mode 100644
index e5cbdd81f9..0000000000
Binary files a/documentation/asciidoc/computers/compute-module/images/CMIO-Cam-Disp-GPIO.jpg and /dev/null differ
diff --git a/documentation/asciidoc/computers/compute-module/images/cm1.jpg b/documentation/asciidoc/computers/compute-module/images/cm1.jpg
new file mode 100644
index 0000000000..caa01fec3a
Binary files /dev/null and b/documentation/asciidoc/computers/compute-module/images/cm1.jpg differ
diff --git a/documentation/asciidoc/computers/compute-module/images/cm3-plus.jpg b/documentation/asciidoc/computers/compute-module/images/cm3-plus.jpg
new file mode 100644
index 0000000000..dc266211b8
Binary files /dev/null and b/documentation/asciidoc/computers/compute-module/images/cm3-plus.jpg differ
diff --git a/documentation/asciidoc/computers/compute-module/images/cm3.jpg b/documentation/asciidoc/computers/compute-module/images/cm3.jpg
new file mode 100644
index 0000000000..c82500604a
Binary files /dev/null and b/documentation/asciidoc/computers/compute-module/images/cm3.jpg differ
diff --git a/documentation/asciidoc/computers/compute-module/images/cm4-cm5-antenna-assembly.svg b/documentation/asciidoc/computers/compute-module/images/cm4-cm5-antenna-assembly.svg
new file mode 100644
index 0000000000..596cda0127
--- /dev/null
+++ b/documentation/asciidoc/computers/compute-module/images/cm4-cm5-antenna-assembly.svg
@@ -0,0 +1,297 @@
+(SVG markup not reproduced: antenna assembly diagram with numbered callouts 1-5.)
diff --git a/documentation/asciidoc/computers/compute-module/images/cm4-cm5-antenna-physical.png b/documentation/asciidoc/computers/compute-module/images/cm4-cm5-antenna-physical.png
new file mode 100644
index 0000000000..7fcd0da44e
Binary files /dev/null and b/documentation/asciidoc/computers/compute-module/images/cm4-cm5-antenna-physical.png differ
diff --git a/documentation/asciidoc/computers/compute-module/images/cm4-cm5-antenna-physical.svg b/documentation/asciidoc/computers/compute-module/images/cm4-cm5-antenna-physical.svg
new file mode 100644
index 0000000000..232dc6e76b
--- /dev/null
+++ b/documentation/asciidoc/computers/compute-module/images/cm4-cm5-antenna-physical.svg
@@ -0,0 +1,4711 @@
+(SVG markup not reproduced: antenna mechanical drawing with dimension callouts 205 ± 1, 87.5 ± 1, 6.25, 2.0, S=8, milling unilateral 5.85 ± 0.02, and 1/4-36UNS-2A/2B thread. Note: all dimensions in mm, approximate, for reference purposes only, and subject to part and manufacturing tolerances.)
diff --git a/documentation/asciidoc/computers/compute-module/images/cm4-cm5-antenna.jpg b/documentation/asciidoc/computers/compute-module/images/cm4-cm5-antenna.jpg
new file mode 100644
index 0000000000..2dd3fbcd74
Binary files /dev/null and b/documentation/asciidoc/computers/compute-module/images/cm4-cm5-antenna.jpg differ
diff --git a/documentation/asciidoc/computers/compute-module/images/cm4.jpg b/documentation/asciidoc/computers/compute-module/images/cm4.jpg
new file mode 100644
index 0000000000..a60f5b73bf
Binary files /dev/null and b/documentation/asciidoc/computers/compute-module/images/cm4.jpg differ
diff --git a/documentation/asciidoc/computers/compute-module/images/cm4io.jpg b/documentation/asciidoc/computers/compute-module/images/cm4io.jpg
new file mode 100644
index 0000000000..fe4ccab2bc
Binary files /dev/null and b/documentation/asciidoc/computers/compute-module/images/cm4io.jpg differ
diff --git a/documentation/asciidoc/computers/compute-module/images/cm4s.jpg b/documentation/asciidoc/computers/compute-module/images/cm4s.jpg
new file mode 100644
index 0000000000..7119617d8e
Binary files /dev/null and b/documentation/asciidoc/computers/compute-module/images/cm4s.jpg differ
diff --git a/documentation/asciidoc/computers/compute-module/images/cm5-case-physical.png b/documentation/asciidoc/computers/compute-module/images/cm5-case-physical.png
new file mode 100644
index 0000000000..05323596a7
Binary files /dev/null and b/documentation/asciidoc/computers/compute-module/images/cm5-case-physical.png differ
diff --git a/documentation/asciidoc/computers/compute-module/images/cm5-case-physical.svg b/documentation/asciidoc/computers/compute-module/images/cm5-case-physical.svg
new file mode 100644
index 0000000000..4ddf6308f6
--- /dev/null
+++ b/documentation/asciidoc/computers/compute-module/images/cm5-case-physical.svg
@@ -0,0 +1,12074 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ SSD
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Power In
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+STATUS
+Power
+HDMI0
+HDMI1
+94
+170
+28
+Note: All dimensions in mm. All dimensions are approximate and for reference purposes only. The dimensions shown should not be used for producing production data. The dimensions are subject to part and manufacturing tolerances. Dimensions may be subject to change.
+
diff --git a/documentation/asciidoc/computers/compute-module/images/cm5-cooler-physical.png b/documentation/asciidoc/computers/compute-module/images/cm5-cooler-physical.png
new file mode 100644
index 0000000000..5214101780
Binary files /dev/null and b/documentation/asciidoc/computers/compute-module/images/cm5-cooler-physical.png differ
diff --git a/documentation/asciidoc/computers/compute-module/images/cm5-cooler-physical.svg b/documentation/asciidoc/computers/compute-module/images/cm5-cooler-physical.svg
new file mode 100644
index 0000000000..5abb017d82
--- /dev/null
+++ b/documentation/asciidoc/computers/compute-module/images/cm5-cooler-physical.svg
@@ -0,0 +1,9616 @@
+41
+56
+33
+4 × M2.5
+10
+2.7
+48
+Note: All dimensions in mm. All dimensions are approximate and for reference purposes only. The dimensions shown should not be used for producing production data. The dimensions are subject to part and manufacturing tolerances. Dimensions may be subject to change.
+
diff --git a/documentation/asciidoc/computers/compute-module/images/cm5-cooler.jpg b/documentation/asciidoc/computers/compute-module/images/cm5-cooler.jpg
new file mode 100644
index 0000000000..d4781a5cd4
Binary files /dev/null and b/documentation/asciidoc/computers/compute-module/images/cm5-cooler.jpg differ
diff --git a/documentation/asciidoc/computers/compute-module/images/cm5.png b/documentation/asciidoc/computers/compute-module/images/cm5.png
new file mode 100644
index 0000000000..0431e3e2d1
Binary files /dev/null and b/documentation/asciidoc/computers/compute-module/images/cm5.png differ
diff --git a/documentation/asciidoc/computers/compute-module/images/cm5io-case-front.png b/documentation/asciidoc/computers/compute-module/images/cm5io-case-front.png
new file mode 100644
index 0000000000..055875438a
Binary files /dev/null and b/documentation/asciidoc/computers/compute-module/images/cm5io-case-front.png differ
diff --git a/documentation/asciidoc/computers/compute-module/images/cm5io-case.png b/documentation/asciidoc/computers/compute-module/images/cm5io-case.png
new file mode 100644
index 0000000000..074e802b66
Binary files /dev/null and b/documentation/asciidoc/computers/compute-module/images/cm5io-case.png differ
diff --git a/documentation/asciidoc/computers/compute-module/images/cm5io.png b/documentation/asciidoc/computers/compute-module/images/cm5io.png
new file mode 100644
index 0000000000..382ae0b2c0
Binary files /dev/null and b/documentation/asciidoc/computers/compute-module/images/cm5io.png differ
diff --git a/documentation/asciidoc/computers/compute-module/images/cmio.jpg b/documentation/asciidoc/computers/compute-module/images/cmio.jpg
new file mode 100644
index 0000000000..347f27f286
Binary files /dev/null and b/documentation/asciidoc/computers/compute-module/images/cmio.jpg differ
diff --git a/documentation/asciidoc/computers/compute-module/introduction.adoc b/documentation/asciidoc/computers/compute-module/introduction.adoc
new file mode 100644
index 0000000000..aa74d7bd58
--- /dev/null
+++ b/documentation/asciidoc/computers/compute-module/introduction.adoc
@@ -0,0 +1,232 @@
+== Compute Modules
+
+Raspberry Pi Compute Modules are **system-on-module** variants of the flagship Raspberry Pi models. Compute Modules are especially popular for industrial and commercial applications, including digital signage, thin clients, and process automation. Some of these applications use the flagship Raspberry Pi design, but many users want a more compact design or on-board eMMC storage.
+
+Compute Modules come in multiple variants, varying both in memory and soldered-on embedded Multi-Media Card (eMMC) flash storage capacity. Like SD cards, eMMC provides persistent storage with minimal energy impact. Unlike SD cards, eMMC is specifically designed to be used as a disk and includes extra features to improve reliability. **Lite** models have no on-board storage, and are sometimes referred to with the shorthand suffix **L**, e.g. "CM3L".
+
+Compute Modules use the following Raspberry Pi SoCs:
+
+* BCM2835 for CM1
+* BCM2837 for CM3, CM3+
+* BCM2711 for CM4, CM4S
+* BCM2712 for CM5
+
+=== Compute Module 5
+
+.Compute Module 5
+image::images/cm5.png[alt="Compute Module 5", width="60%"]
+
+The Compute Module 5 (CM5) combines the internals of a Raspberry Pi 5 (the BCM2712 processor and 2GB, 4GB, 8GB, or 16GB of RAM) with optional 0GB (Lite), 16GB, 32GB or 64GB of eMMC flash storage.
+
+CM5 uses the same form factor as CM4, featuring two 100-pin high density connectors.
+
+=== Compute Module 4
+
+.Compute Module 4
+image::images/cm4.jpg[alt="Compute Module 4", width="60%"]
+
+The Compute Module 4 (CM4) combines the internals of a Raspberry Pi 4 (the BCM2711 processor and 1GB, 2GB, 4GB, or 8GB of RAM) with an optional 0GB (Lite), 8GB, 16GB or 32GB of eMMC flash storage.
+
+Unlike CM1, CM3, and CM3+, CM4 does not use the DDR2 SO-DIMM form factor. Instead, CM4 uses two 100-pin high density connectors in a smaller physical footprint. This change helped add the following interfaces:
+
+* an additional second HDMI port
+* PCIe
+* Ethernet
+
+The previous form factor could not have supported these interfaces.
+
+=== Compute Module 4S
+
+.Compute Module 4S
+image::images/cm4s.jpg[alt="Compute Module 4S", width="60%"]
+
+The Compute Module 4S (CM4S) combines the internals of a Raspberry Pi 4 (the BCM2711 processor and 1GB, 2GB, 4GB, or 8GB of RAM) with an optional 0GB (Lite), 8GB, 16GB or 32GB of eMMC flash storage. Unlike CM4, CM4S comes in the same DDR2 SO-DIMM form factor as CM1, CM3, and CM3+.
+
+[[compute-module-3-plus]]
+=== Compute Module 3+
+
+.Compute Module 3+
+image::images/cm3-plus.jpg[alt="Compute Module 3+", width="60%"]
+
+The Compute Module 3+ (CM3+) combines the internals of a Raspberry Pi 3 Model B+ (the BCM2837 processor and 1GB of RAM) with an optional 0GB (Lite), 8GB, 16GB or 32GB of eMMC flash storage. CM3+ comes in the DDR2 SO-DIMM form factor.
+
+=== Compute Module 3
+
+.Compute Module 3
+image::images/cm3.jpg[alt="Compute Module 3", width="60%"]
+
+The Compute Module 3 (CM3) combines the internals of a Raspberry Pi 3 (the BCM2837 processor and 1GB of RAM) with an optional 4GB of eMMC flash storage. CM3 comes in the DDR2 SO-DIMM form factor.
+
+=== Compute Module 1
+
+.Compute Module 1
+image::images/cm1.jpg[alt="Compute Module 1", width="60%"]
+
+The Compute Module 1 (CM1) contains the internals of a Raspberry Pi (the BCM2835 processor and 512MB of RAM) as well as an optional 4GB of eMMC flash storage. CM1 comes in the DDR2 SO-DIMM form factor.
+
+== IO Boards
+
+Raspberry Pi IO Boards provide a way to connect a single Compute Module to a variety of I/O (input/output) interfaces. Because Compute Modules are small and lack on-board ports and connectors, IO Boards make it practical to attach peripherals during development.
+
+Raspberry Pi IO Boards provide the following functionality:
+
+* power the module
+* connect the GPIO to pin headers
+* connect the camera and display interfaces to FFC connectors
+* connect HDMI to HDMI ports
+* connect USB to USB ports
+* connect activity monitoring to LEDs
+* support eMMC programming over USB
+* connect PCIe to connectors used to physically attach storage or peripherals
+
+IO Boards are breakout boards intended for development or personal use; in production, you should use a smaller, potentially custom board that provides only the ports and peripherals required for your use-case.
+
+=== Compute Module 5 IO Board
+
+.Compute Module 5 IO Board
+image::images/cm5io.png[alt="Compute Module 5 IO Board", width="60%"]
+
+Compute Module 5 IO Board provides the following interfaces:
+
+* HAT footprint with 40-pin GPIO connector
+* PoE header
+* 2× HDMI ports
+* 2× USB 3.0 ports
+* Gigabit Ethernet RJ45 with PoE support
+* M.2 M key PCIe socket compatible with the 2230, 2242, 2260, and 2280 form factors
+* microSD card slot (only for use with Lite variants with no eMMC; other variants ignore the slot)
+* 2× MIPI DSI/CSI-2 combined display/camera FPC connectors (22-pin 0.5 mm pitch cable)
+* Real-time clock with battery socket
+* four-pin JST-SH PWM fan connector
+* USB-C power using the same standard as Raspberry Pi 5 (5V, 5A (25W) or 5V, 3A (15W) with a 600mA peripheral limit)
+* Jumpers to disable features such as eMMC boot, EEPROM write, and the USB OTG connection
+
+=== Compute Module 4 IO Board
+
+.Compute Module 4 IO Board
+image::images/cm4io.jpg[alt="Compute Module 4 IO Board", width="60%"]
+
+Compute Module 4 IO Board provides the following interfaces:
+
+* HAT footprint with 40-pin GPIO connector and PoE header
+* 2× HDMI ports
+* 2× USB 2.0 ports
+* Gigabit Ethernet RJ45 with PoE support
+* microSD card slot (only for use with Lite variants with no eMMC; other variants ignore the slot)
+* PCIe Gen 2 socket
+* micro USB upstream port
+* 2× MIPI DSI display FPC connectors (22-pin 0.5 mm pitch cable)
+* 2× MIPI CSI-2 camera FPC connectors (22-pin 0.5 mm pitch cable)
+* Real-time clock with battery socket
+* 12V input via barrel jack (supports up to 26V if PCIe unused)
+
+=== Compute Module IO Board
+
+.Compute Module IO Board
+image::images/cmio.jpg[alt="Compute Module IO Board", width="60%"]
+
+Compute Module IO Board provides the following interfaces:
+
+* 120 GPIO pins
+* HDMI port
+* USB-A port
+* 2× MIPI DSI display FPC connectors (22-pin 0.5 mm pitch cable)
+* 2× MIPI CSI-2 camera FPC connectors (22-pin 0.5 mm pitch cable)
+
+The Compute Module IO Board comes in two variants: Version 1 and Version 3. Version 1 is only compatible with CM1. Version 3 is compatible with CM1, CM3, CM3+, and CM4S. Compute Module IO Board Version 3 is sometimes written as the shorthand CMIO3.
+
+Compute Module IO Board Version 3 added a microSD card slot that did not exist in Compute Module IO Board Version 1.
+
+=== IO Board compatibility
+
+Not all Compute Module IO Boards work with all Compute Module models. The following table shows which Compute Modules work with each IO Board:
+
+[cols="1,1"]
+|===
+| IO Board | Compatible Compute Modules
+
+| Compute Module IO Board Version 1 (CMIO)/(CMIO1)
+a|
+* CM1
+| Compute Module IO Board Version 3 (CMIO)/(CMIO3)
+a|
+* CM1
+* CM3
+* CM3+
+* CM4S
+| Compute Module 4 IO Board (CM4IO)
+a|
+* CM4
+* CM5 (with reduced functionality)
+| Compute Module 5 IO Board (CM5IO)
+a|
+* CM5
+* CM4 (with reduced functionality)
+|===
+
+== CM5 Accessories
+
+=== IO Case
+
+The world can be a dangerous place. The Compute Module 5 IO Board Case provides physical protection for a CM5IO Board.
+
+.Compute Module 5 IO Board Case
+image::images/cm5io-case.png[alt="Compute Module 5 IO Board Case", width="60%"]
+
+The Case provides cut-outs for all externally-facing ports and LEDs on the CM5IO Board, and an attachment point for a Raspberry Pi Antenna Kit.
+
+.Compute Module 5 IO Board Case ports
+image::images/cm5io-case-front.png[alt="the port selection on the Compute Module 5 IO Board Case", width="60%"]
+
+To mount a CM5IO Board within your Case, position your Board in the bottom section of the case, aligning the four mounting points inset slightly from each corner of the Board. Fasten four screws into the mounting points. Take care not to over-tighten the screws.
+
+To use the Case fan, connect the fan cable to the FAN (J14) port on the Board.
+
+To close the case, put the top section of the case on top of the bottom section of the case. Facing the front of the case, which has port pass-throughs, carefully align the screw holes on the left and right side of the case and the power button on the back of the case. Tighten four screws into the screw holes. Take care not to over-tighten the screws.
+
+TIP: The Case comes with a fan pre-installed. To close the case with the passive Cooler attached to your Compute Module, remove the fan. To remove the fan, remove the four screws positioned in the corners of the fan from the bottom of the top case.
+
+.CM5 Case physical specification
+image::images/cm5-case-physical.png[alt="CM5 Case physical specification", width="80%"]
+
+=== Antenna
+
+The Raspberry Pi Antenna Kit provides a certified external antenna to boost wireless reception on a CM4 or CM5.
+
+.CM4 and CM5 Antenna
+image::images/cm4-cm5-antenna.jpg[alt="The Antenna, connected to CM4", width="60%"]
+
+To attach the Antenna to your Compute Module and Case, complete the following steps:
+
+. Connect the https://en.wikipedia.org/wiki/Hirose_U.FL[U.FL connector] on the cable to the U.FL-compatible connector on your Compute Module.
+. Secure the toothed washer onto the male SMA connector at the end of the cable, then insert the SMA connector, with the antenna facing outward, through the hole in the Case.
+. Fasten the SMA connector into place with the retaining hexagonal nut and washer.
+. Tighten the female SMA connector on the Antenna onto the male SMA connector.
+. Adjust the Antenna to its final position by turning it up to 90°.
+
+.CM4 and CM5 Antenna assembly diagram
+image::images/cm4-cm5-antenna-assembly.svg[alt="CM4 and CM5 antenna assembly diagram", width="60%"]
+
+To **use** the Antenna with your Compute Module, add a `dtparam` setting in xref:../computers/config_txt.adoc[`/boot/firmware/config.txt`]. Add the following line to the end of `config.txt`:
+
+[source,ini]
+----
+dtparam=ant2
+----
+
+.CM4 and CM5 Antenna physical specification
+image::images/cm4-cm5-antenna-physical.png[alt="CM4 and CM5 antenna physical specification", width="80%"]
+
+=== Cooler
+
+The CM5 Cooler helps dissipate heat from your CM5, improving CPU performance and longevity.
+
+.CM5 Cooler
+image::images/cm5-cooler.jpg[alt="CM5 Cooler", width="60%"]
+
+To mount the Cooler to your CM5, attach the thermally conductive silicone at the bottom of the Cooler to the top of your CM5. Align the cut-out in the heatsink with the antenna https://en.wikipedia.org/wiki/Hirose_U.FL[U.FL connector]. Optionally, fasten screws in the mounting points found in each corner to secure the Cooler. If you omit the screws, the thermally conductive silicone alone holds the Cooler in place.
+
+.CM5 Cooler physical specification
+image::images/cm5-cooler-physical.png[alt="CM5 Cooler physical specification", width="80%"]
+
+NOTE: The CM5 Cooler is only compatible with the CM5IO Case if you remove the fan from the case.
diff --git a/documentation/asciidoc/computers/config_txt.adoc b/documentation/asciidoc/computers/config_txt.adoc
index 6d99af772f..500831113e 100644
--- a/documentation/asciidoc/computers/config_txt.adoc
+++ b/documentation/asciidoc/computers/config_txt.adoc
@@ -20,7 +20,5 @@ include::config_txt/codeclicence.adoc[]
include::config_txt/video.adoc[]
-include::config_txt/pi4-hdmi.adoc[]
-
include::config_txt/camera.adoc[]
diff --git a/documentation/asciidoc/computers/config_txt/audio.adoc b/documentation/asciidoc/computers/config_txt/audio.adoc
index 31f361306d..7ba0b541de 100644
--- a/documentation/asciidoc/computers/config_txt/audio.adoc
+++ b/documentation/asciidoc/computers/config_txt/audio.adoc
@@ -1,4 +1,4 @@
-== Onboard Analogue Audio (3.5mm Jack)
+== Onboard analogue audio (3.5mm jack)
The onboard audio output uses config options to change the way the analogue audio is driven, and whether some firmware features are enabled or not.
@@ -8,11 +8,11 @@ The onboard audio output uses config options to change the way the analogue audi
`audio_pwm_mode=2` (the default) selects high quality analogue audio using an advanced modulation scheme.
-NOTE: This option uses more GPU compute resources and can interfere with some use cases.
+NOTE: This option uses more GPU compute resources and can interfere with some use cases on some models.
=== `disable_audio_dither`
-By default, a 1.0LSB dither is applied to the audio stream if it is routed to the analogue audio output. This can create audible background "hiss" in some situations, for example when the ALSA volume is set to a low level. Set `disable_audio_dither` to `1` to disable dither application.
+By default, a 1.0LSB dither is applied to the audio stream if it is routed to the analogue audio output. This can create audible background hiss in some situations, for example when the ALSA volume is set to a low level. Set `disable_audio_dither` to `1` to disable dither application.
=== `enable_audio_dither`
@@ -21,3 +21,15 @@ Audio dither (see disable_audio_dither above) is normally disabled when the audi
=== `pwm_sample_bits`
The `pwm_sample_bits` command adjusts the bit depth of the analogue audio output. The default bit depth is `11`. Selecting bit depths below `8` will result in nonfunctional audio, as settings below `8` result in a PLL frequency too low to support. This is generally only useful as a demonstration of how bit depth affects quantisation noise.
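+
+For example, to demonstrate the increased quantisation noise at a reduced bit depth, set an illustrative value such as:
+
+[source,ini]
+----
+# Reduce the analogue audio bit depth from the default of 11
+# (values below 8 result in nonfunctional audio)
+pwm_sample_bits=8
+----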
+
+== HDMI audio
+
+By default, HDMI audio output is enabled on all Raspberry Pi models with HDMI output.
+
+To disable HDMI audio output, append `,noaudio` to the end of the `dtoverlay=vc4-kms-v3d` line in xref:../computers/config_txt.adoc#what-is-config-txt[`/boot/firmware/config.txt`]:
+
+[source,ini]
+----
+dtoverlay=vc4-kms-v3d,noaudio
+----
+
diff --git a/documentation/asciidoc/computers/config_txt/autoboot.adoc b/documentation/asciidoc/computers/config_txt/autoboot.adoc
index f8f40632e0..fa37c855e4 100644
--- a/documentation/asciidoc/computers/config_txt/autoboot.adoc
+++ b/documentation/asciidoc/computers/config_txt/autoboot.adoc
@@ -17,20 +17,24 @@ Bootable partitions must be formatted as FAT12, FAT16 or FAT32 and contain a `st
=== The `[tryboot]` filter
This filter passes if the system was booted with the `tryboot` flag set.
+
+[source,console]
----
-sudo reboot "0 tryboot"
+$ sudo reboot "0 tryboot"
----
=== `tryboot_a_b`
Set this property to `1` to load the normal `config.txt` and `boot.img` files instead of `tryboot.txt` and `tryboot.img` when the `tryboot` flag is set.
-This enables the `tryboot` switch to be made at the partition level rather than the file-level without having to modify configuration files in the A/B partitions.
+This enables the `tryboot` switch to be made at the partition level rather than the file-level without having to modify configuration files in the A/B partitions.
=== Example update flow for A/B booting
-The following pseudo-code shows how a hypothetical OS `Update Service` could use `tryboot` + `autoboot.txt` to perform a fail-safe OS upgrade.
+The following pseudo-code shows how a hypothetical OS `Update service` could use `tryboot` in `autoboot.txt` to perform a fail-safe OS upgrade.
-Initial `autoboot.txt`
+Initial `autoboot.txt`:
+
+[source,ini]
----
[all]
tryboot_a_b=1
@@ -41,27 +45,29 @@ boot_partition=3
**Installing the update**
-* System is powered on and boots to partition 2 by default.
-* An `Update Service` downloads the next version of the OS to partition 3.
-* The update is tested by rebooting to `tryboot` mode `reboot "0 tryboot"` where `0` means the default partition.
+* System is powered on and boots to partition 2 by default
+* An `Update service` downloads the next version of the OS to partition 3
+* The update is tested by rebooting to `tryboot` mode `reboot "0 tryboot"` where `0` means the default partition
**Committing or cancelling the update**
-* System boots from partition 3 because the `[tryboot]` filter evaluates to true in `tryboot mode`.
+* System boots from partition 3 because the `[tryboot]` filter evaluates to true in `tryboot mode`
* If tryboot is active (`/proc/device-tree/chosen/bootloader/tryboot == 1`)
** If the current boot partition (`/proc/device-tree/chosen/bootloader/partition`) matches the `boot_partition` in the `[tryboot]` section of `autoboot.txt`
- *** The `Update Service` validates the system to verify that the update was successful.
 *** The `Update service` validates the system to verify that the update was successful
*** If the update was successful
- **** Replace `autoboot.txt` swapping the `boot_partition` configuration.
- **** Normal reboot - partition 3 is now the default boot partition.
+ **** Replace `autoboot.txt` swapping the `boot_partition` configuration
+ **** Normal reboot - partition 3 is now the default boot partition
*** Else
**** `Update Service` marks the update as failed e.g. it removes the update files.
- **** Normal reboot - partition 2 is still the default boot partition because the `tryboot` flag is automatically cleared.
+ **** Normal reboot - partition 2 is still the default boot partition because the `tryboot` flag is automatically cleared
*** End if
** End If
* End If
-Updated `autoboot.txt`
+Updated `autoboot.txt`:
+
+[source,ini]
----
[all]
tryboot_a_b=1
@@ -70,6 +76,7 @@ boot_partition=3
boot_partition=2
----
-**Notes**
-* It's not mandatory to reboot after updating `autoboot.txt`. However, the `Update Service` must be careful to avoid overwriting the current partition since `autoboot.txt` has already been modified to commit the last update.
-* See also: xref:configuration.adoc#device-trees-overlays-and-parameters[Device-tree parameters].
+[NOTE]
+======
+It's not mandatory to reboot after updating `autoboot.txt`. However, the `Update service` must be careful to avoid overwriting the current partition since `autoboot.txt` has already been modified to commit the last update. For more information, see xref:configuration.adoc#device-trees-overlays-and-parameters[Device Tree parameters].
+======
diff --git a/documentation/asciidoc/computers/config_txt/boot.adoc b/documentation/asciidoc/computers/config_txt/boot.adoc
index 18dc76996e..1d778deb47 100644
--- a/documentation/asciidoc/computers/config_txt/boot.adoc
+++ b/documentation/asciidoc/computers/config_txt/boot.adoc
@@ -5,11 +5,13 @@
These options specify the firmware files transferred to the VideoCore GPU prior to booting.
`start_file` specifies the VideoCore firmware file to use.
-`fixup_file` specifies the file used to fix up memory locations used in the `start_file` to match the GPU memory split. Note that the `start_file` and the `fixup_file` are a matched pair - using unmatched files will stop the board from booting. This is an advanced option, so we advise that you use `start_x` and `start_debug` rather than this option.
+`fixup_file` specifies the file used to fix up memory locations used in the `start_file` to match the GPU memory split.
-NOTE: Cut-down firmware (`start*cd.elf` and `fixup*cd.dat`) cannot be selected this way - the system will fail to boot. The only way to enable the cut-down firmware is to specify `gpu_mem=16`. The cut-down firmware removes support for codecs and 3D as well as limiting the initial early-boot framebuffer to 1080p @ 16bpp - although KMS can replace this with up-to 32bpp 4K framebuffer(s) at a later stage as with any firmware.
+The `start_file` and the `fixup_file` are a matched pair - using unmatched files will stop the board from booting. This is an advanced option, so we advise that you use `start_x` and `start_debug` rather than this option.
-NOTE: The Raspberry Pi 5 firmware is self-contained in the bootloader EEPROM.
+NOTE: Cut-down firmware (`start*cd.elf` and `fixup*cd.dat`) cannot be selected this way - the system will fail to boot. The only way to enable the cut-down firmware is to specify `gpu_mem=16`. The cut-down firmware removes support for codecs, 3D and debug logging as well as limiting the initial early-boot framebuffer to 1080p @16bpp - although KMS can replace this with up to 32bpp 4K framebuffer(s) at a later stage as with any firmware.
+
+NOTE: The Raspberry Pi 5, Compute Module 5, and Raspberry Pi 500 firmware is self-contained in the bootloader EEPROM.
=== `cmdline`
@@ -17,9 +19,9 @@ NOTE: The Raspberry Pi 5 firmware is self-contained in the bootloader EEPROM.
=== `kernel`
-`kernel` is the alternative filename on the boot partition to use when loading the kernel. The default value on the Raspberry Pi 1, Zero and Zero W, and Raspberry Pi Compute Module 1 is `kernel.img`. The default value on the Raspberry Pi 2, 3, 3+ and Zero 2 W, and Raspberry Pi Compute Modules 3 and 3+ is `kernel7.img`. The default value on the Raspberry Pi 4 and 400, and Raspberry Pi Compute Module 4 is `kernel8.img`, or `kernel7l.img` if `arm_64bit` is set to 0.
+`kernel` is the alternative filename on the boot partition for loading the kernel. The default value on the Raspberry Pi 1, Zero and Zero W, and Raspberry Pi Compute Module 1 is `kernel.img`. The default value on the Raspberry Pi 2, 3, 3+ and Zero 2 W, and Raspberry Pi Compute Modules 3 and 3+ is `kernel7.img`. The default value on the Raspberry Pi 4 and 400, and Raspberry Pi Compute Module 4 is `kernel8.img`, or `kernel7l.img` if `arm_64bit` is set to 0.
-The Raspberry Pi 5 firmware defaults to loading `kernel_2712.img` because this image contains optimisations specific to Raspberry Pi 5 (e.g. 16K page-size). If this file is not present then the common 64-bit kernel (`kernel8.img`) will be loaded instead.
+The Raspberry Pi 5, Compute Module 5, and Raspberry Pi 500 firmware defaults to loading `kernel_2712.img` because this image contains optimisations specific to those models (e.g. 16K page-size). If this file is not present, then the common 64-bit kernel (`kernel8.img`) will be loaded instead.
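+
+For example, to load a custom-named kernel image (hypothetical filename) instead of the default:
+
+[source,ini]
+----
+# Hypothetical alternative kernel filename on the boot partition
+kernel=my-kernel.img
+----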
=== `arm_64bit`
@@ -27,13 +29,30 @@ If set to 1, the kernel will be started in 64-bit mode. Setting to 0 selects 32-
In 64-bit mode, the firmware will choose an appropriate kernel (e.g. `kernel8.img`), unless there is an explicit `kernel` option defined, in which case that is used instead.
-Defaults to 1 on Pi 4s (Pi 4B, Pi 400, CM4 and CM4S), and 0 on all other platforms. However, if the name given in an explicit `kernel` option matches one of the known kernels then `arm_64bit` will be set accordingly.
+Defaults to 1 on Raspberry Pi 4, 400 and Compute Module 4, 4S platforms. Defaults to 0 on all other platforms. However, if the name given in an explicit `kernel` option matches one of the known kernels then `arm_64bit` will be set accordingly.
+
+64-bit kernels come in the following forms:
+
+* uncompressed image files
+* gzip archives of an image
-NOTE: 64-bit kernels may be uncompressed image files or a gzip archive of an image (which can still be called kernel8.img; the bootloader will recognize the archive from the signature bytes at the beginning).
+Both forms may use the `img` file extension; the bootloader recognizes archives using signature bytes at the start of the file.
-NOTE: The 64-bit kernel will only work on the Raspberry Pi 3, 3+, 4, 400, Zero 2 W and 2B rev 1.2, and Raspberry Pi Compute Modules 3, 3+ and 4.
+The following Raspberry Pi models support this flag:
-NOTE: Raspberry Pi 5 only supports 64-bit kernel so this parameter has been removed.
+* 2B rev 1.2
+* 3B
+* 3A+
+* 3B+
+* 4B
+* 400
+* Zero 2 W
+* Compute Module 3
+* Compute Module 3+
+* Compute Module 4
+* Compute Module 4S
+
+Flagship models since Raspberry Pi 5, Compute Modules since CM5, and Keyboard models since Pi 500 _only_ support the 64-bit kernel. Models that only support a 64-bit kernel ignore this flag.
=== `ramfsfile`
@@ -45,20 +64,22 @@ NOTE: Newer firmware supports the loading of multiple `ramfs` files. You should
`ramfsaddr` is the memory address to which the `ramfsfile` should be loaded.
+[[initramfs]]
=== `initramfs`
The `initramfs` command specifies both the ramfs filename *and* the memory address to which to load it. It performs the actions of both `ramfsfile` and `ramfsaddr` in one parameter. The address can also be `followkernel` (or `0`) to place it in memory after the kernel image. Example values are: `initramfs initramf.gz 0x00800000` or `initramfs init.gz followkernel`. As with `ramfsfile`, newer firmwares allow the loading of multiple files by comma-separating their names.
-NOTE: This option uses different syntax from all the other options, and you should not use a `=` character here.
+NOTE: This option uses different syntax from all the other options, and you should not use the `=` character here.
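+
+For example, on newer firmware that supports multiple comma-separated ramfs files (the filenames are illustrative):
+
+[source,ini]
+----
+initramfs rootfs.cpio.gz,extra.cpio.gz followkernel
+----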
+[[auto_initramfs]]
=== `auto_initramfs`
-If `auto_initramfs` is set to `1`, look for an initramfs file using the same rules as the kernel selection.
+If `auto_initramfs` is set to `1`, the firmware looks for an `initramfs` file to match the kernel. The file must be in the same location as the kernel image, and its name is derived from the kernel's name by replacing the `kernel` prefix with `initramfs` and removing any extension such as `.img`; for example, `kernel8.img` requires `initramfs8`. You can use `auto_initramfs` with custom kernel names provided the names begin with `kernel` and `initramfs` respectively and everything else matches (except for the absence of a file extension on the initramfs). Otherwise, an explicit xref:config_txt.adoc#initramfs[`initramfs`] statement is required.
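+
+For example, the firmware pairs kernel and initramfs files like this (a sketch; the custom names are illustrative):
+
+[source,ini]
+----
+auto_initramfs=1
+# kernel8.img        -> firmware loads initramfs8
+# kernel_custom.img  -> firmware loads initramfs_custom
+----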
[[disable_poe_fan]]
=== `disable_poe_fan`
-By default, a probe on the I2C bus will happen at startup, even when a PoE HAT is not attached. Setting this option to 1 disables control of a PoE HAT fan through I2C (on pins ID_SD & ID_SC). If you are not intending to use a PoE HAT doing this is useful if you need to minimise boot time.
+By default, a probe on the I2C bus will happen at startup, even when a PoE HAT is not attached. Setting this option to 1 disables control of a PoE HAT fan through I2C (on pins ID_SD & ID_SC). If you are not intending to use a PoE HAT, this is a helpful way to minimise boot time.
=== `disable_splash`
@@ -66,7 +87,7 @@ If `disable_splash` is set to `1`, the rainbow splash screen will not be shown o
=== `enable_uart`
-`enable_uart=1` (in conjunction with `console=serial0` in `cmdline.txt`) requests that the kernel creates a serial console, accessible using GPIOs 14 and 15 (pins 8 and 10 on the 40-pin header). Editing `cmdline.txt` to remove the line `quiet` enables boot messages from the kernel to also appear there. See also `uart_2ndstage`.
+`enable_uart=1` (in conjunction with `console=serial0,115200` in `cmdline.txt`) requests that the kernel creates a serial console, accessible using GPIOs 14 and 15 (pins 8 and 10 on the 40-pin header). Editing `cmdline.txt` to remove the line `quiet` enables boot messages from the kernel to also appear there. See also `uart_2ndstage`.
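+
+A minimal serial-console setup might look like this (the `cmdline.txt` contents are shown as a comment for context):
+
+[source,ini]
+----
+# config.txt
+enable_uart=1
+# cmdline.txt should contain: console=serial0,115200
+----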
=== `force_eeprom_read`
@@ -75,30 +96,193 @@ Set this option to `0` to prevent the firmware from trying to read an I2C HAT EE
[[os_prefix]]
=== `os_prefix`
-`os_prefix` is an optional setting that allows you to choose between multiple versions of the kernel and Device Tree files installed on the same card. Any value in `os_prefix` is prepended to (stuck in front of) the name of any operating system files loaded by the firmware, where "operating system files" is defined to mean kernels, initramfs, cmdline.txt, .dtbs and overlays. The prefix would commonly be a directory name, but it could also be part of the filename such as "test-". For this reason, directory prefixes must include the trailing `/` character.
+`os_prefix` is an optional setting that allows you to choose between multiple versions of the kernel and Device Tree files installed on the same card. Any value in `os_prefix` is prepended to the name of any operating system files loaded by the firmware, where "operating system files" is defined to mean kernels, `initramfs`, `cmdline.txt`, `.dtbs` and overlays. The prefix would commonly be a directory name, but it could also be part of the filename such as "test-". For this reason, directory prefixes must include the trailing `/` character.
In an attempt to reduce the chance of a non-bootable system, the firmware first tests the supplied prefix value for viability - unless the expected kernel and .dtb can be found at the new location/name, the prefix is ignored (set to ""). A special case of this viability test is applied to overlays, which will only be loaded from `+${os_prefix}${overlay_prefix}+` (where the default value of <<overlay_prefix>> is "overlays/") if `+${os_prefix}${overlay_prefix}README+` exists, otherwise it ignores `os_prefix` and treats overlays as shared.
-(The reason the firmware checks for the existence of key files rather than directories when checking prefixes is twofold - the prefix may not be a directory, and not all boot methods support testing for the existence of a directory.)
+(The reason the firmware checks for the existence of key files rather than directories when checking prefixes is twofold: the prefix may not be a directory, and not all boot methods support testing for the existence of a directory.)
NOTE: Any user-specified OS file can bypass all prefixes by using an absolute path (with respect to the boot partition) - just start the file path with a `/`, e.g. `kernel=/my_common_kernel.img`.
See also <<overlay_prefix>> and xref:legacy_config_txt.adoc#upstream_kernel[`upstream_kernel`].
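+
+For example, to keep an alternative OS version in a subdirectory of the boot partition (the directory name is illustrative; note the trailing `/`):
+
+[source,ini]
+----
+# Loads upgrade/kernel8.img, upgrade/cmdline.txt,
+# upgrade/overlays/... and so on
+os_prefix=upgrade/
+----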
-=== `otg_mode` (Raspberry Pi 4 Only)
+=== `otg_mode` (Raspberry Pi 4 only)
USB On-The-Go (often abbreviated to OTG) is a feature that allows supporting USB devices with an appropriate OTG cable to configure themselves as USB hosts. On older Raspberry Pis, a single USB 2 controller was used in both USB host and device mode.
-Raspberry Pi 4B and Raspberry Pi 400 (not CM4 or CM4IO) add a high performance USB 3 controller, attached via PCIe, to drive the main USB ports. The legacy USB 2 controller is still available on the USB-C power connector for use as a device (`otg_mode=0`, the default).
+Flagship models since Raspberry Pi 4B and Keyboard models since Pi 400 add a high-performance USB 3 controller, attached via PCIe, to drive the main USB ports. The legacy USB 2 controller is still available on the USB-C power connector for use as a device (`otg_mode=0`, the default). Compute Modules before CM5 do not include this high-performance USB 3 controller.
+
+`otg_mode=1` requests that a more capable XHCI USB 2 controller is used as an alternative host controller on that USB-C connector.
-`otg_mode=1` requests that a more capable XHCI USB 2 controller is used as another host controller on that USB-C connector.
+NOTE: By default, Raspberry Pi OS includes a line in `/boot/firmware/config.txt` that enables this setting on Compute Module 4.
-NOTE: Because CM4 and CM4IO don't include the external USB 3 controller, Raspberry Pi OS images set `otg_mode=1` on CM4 for better performance.
[[overlay_prefix]]
=== `overlay_prefix`
-Specifies a subdirectory/prefix from which to load overlays - defaults to `overlays/` (note the trailing `/`). If used in conjunction with <>, the `os_prefix` comes before the `overlay_prefix`, i.e. `dtoverlay=disable-bt` will attempt to load `+${os_prefix}${overlay_prefix}disable-bt.dtbo+`.
+Specifies a subdirectory/prefix from which to load overlays, and defaults to `overlays/` (note the trailing `/`). If used in conjunction with <<os_prefix>>, the `os_prefix` comes before the `overlay_prefix`, i.e. `dtoverlay=disable-bt` will attempt to load `+${os_prefix}${overlay_prefix}disable-bt.dtbo+`.
NOTE: Unless `+${os_prefix}${overlay_prefix}README+` exists, overlays are shared with the main OS (i.e. `os_prefix` is ignored).
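+
+For example (a sketch with an illustrative directory name):
+
+[source,ini]
+----
+# dtoverlay=disable-bt will now load myoverlays/disable-bt.dtbo
+overlay_prefix=myoverlays/
+----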
+=== Configuration properties
+
+Raspberry Pi 5 requires a `config.txt` file to be present to indicate that the partition is bootable.
+
+[[boot_ramdisk]]
+==== `boot_ramdisk`
+
+If this property is set to `1` then the bootloader will attempt to load a ramdisk file called `boot.img` containing the xref:configuration.adoc#boot-folder-contents[boot filesystem]. Subsequent files (e.g. `start4.elf`) are read from the ramdisk instead of the original boot filesystem.
+
+The primary purpose of `boot_ramdisk` is to support `secure-boot`; however, unsigned `boot.img` files can also be useful in network boot or `RPIBOOT` configurations.
+
+* The maximum size for a ramdisk file is 96MB.
+* `boot.img` files are raw disk `.img` files. The recommended format is a plain FAT32 partition with no MBR.
+* The memory for the ramdisk filesystem is released before the operating system is started.
+* If xref:raspberry-pi.adoc#fail-safe-os-updates-tryboot[TRYBOOT] is selected then the bootloader will search for `tryboot.img` instead of `boot.img`.
+* See also xref:config_txt.adoc#autoboot-txt[autoboot.txt].
+
+For more information about `secure-boot` and creating `boot.img` files please see https://github.com/raspberrypi/usbboot/blob/master/Readme.md[USBBOOT].
+
+Default: `0`
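+
+For example, a minimal `config.txt` on the physical boot partition that hands over to a ramdisk image:
+
+[source,ini]
+----
+# Load boot.img (or tryboot.img during TRYBOOT) as the boot filesystem
+boot_ramdisk=1
+----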
+
+[[boot_load_flags]]
+==== `boot_load_flags`
+
+Experimental property for custom firmware (bare metal).
+
+Bit 0 (0x1) indicates that the .elf file is custom firmware. This disables any compatibility checks (e.g. whether USB MSD boot is supported) and resets PCIe before starting the executable.
+
+Not relevant on Raspberry Pi 5 because there is no `start.elf` file.
+
+Default: `0x0`
+
+[[enable_rp1_uart]]
+==== `enable_rp1_uart`
+
+When set to `1`, the firmware initialises RP1 UART0 to 115200bps and doesn't reset RP1 before starting the OS (the reset is separately configurable using `pciex4_reset=1`).
+This makes it easier to get UART output on the 40-pin header in early boot code, for instance during bare-metal debugging.
+
+Default: `0x0`
+
+[[pciex4_reset]]
+==== `pciex4_reset`
+
+Raspberry Pi 5 only.
+
+By default, the PCIe x4 controller used by `RP1` is reset before starting the operating system. If this parameter is set to `0` then the reset is disabled allowing operating system or bare metal code to inherit the PCIe configuration setup from the bootloader.
+
+Default: `1`
+
+[[uart_2ndstage]]
+==== `uart_2ndstage`
+
+If `uart_2ndstage` is set to `1`, debug logging to the UART is enabled. This option also automatically enables UART logging in `start.elf`. This is also described on the xref:config_txt.adoc#boot-options[Boot options] page.
+
+The `BOOT_UART` property also enables bootloader UART logging but does not enable UART logging in `start.elf` unless `uart_2ndstage=1` is also set.
+
+Default: `0`
+
+[[erase_eeprom]]
+==== `erase_eeprom`
+
+If `erase_eeprom` is set to `1` then `recovery.bin` will erase the entire SPI EEPROM instead of flashing the bootloader image. This property has no effect during a normal boot.
+
+Default: `0`
+
+[[eeprom_write_protect]]
+==== `eeprom_write_protect`
+
+Configures the EEPROM `Write Status Register`. This can be set either to mark the entire EEPROM as write-protected, or to clear write-protection.
+
+This option must be used in conjunction with the EEPROM `/WP` pin, which controls updates to the EEPROM `Write Status Register`. Pulling `/WP` low (`EEPROM_nWP` on CM4, or `TP5` on a Raspberry Pi 4) does NOT write-protect the EEPROM unless the `Write Status Register` has also been configured.
+
+See the https://www.winbond.com/resource-files/w25x40cl_f%2020140325.pdf[Winbond W25x40cl] or https://www.winbond.com/hq/product/code-storage-flash-memory/serial-nor-flash/?__locale=en&partNo=W25Q16JV[Winbond W25Q16JV] datasheets for further details.
+
+The following `eeprom_write_protect` settings can be used in `config.txt` for `recovery.bin`:
+
+|===
+| Value | Description
+
+| 1
+| Configures the write protect regions to cover the entire EEPROM.
+
+| 0
+| Clears the write protect regions.
+
+| -1
+| Do nothing.
+|===
+
+NOTE: `flashrom` does not support clearing of the write-protect regions and will fail to update the EEPROM if write-protect regions are defined.
+
+On Raspberry Pi 5 `/WP` is pulled low by default and consequently write-protect is enabled as soon as the `Write Status Register` is configured. To clear write-protect pull `/WP` high by connecting `TP14` and `TP1`.
+
+Default: `-1`
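+
+For example, a `recovery.bin` configuration that write-protects the entire EEPROM (remember that the `/WP` pin must also be asserted, as described above):
+
+[source,ini]
+----
+eeprom_write_protect=1
+----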
+
+[[os_check]]
+==== `os_check`
+
+On Raspberry Pi 5, the firmware automatically checks for a compatible Device Tree file before attempting to boot from the current partition. Without this check, older non-compatible kernels would be loaded and then hang.
+To disable this check (e.g. for bare-metal development), set `os_check=0` in `config.txt`.
+
+Default: `1`
+
+[[bootloader_update]]
+==== `bootloader_update`
+
+This option may be set to 0 to block self-update without requiring the EEPROM configuration to be updated. This is sometimes useful when updating multiple Raspberry Pis via network boot because this option can be controlled per Raspberry Pi (e.g. via a serial number filter in `config.txt`).
+
+Default: `1`
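+
+For example, to block self-update on a single device in a network-boot fleet using a serial-number conditional filter (the serial shown is illustrative):
+
+[source,ini]
+----
+[0x12345678]
+bootloader_update=0
+[all]
+----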
+
+=== Secure Boot configuration properties
+
+[.whitepaper, title="How to use Raspberry Pi Secure Boot", subtitle="", link=https://pip.raspberrypi.com/categories/685-whitepapers-app-notes/documents/RP-003466-WP/Boot-Security-Howto.pdf]
+****
+This whitepaper describes how to implement secure boot on devices based on Raspberry Pi 4. For an overview of our approach to secure boot, please see the https://pip.raspberrypi.com/categories/685-whitepapers-app-notes/documents/RP-004651-WP/Raspberry-Pi-4-Boot-Security.pdf[Raspberry Pi 4 Boot Security] whitepaper. The secure boot system is intended for use with `buildroot`-based OS images; using it with Raspberry Pi OS is not recommended or supported.
+****
+
+The following `config.txt` properties are used to program the `secure-boot` OTP settings. These changes are irreversible and can only be programmed via `RPIBOOT` when flashing the bootloader EEPROM image. This ensures that `secure-boot` cannot be set remotely or by accidentally inserting a stale SD card image.
+
+For more information about enabling `secure-boot` please see the https://github.com/raspberrypi/usbboot/blob/master/Readme.md#secure-boot[Secure Boot readme] and the https://github.com/raspberrypi/usbboot/blob/master/secure-boot-example/README.md[Secure Boot tutorial] in the https://github.com/raspberrypi/usbboot[USBBOOT] repo.
+
+[[program_pubkey]]
+==== `program_pubkey`
+
+If this property is set to `1` then `recovery.bin` will write the hash of the public key in the EEPROM image to OTP. Once set, the bootloader will reject EEPROM images signed with different RSA keys or unsigned images.
+
+Default: `0`
+
+[[revoke_devkey]]
+==== `revoke_devkey`
+
+If this property is set to `1` then `recovery.bin` will write a value to OTP that prevents the ROM from loading old versions of the second stage bootloader which do not support `secure-boot`. This prevents `secure-boot` from being turned off by reverting to an older release of the bootloader.
+
+Default: `0`
+
+[[program_rpiboot_gpio]]
+==== `program_rpiboot_gpio`
+
+Compute Modules have a dedicated `nRPIBOOT` jumper to select `RPIBOOT` mode. Flagship and Keyboard Raspberry Pi devices with EEPROM lack a dedicated `nRPIBOOT` jumper. To select `RPIBOOT` mode on Flagship and Keyboard devices, pull one of the following GPIO pins low:
+
+* `2`
+* `4`
+* `5`
+* `6`
+* `7`
+* `8`
+
+This property does not depend on `secure-boot`. However, you should verify that this GPIO configuration does not conflict with any HATs which might pull the GPIO low during boot.
+
+For safety, this property can _only_ be programmed via `RPIBOOT`. As a result, you must first clear the bootloader EEPROM using `erase_eeprom`. This causes the ROM to fail over to `RPIBOOT` mode, which then allows this option to be set.
+
+On BCM2712, you can alternatively force `RPIBOOT` mode by holding down the power button while simultaneously connecting a USB-C power supply.
+
+Default: `{nbsp}`
+
+[[program_jtag_lock]]
+==== `program_jtag_lock`
+
+If this property is set to `1` then `recovery.bin` will program an OTP value that prevents VideoCore JTAG from being used. This option requires that `program_pubkey` and `revoke_devkey` are also set. This option can prevent failure analysis, and should only be set after the device has been fully tested.
+
+Default: `0`
+
diff --git a/documentation/asciidoc/computers/config_txt/camera.adoc b/documentation/asciidoc/computers/config_txt/camera.adoc
index 3c8d3601bb..a3caa01349 100644
--- a/documentation/asciidoc/computers/config_txt/camera.adoc
+++ b/documentation/asciidoc/computers/config_txt/camera.adoc
@@ -1,9 +1,9 @@
-== Camera Settings
+== Camera settings
=== `disable_camera_led`
-Setting `disable_camera_led` to `1` prevents the red camera LED from turning on when recording video or taking a still picture. This is useful for preventing reflections when the camera is facing a window, for example.
+Setting `disable_camera_led` to `1` prevents the red camera LED from turning on when recording video or taking a still picture. This is useful for preventing reflections, for example when the camera is facing a window.
=== `awb_auto_is_greyworld`
-Setting `awb_auto_is_greyworld` to `1` allows libraries or applications that do not support the greyworld option internally to capture valid images and videos with NoIR cameras. It switches "auto" awb mode to use the "greyworld" algorithm. This should only be needed for NoIR cameras, or when the High Quality camera has had its xref:../accessories/camera.adoc#filter-removal[IR filter removed].
+Setting `awb_auto_is_greyworld` to `1` allows libraries or applications that do not support the greyworld option internally to capture valid images and videos with NoIR cameras. It switches auto awb mode to use the greyworld algorithm. This should only be needed for NoIR cameras, or when the High Quality camera has had its xref:../accessories/camera.adoc#filter-removal[IR filter removed].
diff --git a/documentation/asciidoc/computers/config_txt/codeclicence.adoc b/documentation/asciidoc/computers/config_txt/codeclicence.adoc
index 3b5a28490d..688591a12f 100644
--- a/documentation/asciidoc/computers/config_txt/codeclicence.adoc
+++ b/documentation/asciidoc/computers/config_txt/codeclicence.adoc
@@ -1,8 +1,10 @@
-== Licence Key and Codec Options
+== Licence key and codec options
Hardware decoding of additional codecs on the Raspberry Pi 3 and earlier models can be enabled by https://codecs.raspberrypi.com/license-keys/[purchasing a licence] that is locked to the CPU serial number of your Raspberry Pi.
-On the Raspberry Pi 4, the hardware codecs for MPEG2 or VC1 are permanently disabled and cannot be enabled even with a licence key; on the Raspberry Pi 4, thanks to its increased processing power compared to earlier models, MPEG2 and VC1 can be decoded in software via applications such as VLC. Therefore, a hardware codec licence key is not needed if you're using a Raspberry Pi 4.
+The Raspberry Pi 4 has permanently disabled hardware decoders for MPEG2 and VC1. These codecs cannot be enabled, so a hardware codec licence key is not needed. Software decoding of MPEG2 and VC1 files performs well enough for typical use cases.
+
+The Raspberry Pi 5 has H.265 (HEVC) hardware decoding. This decoding is enabled by default, so a hardware codec licence key is not needed.
=== `decode_MPG2`
diff --git a/documentation/asciidoc/computers/config_txt/common.adoc b/documentation/asciidoc/computers/config_txt/common.adoc
index 91122ba829..7f4f89708e 100644
--- a/documentation/asciidoc/computers/config_txt/common.adoc
+++ b/documentation/asciidoc/computers/config_txt/common.adoc
@@ -1,43 +1,59 @@
-== Common Options
+== Common options
-=== Common Display Options
+=== Common display options
-==== `hdmi_enable_4kp60` (Raspberry Pi 4 Only)
+==== `hdmi_enable_4kp60`
-By default, when connected to a 4K monitor, the Raspberry Pi 4B, 400 and CM4 will select a 30Hz refresh rate. Use this option to allow selection of 60Hz refresh rates.
+NOTE: This option applies only to Raspberry Pi 4, Compute Module 4, Compute Module 4S, and Pi 400.
-IMPORTANT: It is not possible to output 4Kp60 on both micro HDMI ports simultaneously.
+By default, when connected to a 4K monitor, certain models select a 30Hz refresh rate. Use this option to allow selection of 60Hz refresh rates. Models impacted by this setting do _not_ support 4Kp60 output on both micro HDMI ports simultaneously. Enabling this setting increases power consumption and temperature.
-WARNING: Setting `hdmi_enable_4kp60` will increase power consumption and the temperature of your Raspberry Pi.
-
-=== Common Hardware Configuration Options
+=== Common hardware configuration options
==== `camera_auto_detect`
-With this setting enabled (which it is in Raspberry Pi OS), the firmware will automatically load overlays for CSI cameras that it recognises. Set `camera_auto_detect=0` to disable.
+By default, Raspberry Pi OS includes a line in `/boot/firmware/config.txt` that enables this setting.
+
+When enabled, the firmware will automatically load overlays for recognised CSI cameras.
+
+To disable, set `camera_auto_detect=0` (or remove `camera_auto_detect=1`).
==== `display_auto_detect`
-With this setting enabled (which it is in Raspberry Pi OS), the firmware will automatically load overlays for DSI displays that it recognises. Set `display_auto_detect=0` to disable.
+By default, Raspberry Pi OS includes a line in `/boot/firmware/config.txt` that enables this setting.
+
+When enabled, the firmware will automatically load overlays for recognised DSI displays.
+
+To disable, set `display_auto_detect=0` (or remove `display_auto_detect=1`).
==== `dtoverlay`
The `dtoverlay` option requests the firmware to load a named Device Tree overlay - a configuration file that can enable kernel support for built-in and external hardware. For example, `dtoverlay=vc4-kms-v3d` loads an overlay that enables the kernel graphics driver.
-As a special case, if called with no value - `dtoverlay=` - it marks the end of a list of overlay parameters. If used before any other `dtoverlay` or `dtparam` setting it prevents the loading of any HAT overlay.
+As a special case, if called with no value - `dtoverlay=` - the option marks the end of a list of overlay parameters. If used before any other `dtoverlay` or `dtparam` setting, it prevents the loading of any HAT overlay.
For more details, see xref:configuration.adoc#part3.1[DTBs, overlays and config.txt].
==== `dtparam`
-Device Tree configuration files for Raspberry Pis support a number of parameters for such things as enabling I2C and SPI interfaces. Many DT overlays are configurable via the use of parameters. Both types of parameters can be supplied using the `dtparam` setting. In addition, overlay parameters can be appended to the `dtoverlay` option, separated by commas, but beware the line length limit of 98 characters.
+Device Tree configuration files for Raspberry Pi devices support various parameters for such things as enabling I2C and SPI interfaces. Many DT overlays are configurable via the use of parameters. Both types of parameters can be supplied using the `dtparam` setting. In addition, overlay parameters can be appended to the `dtoverlay` option, separated by commas, but keep in mind the line length limit of 98 characters.
For more details, see xref:configuration.adoc#part3.1[DTBs, overlays and config.txt].
-==== `arm_boost` (Raspberry Pi 4 Only)
+==== `arm_boost`
+
+NOTE: This option applies only to later Raspberry Pi 4B revisions which include two-phase power delivery, and all revisions of Pi 400.
+
+By default, Raspberry Pi OS includes a line in `/boot/firmware/config.txt` that enables this setting on supported devices.
+
+Some Raspberry Pi devices have a second switch-mode power supply for the SoC voltage rail. When enabled, this setting increases the default turbo-mode clock from 1.5GHz to 1.8GHz.
+
+To disable, set `arm_boost=0`.
+
+==== `power_force_3v3_pwm`
-All Raspberry Pi 400s and newer revisions of the Raspberry Pi 4B are equipped with a second switch-mode power supply for the SoC voltage rail, and this allows the default turbo-mode clock to be increased from 1.5GHz to 1.8GHz. This change is enabled by default in Raspberry Pi OS. Set `arm_boost=0` to disable.
+NOTE: This option applies only to Raspberry Pi 5, Compute Module 5, and Pi 500.
-==== `power_force_3v3_pwm` (Raspberry Pi 5 Only)
+Forces PWM on 3.3V output from the GPIO header or CSI connector.
-Forces PWM when using a 3V3 power supply supply. Set `power_force_3v3_pwm=0` to disable.
+To disable, set `power_force_3v3_pwm=0`.
diff --git a/documentation/asciidoc/computers/config_txt/conditional.adoc b/documentation/asciidoc/computers/config_txt/conditional.adoc
index 1fa83dd3e9..f33a3d3206 100644
--- a/documentation/asciidoc/computers/config_txt/conditional.adoc
+++ b/documentation/asciidoc/computers/config_txt/conditional.adoc
@@ -1,7 +1,7 @@
[[conditional-filters]]
-== Conditional Filters
+== Conditional filters
-When a single SD Card (or card image) is being used with one Raspberry Pi and one monitor, it is easy to set `config.txt` as required for that specific combination and keep it that way, amending it only when something changes.
+When a single SD card (or card image) is being used with one Raspberry Pi and one monitor, it is easy to set `config.txt` as required for that specific combination and keep it that way, amending it only when something changes.
However, if one Raspberry Pi is swapped between different monitors, or if the SD card (or card image) is being swapped between multiple boards, a single set of settings may no longer be sufficient. Conditional filters allow you to define certain sections of the config file to be used only in specific cases, allowing a single `config.txt` to create different configurations when read by different hardware.
@@ -9,9 +9,9 @@ However, if one Raspberry Pi is swapped between different monitors, or if the SD
The `[all]` filter is the most basic filter. It resets all previously set filters and allows any settings listed below it to be applied to all hardware. It is usually a good idea to add an `[all]` filter at the end of groups of filtered settings to avoid unintentionally combining filters (see below).
-=== Model Filters
+=== Model filters
-The conditional model filters are applied according to the following table.
+The conditional model filters apply according to the following table.
|===
| Filter | Applicable model(s)
@@ -32,17 +32,32 @@ The conditional model filters are applied according to the following table.
| Model 4B, Pi 400, Compute Module 4, Compute Module 4S
| `[pi5]`
-| Raspberry Pi 5
+| Raspberry Pi 5, Compute Module 5, Pi 500
| `[pi400]`
| Pi 400 (also sees `[pi4]` contents)
+| `[pi500]`
+| Pi 500 (also sees `[pi5]` contents)
+
+| `[cm1]`
+| Compute Module 1 (also sees `[pi1]` contents)
+
+| `[cm3]`
+| Compute Module 3 (also sees `[pi3]` contents)
+
+| `[cm3+]`
+| Compute Module 3+ (also sees `[pi3+]` contents)
+
| `[cm4]`
| Compute Module 4 (also sees `[pi4]` contents)
| `[cm4s]`
| Compute Module 4S (also sees `[pi4]` contents)
+| `[cm5]`
+| Compute Module 5 (also sees `[pi5]` contents)
+
| `[pi0]`
| Zero, Zero W, Zero 2 W
@@ -60,21 +75,67 @@ The conditional model filters are applied according to the following table.
These are particularly useful for defining different `kernel`, `initramfs`, and `cmdline` settings, as the Raspberry Pi 1 and Raspberry Pi 2 require different kernels. They can also be useful to define different overclocking settings, as the Raspberry Pi 1 and Raspberry Pi 2 have different default speeds. For example, to define separate `initramfs` images for each:
----
- [pi1]
- initramfs initrd.img-3.18.7+ followkernel
- [pi2]
- initramfs initrd.img-3.18.7-v7+ followkernel
- [all]
+[pi1]
+initramfs initrd.img-3.18.7+ followkernel
+[pi2]
+initramfs initrd.img-3.18.7-v7+ followkernel
+[all]
----
Remember to use the `[all]` filter at the end, so that any subsequent settings aren't limited to Raspberry Pi 2 hardware only.
-NOTE: Some models of Raspberry Pi (Zero W, Zero 2 W, Model 3B+, Pi 400, Compute Module 4 and Compute Module 4S) see the settings for multiple filters (as listed in the table above). This means that if you want a setting to apply only to (e.g.) a Model 4B without _also_ applying that setting to a Pi 400, then the setting in the `[pi4]` section would need to be reverted by an alternate setting in a following `[pi400]` section - the ordering of such sections is significant. Alternatively, you could use a `[board-type=0x11]` filter which has a one-to-one mapping to different hardware products.
+[NOTE]
+====
+Some models of Raspberry Pi, including Zero, Compute Module, and Keyboard models, read settings from multiple filters. To apply a setting to only one model:
+
+* apply the setting to the base model (e.g. `[pi4]`), then revert the setting for all models that read the base model's filters (e.g. `[pi400]`, `[cm4]`, `[cm4s]`)
+* use the `board-type` filter with a revision code to target a single model (e.g. `[board-type=0x11]`)
+====
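+
+For example, to enable a setting on the Model 4B without also enabling it on the models that read `[pi4]` content (section ordering is significant; `hdmi_enable_4kp60` is just an illustration):
+
+[source,ini]
+----
+[pi4]
+hdmi_enable_4kp60=1
+# Revert for the other models that also read [pi4] settings
+[pi400]
+hdmi_enable_4kp60=0
+[cm4]
+hdmi_enable_4kp60=0
+[cm4s]
+hdmi_enable_4kp60=0
+[all]
+----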
=== The `[none]` filter
The `[none]` filter prevents any settings that follow from being applied to any hardware. Although there is nothing that you can't do without `[none]`, it can be a useful way to keep groups of unused settings in config.txt without having to comment out every line.
+=== The `[partition=N]` filter
+The `partition` filter can be used to select alternate boot flows according to the requested partition number (`sudo reboot N`) or via direct usage of the `PM_RSTS` watchdog register.
+
+[source,ini]
+----
+# Bootloader EEPROM config.
+# If PM_RSTS is partition 62 then set bootloader properties to disable
+# SD high speed and show HDMI diagnostics
+# Boot from partition 2 with debug option.
+[partition=62]
+# Only high (>31) partition can be remapped.
+PARTITION=2
+SD_QUIRKS=0x1
+HDMI_DELAY=0
+----
+
+Example `config.txt` (currently Raspberry Pi 5 onwards):
+[source,ini]
+----
+# config.txt - If the original requested partition number in PM_RSTS was a
+# special number then use an alternate cmdline.txt
+[partition=62]
+cmdline=cmdline-recovery.txt
+----
+
+The raw value of the `PM_RSTS` register at bootup is available via `/proc/device-tree/chosen/bootloader/rsts` and the final partition number used for booting is available via `/proc/device-tree/chosen/bootloader/partition`. These are big-endian binary values.
+
+=== The `[boot_partition=N]` filter
+The `boot_partition` filter can be used to select alternate OS files (e.g. `cmdline.txt`) to be loaded, depending on which partition `config.txt` was loaded from after processing `autoboot.txt`. This is intended for use with an A/B boot system managed by `autoboot.txt`, where it is desirable to install identical files to the boot partitions of both the `A` and `B` images.
+
+Example `config.txt`, selecting the matching root filesystem for an A/B boot system:
+[source,ini]
+----
+[boot_partition=1]
+cmdline=cmdline_rootfs_a.txt
+
+[boot_partition=2]
+cmdline=cmdline_rootfs_b.txt
+----
+
=== The `[tryboot]` filter
This filter succeeds if the `tryboot` reboot flag was set.
@@ -85,31 +146,29 @@ It is intended for use in xref:config_txt.adoc#autoboot-txt[autoboot.txt] to sel
When switching between multiple monitors while using a single SD card in your Raspberry Pi, and where a blank config isn't sufficient to automatically select the desired resolution for each one, this allows specific settings to be chosen based on the monitors' EDID names.
-To view the "EDID name" of an attached monitor you need to follow a few steps. First run the following command to see which output-devices you have on your Raspberry Pi:
+To view the EDID name of an attached monitor, you need to follow a few steps. Run the following command to see which output devices you have on your Raspberry Pi:
-[source]
+[source,console]
----
-ls -1 /sys/class/drm/card?-HDMI-A-?/edid
+$ ls -1 /sys/class/drm/card?-HDMI-A-?/edid
----
On a Raspberry Pi 4, this will print something like:
-[source]
----
/sys/class/drm/card1-HDMI-A-1/edid
/sys/class/drm/card1-HDMI-A-2/edid
----
-You then need to run `edid-decode` against each of these filenames, e.g.
+You then need to run `edid-decode` against each of these filenames, for example:
-[source]
+[source,console]
----
-edid-decode /sys/class/drm/card1-HDMI-A-1/edid
+$ edid-decode /sys/class/drm/card1-HDMI-A-1/edid
----
-If there's no monitor connected to that particular output-device it'll tell you the EDID was empty, otherwise it'll give you *lots* of information about your monitor's capabilities. You need to look for the lines specifying the `Manufacturer` and the `Display Product Name`. The "EDID name" is then `-`, with any spaces in either string replaced by underscores. For example, if your `edid-decode` output included:
+If there's no monitor connected to that particular output device, it'll tell you the EDID was empty; otherwise it will give you *lots* of information about your monitor's capabilities. You need to look for the lines specifying the `Manufacturer` and the `Display Product Name`. The EDID name is then `<Manufacturer>-<Display Product Name>`, with any spaces in either string replaced by underscores. For example, if your `edid-decode` output included:
-[source]
----
....
Vendor & Product Identification:
@@ -119,82 +178,92 @@ If there's no monitor connected to that particular output-device it'll tell you
....
----
-then the EDID name for this monitor would be `DEL-DELL_U2422H`.
+The EDID name for this monitor would be `DEL-DELL_U2422H`.
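+
+To pick out just these two lines from the `edid-decode` output, you can filter with `grep`, then join the values as described above:
+
+[source,console]
+----
+$ edid-decode /sys/class/drm/card1-HDMI-A-1/edid | grep -E 'Manufacturer:|Display Product Name:'
+----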
You can then use this as a conditional-filter to specify settings that only apply when this particular monitor is connected:
-[source]
+[source,ini]
----
[EDID=DEL-DELL_U2422H]
cmdline=cmdline_U2422H.txt
[all]
----
-Note that these settings apply only at boot, so the monitor must be connected at boot time and the Raspberry Pi must be able to read its EDID information to find the correct name. Hotplugging a different monitor into the Raspberry Pi after boot will not select different settings.
+These settings apply only at boot. The monitor must be connected at boot time, and the Raspberry Pi must be able to read its EDID information to find the correct name. Hotplugging a different monitor into the Raspberry Pi after boot will not select different settings.
On the Raspberry Pi 4, if both HDMI ports are in use, then the EDID filter will be checked against both of them, and configuration from all matching conditional filters will be applied.
NOTE: This setting is not available on Raspberry Pi 5.
-=== The Serial Number Filter
+=== The serial number filter
-Sometimes settings should only be applied to a single specific Raspberry Pi, even if you swap the SD card to a different one. Examples include licence keys and overclocking settings (although the licence keys already support SD card swapping in a different way). You can also use this to select different display settings, even if the EDID identification above is not possible, provided that you don't swap monitors between your Raspberry Pis. For example, if your monitor doesn't supply a usable EDID name, or if you are using composite output (for which EDID cannot be read).
+Sometimes settings should only be applied to a single specific Raspberry Pi, even if you swap the SD card to a different one. Examples include licence keys and overclocking settings (although the licence keys already support SD card swapping in a different way). You can also use this to select different display settings, even if the EDID identification above is not possible, provided that you don't swap monitors between your Raspberry Pis. For example, if your monitor doesn't supply a usable EDID name, or if you are using composite output (from which EDID cannot be read).
To view the serial number of your Raspberry Pi, run the following command:
-[source]
+[source,console]
----
-cat /proc/cpuinfo
+$ cat /proc/cpuinfo
----
-A 16-digit hex value will be displayed near the bottom of the output -- your Raspberry Pi's serial number is the last eight hex-digits. For example, if you see:
+A 16-digit hex value will be displayed near the bottom of the output. Your Raspberry Pi's serial number is the last eight hex-digits. For example, if you see:
-[source]
----
Serial : 0000000012345678
----
-then you can define settings that will only be applied to this specific Raspberry Pi:
+The serial number is `12345678`.
+
+NOTE: On some Raspberry Pi models, the first eight hex-digits contain values other than `0`. Even in this case, use only the last eight hex-digits as the serial number.
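+
+To extract the eight digits in one step, you can use `awk` (a sketch that assumes the `/proc/cpuinfo` field layout shown above):
+
+[source,console]
+----
+$ awk '/^Serial/ {print substr($3, 9)}' /proc/cpuinfo
+----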
-[source]
+You can define settings that will only be applied to this specific Raspberry Pi:
+
+[source,ini]
----
[0x12345678]
-# settings here are applied only to the Raspberry Pi with this serial
+# settings here apply only to the Raspberry Pi with this serial
+
[all]
-# settings here are applied to all hardware
+# settings here apply to all hardware
+
----
-=== The GPIO Filter
+=== The GPIO filter
-You can also filter depending on the state of a GPIO. For example
+You can also filter depending on the state of a GPIO. For example:
-[source]
+[source,ini]
----
[gpio4=1]
-# Settings here are applied if GPIO 4 is high
+# Settings here apply if GPIO 4 is high
[gpio2=0]
-# Settings here are applied if GPIO 2 is low
+# Settings here apply if GPIO 2 is low
[all]
-# settings here are applied to all hardware
+# settings here apply to all hardware
+
----
-=== Combining Conditional Filters
+=== Combine conditional filters
Filters of the same type replace each other, so `[pi2]` overrides `[pi1]`, because it is not possible for both to be true at once.
-Filters of different types can be combined simply by listing them one after the other, for example:
+Filters of different types can be combined by listing them one after the other, for example:
-[source]
+[source,ini]
----
- # settings here are applied to all hardware
- [EDID=VSC-TD2220]
- # settings here are applied only if monitor VSC-TD2220 is connected
- [pi2]
- # settings here are applied only if monitor VSC-TD2220 is connected *and* on a Raspberry Pi 2
- [all]
- # settings here are applied to all hardware
+# settings here apply to all hardware
+
+[EDID=VSC-TD2220]
+# settings here apply only if monitor VSC-TD2220 is connected
+
+[pi2]
+# settings here apply only if monitor VSC-TD2220 is connected *and* on a Raspberry Pi 2
+
+[all]
+# settings here apply to all hardware
+
----
Use the `[all]` filter to reset all previous filters and avoid unintentionally combining different filter types.
diff --git a/documentation/asciidoc/computers/config_txt/gpio.adoc b/documentation/asciidoc/computers/config_txt/gpio.adoc
index af35509019..2508cbd06a 100644
--- a/documentation/asciidoc/computers/config_txt/gpio.adoc
+++ b/documentation/asciidoc/computers/config_txt/gpio.adoc
@@ -1,10 +1,9 @@
-== GPIO Control
+== GPIO control
=== `gpio`
-The `gpio` directive allows GPIO pins to be set to specific modes and values at boot time in a way that would
-previously have needed a custom `dt-blob.bin` file. Each line applies the same settings (or at least makes the same
-changes) to a set of pins, either a single pin (`3`), a range of pins (`3-4`), or a comma-separated list of either (`3-4,6,8`).
+The `gpio` directive allows GPIO pins to be set to specific modes and values at boot time in a way that would previously have needed a custom `dt-blob.bin` file. Each line applies the same settings (or at least makes the same changes) to a set of pins, addressing either a single pin (`3`), a range of pins (`3-4`), or a comma-separated list of either (`3-4,6,8`).
+
The pin set is followed by an `=` and one or more comma-separated attributes from this list:
* `ip` - Input
@@ -16,10 +15,11 @@ The pin set is followed by an `=` and one or more comma-separated attributes fro
* `pd` - Pull down
* `pn/np` - No pull
-`gpio` settings are applied in order, so those appearing later override those appearing earlier.
+`gpio` settings apply in order, so those appearing later override those appearing earlier.
Examples:
+[source,ini]
----
# Select Alt2 for GPIO pins 0 to 27 (for DPI24)
gpio=0-27=a2
@@ -34,13 +34,9 @@ gpio=18,20=pu
gpio=17-21=ip
----
-The `gpio` directive respects the "[...]" conditional filters in `config.txt`, so it is possible to use different settings
-based on the model, serial number, and EDID.
+The `gpio` directive respects the "[...]" conditional filters in `config.txt`, so it is possible to use different settings based on the model, serial number, and EDID.
-GPIO changes made through this mechanism do not have any direct effect on the kernel -- they don't cause GPIO pins to
-be exported to the sysfs interface, and they can be overridden by pinctrl entries in the Device Tree as well as
-utilities like `pinctrl`.
+GPIO changes made through this mechanism do not have any direct effect on the kernel. They don't cause GPIO pins to be exported to the `sysfs` interface, and they can be overridden by `pinctrl` entries in the Device Tree as well as utilities like `pinctrl`.
-Note also that there is a delay of a few seconds between power being applied and the changes taking effect -- longer
-if booting over the network or from a USB mass storage device.
+Note also that there is a delay of a few seconds between power being applied and the changes taking effect; this delay is longer when booting over the network or from a USB mass storage device.
diff --git a/documentation/asciidoc/computers/config_txt/memory.adoc b/documentation/asciidoc/computers/config_txt/memory.adoc
index fb554b8939..8c6d907310 100644
--- a/documentation/asciidoc/computers/config_txt/memory.adoc
+++ b/documentation/asciidoc/computers/config_txt/memory.adoc
@@ -1,9 +1,10 @@
-== Memory Options
+== Memory options
=== `total_mem`
This parameter can be used to force a Raspberry Pi to limit its memory capacity: specify the total amount of RAM, in megabytes, you wish the Raspberry Pi to use. For example, to make a 4GB Raspberry Pi 4B behave as though it were a 1GB model, use the following:
+[source,ini]
----
total_mem=1024
----
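+
+After rebooting, you can confirm that the limit took effect:
+
+[source,console]
+----
+$ free -m
+----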
diff --git a/documentation/asciidoc/computers/config_txt/overclocking.adoc b/documentation/asciidoc/computers/config_txt/overclocking.adoc
index 7708fa63c3..b76a8ac8a5 100644
--- a/documentation/asciidoc/computers/config_txt/overclocking.adoc
+++ b/documentation/asciidoc/computers/config_txt/overclocking.adoc
@@ -1,15 +1,16 @@
-== Overclocking Options
+== Overclocking options
-The kernel has a https://www.kernel.org/doc/html/latest/admin-guide/pm/cpufreq.html[CPUFreq] driver with the "powersave" governor enabled by default, switched to "ondemand" during boot, when xref:configuration.adoc#raspi-config[raspi-config] is installed. With "ondemand" governor, CPU frequency will vary with processor load. You can adjust the minimum values with the `*_min` config options or disable dynamic clocking by applying a static scaling governor ("powersave" or "performance") or with `force_turbo=1`.
+The kernel has a https://www.kernel.org/doc/html/latest/admin-guide/pm/cpufreq.html[CPUFreq] driver with the powersave governor enabled by default; this is switched to ondemand during boot when xref:configuration.adoc#raspi-config[raspi-config] is installed. With the ondemand governor, CPU frequency will vary with processor load. You can adjust the minimum values with the `*_min` config options, or disable dynamic clocking by applying a static scaling governor (powersave or performance) or with `force_turbo=1`.
-Overclocking and overvoltage will be disabled at runtime when the SoC reaches `temp_limit` (see below), which defaults to 85°C, in order to cool down the SoC. You should not hit this limit with Raspberry Pi 1 and Raspberry Pi 2, but you are more likely to with Raspberry Pi 3 and newer Overclocking and overvoltage are also disabled when an undervoltage situation is detected.
+Overclocking and overvoltage will be disabled at runtime when the SoC reaches `temp_limit` (see below), which defaults to 85°C, in order to cool down the SoC. You should not hit this limit with Raspberry Pi 1 and Raspberry Pi 2, but you are more likely to with Raspberry Pi 3 and newer. Overclocking and overvoltage are also disabled when an undervoltage situation is detected.
NOTE: For more information xref:raspberry-pi.adoc#frequency-management-and-thermal-control[see the section on frequency management and thermal control].
-WARNING: Setting any overclocking parameters to values other than those used by xref:configuration.adoc#overclock[raspi-config] may set a permanent bit within the SoC, making it possible to detect that your Raspberry Pi has been overclocked. The specific circumstances where the overclock bit is set are if `force_turbo` is set to `1` and any of the `over_voltage_*` options are set to a value > `0`. See the https://www.raspberrypi.com/news/introducing-turbo-mode-up-to-50-more-performance-for-free/[blog post on Turbo Mode] for more information.
+WARNING: Setting any overclocking parameters to values other than those used by xref:configuration.adoc#overclock[`raspi-config`] may set a permanent bit within the SoC. This makes it possible to detect that your Raspberry Pi was once overclocked. The overclock bit is set when `force_turbo` is set to `1` and any of the `over_voltage_*` options is set to a value of more than `0`. See the https://www.raspberrypi.com/news/introducing-turbo-mode-up-to-50-more-performance-for-free/[blog post on Turbo mode] for more information.
=== Overclocking
+[cols="1m,3"]
|===
| Option | Description
@@ -20,19 +21,19 @@ WARNING: Setting any overclocking parameters to values other than those used by
| Increases `arm_freq` to the highest supported frequency for the board-type and firmware. Set to `1` to enable.
| gpu_freq
-| Sets `core_freq`, `h264_freq`, `isp_freq`, `v3d_freq` and `hevc_freq` together
+| Sets `core_freq`, `h264_freq`, `isp_freq`, `v3d_freq` and `hevc_freq` together.
| core_freq
-| Frequency of the GPU processor core in MHz, influences CPU performance because it drives the L2 cache and memory bus; the L2 cache benefits only Raspberry Pi Zero / Raspberry Pi Zero W / Raspberry Pi 1, there is a small benefit for SDRAM on Raspberry Pi 2 / Raspberry Pi 3. See section below for use on the Raspberry Pi 4.
+| Frequency of the GPU processor core in MHz. Influences CPU performance because it drives the L2 cache and memory bus; the L2 cache benefits only Raspberry Pi Zero/Raspberry Pi Zero W/Raspberry Pi 1, and there is a small benefit for SDRAM on Raspberry Pi 2 and Raspberry Pi 3. See the section below for use on Raspberry Pi 4.
| h264_freq
-| Frequency of the hardware video block in MHz; individual override of the `gpu_freq` setting
+| Frequency of the hardware video block in MHz; individual override of the `gpu_freq` setting.
| isp_freq
-| Frequency of the image sensor pipeline block in MHz; individual override of the `gpu_freq` setting
+| Frequency of the image sensor pipeline block in MHz; individual override of the `gpu_freq` setting.
| v3d_freq
-| Frequency of the 3D block in MHz; individual override of the `gpu_freq` setting. On Raspberry Pi 5 V3D is independent of `core_freq`, `isp_freq` and `hevc_freq`
+| Frequency of the 3D block in MHz; individual override of the `gpu_freq` setting. On Raspberry Pi 5, V3D is independent of `core_freq`, `isp_freq` and `hevc_freq`.
| hevc_freq
| Frequency of the High Efficiency Video Codec block in MHz; individual override of the `gpu_freq` setting. Raspberry Pi 4 only.
@@ -41,28 +42,32 @@ WARNING: Setting any overclocking parameters to values other than those used by
| Frequency of the SDRAM in MHz. SDRAM overclocking on Raspberry Pi 4 or newer is not supported.
| over_voltage
-| CPU/GPU core upper voltage limit. The value should be in the range [-16,8] which equates to the range [0.95V,1.55V] ([0.8,1.4V] on Raspberry Pi 1) with 0.025V steps. In other words, specifying -16 will give 0.95V (0.8V on Raspberry Pi 1) as the maximum CPU/GPU core voltage, and specifying 8 will allow up to 1.55V (1.4V on Raspberry Pi 1). For defaults see table below. Values above 6 are only allowed when `force_turbo=1` is specified: this sets the warranty bit if `over_voltage_*` > `0` is also set.
+| CPU/GPU core upper voltage limit. The value should be in the range [-16,8] which equates to the range [0.95V,1.55V] ([0.8,1.4V] on Raspberry Pi 1) with 0.025V steps. In other words, specifying -16 will give 0.95V (0.8V on Raspberry Pi 1) as the maximum CPU/GPU core voltage, and specifying 8 will allow up to 1.55V (1.4V on Raspberry Pi 1). For defaults, see the table below. Values above 6 are only allowed when `force_turbo=1` is specified: this sets the warranty bit if `over_voltage_*` > `0` is also set.
| over_voltage_sdram
| Sets `over_voltage_sdram_c`, `over_voltage_sdram_i`, and `over_voltage_sdram_p` together.
| over_voltage_sdram_c
-| SDRAM controller voltage adjustment. [-16,8] equates to [0.8V,1.4V] with 0.025V steps. Not supported on Raspberry Pi 4 or newer.
+| SDRAM controller voltage adjustment. [-16,8] equates to [0.8V,1.4V] with 0.025V steps. Not supported on Raspberry Pi 4 or later devices.
| over_voltage_sdram_i
-| SDRAM I/O voltage adjustment. [-16,8] equates to [0.8V,1.4V] with 0.025V steps. Not supported on Raspberry Pi 4 or newer.
+| SDRAM I/O voltage adjustment. [-16,8] equates to [0.8V,1.4V] with 0.025V steps. Not supported on Raspberry Pi 4 or later devices.
| over_voltage_sdram_p
-| SDRAM phy voltage adjustment. [-16,8] equates to [0.8V,1.4V] with 0.025V steps. Not supported on Raspberry Pi 4 or newer.
+| SDRAM phy voltage adjustment. [-16,8] equates to [0.8V,1.4V] with 0.025V steps. Not supported on Raspberry Pi 4 or later devices.
| force_turbo
| Forces turbo mode frequencies even when the ARM cores are not busy. Enabling this may set the warranty bit if `over_voltage_*` is also set.
| initial_turbo
-| Enables https://forums.raspberrypi.com/viewtopic.php?f=29&t=6201&start=425#p180099[turbo mode from boot] for the given value in seconds, or until cpufreq sets a frequency. The maximum value is `60`.
+| Enables https://forums.raspberrypi.com/viewtopic.php?f=29&t=6201&start=425#p180099[turbo mode from boot] for the given value in seconds, or until `cpufreq` sets a frequency. The maximum value is `60`. The November 2024 firmware update made the following changes:
+
+* changed the default from `0` to `60` to reduce boot time
+* switched the kernel CPU performance governor from `powersave` to `ondemand`
+
| arm_freq_min
-| Minimum value of `arm_freq` used for dynamic frequency clocking. Note that reducing this value below the default does not result in any significant power savings and is not currently supported.
+| Minimum value of `arm_freq` used for dynamic frequency clocking. Note that reducing this value below the default does not result in any significant power savings, and is not currently supported.
| core_freq_min
| Minimum value of `core_freq` used for dynamic frequency clocking.
@@ -96,11 +101,15 @@ WARNING: Setting any overclocking parameters to values other than those used by
| temp_soft_limit
| *3A+/3B+ only*. CPU speed throttle control. This sets the temperature at which the CPU clock speed throttling system activates. At this temperature, the clock speed is reduced from 1400MHz to 1200MHz. Defaults to `60`, can be raised to a maximum of `70`, but this may cause instability.
+
+| core_freq_fixed
+| Set to `1` to disable active scaling of the core clock frequency; this ensures that any peripherals that use the core clock maintain a consistent speed. The fixed clock speed is the higher/turbo frequency for the platform in use. Use this in preference to setting specific core clock frequencies, as it makes config files portable between platforms.
+
|===
This table gives the default values for the options on various Raspberry Pi models, all frequencies are stated in MHz.
-[cols=",^,^,^,^,^,^,^,^,^,^"]
+[cols="m,^,^,^,^,^,^,^,^,^,^"]
|===
| Option | Pi 0/W | Pi1 | Pi2 | Pi3 | Pi3A+/Pi3B+ | CM4 & Pi4B <= R1.3 | Pi4B R1.4 | Pi 400 | Pi Zero 2 W | Pi 5
@@ -111,7 +120,7 @@ This table gives the default values for the options on various Raspberry Pi mode
| 1200
| 1400
| 1500
-| 1500 or 1800 if arm_boost=1
+| 1500, or 1800 if `arm_boost=1`
| 1800
| 1000
| 2400
@@ -273,9 +282,9 @@ This table gives the default values for the options on various Raspberry Pi mode
| 4267
|===
-This table gives defaults for options that are the same across all models.
+This table gives defaults for options which are the same across all models.
-[cols=",^"]
+[cols="m,^"]
|===
| Option | Default
@@ -307,7 +316,7 @@ This table gives defaults for options that are the same across all models.
The firmware uses Adaptive Voltage Scaling (AVS) to determine the optimum CPU/GPU core voltage in the range defined by `over_voltage` and `over_voltage_min`.
[discrete]
-===== Specific to Raspberry Pi 4, Raspberry Pi 400 and CM4
+==== Specific to Raspberry Pi 4, Raspberry Pi 400 and CM4
The minimum core frequency when the system is idle must be fast enough to support the highest pixel clock (ignoring blanking) of the display(s). Consequently, `core_freq` will be boosted above 500 MHz if the display mode is 4Kp60.
@@ -317,54 +326,72 @@ The minimum core frequency when the system is idle must be fast enough to suppor
| Default
| 500
-| hdmi_enable_4kp60
+| `hdmi_enable_4kp60`
| 550
|===
-NOTE: Raspberry Pi 5 supports dual-4Kp60 displays with the idle-clock settings so `hdmi_enable_4kp60` is redundant.
+NOTE: There is no need to use `hdmi_enable_4kp60` on Flagship models since Raspberry Pi 5, Compute Modules since CM5, and Keyboard models since Pi 500; they support dual-4Kp60 displays by default.
* Overclocking requires the latest firmware release.
* The latest firmware automatically scales up the voltage if the system is overclocked. Manually setting `over_voltage` disables automatic voltage scaling for overclocking.
-* It is recommended when overclocking to use the individual frequency settings (`isp_freq`, `v3d_freq` etc) rather than `gpu_freq` because the maximum stable frequency will be different for ISP, V3D, HEVC etc.
-* The SDRAM frequency is not configurable on Raspberry Pi 4 or newer.
+* It is recommended when overclocking to use the individual frequency settings (`isp_freq`, `v3d_freq` etc) rather than `gpu_freq`, because the maximum stable frequency will be different for ISP, V3D, HEVC etc.
+* The SDRAM frequency is not configurable on Raspberry Pi 4 or later devices.
==== `force_turbo`
-By default (`force_turbo=0`) the "On Demand" CPU frequency driver will raise clocks to their maximum frequencies when the ARM cores are busy and will lower them to the minimum frequencies when the ARM cores are idle.
+By default (`force_turbo=0`) the on-demand CPU frequency driver will raise clocks to their maximum frequencies when the ARM cores are busy, and will lower them to the minimum frequencies when the ARM cores are idle.
`force_turbo=1` overrides this behaviour and forces maximum frequencies even when the ARM cores are not busy.
-=== Clocks Relationship
+=== Clocks relationship
==== Raspberry Pi 4
-The GPU core, CPU, SDRAM and GPU each have their own PLLs and https://forums.raspberrypi.com/viewtopic.php?f=29&t=6201&start=275#p168042[can have unrelated frequencies]. The h264, v3d and ISP blocks share a PLL.
+
+The GPU core, CPU and SDRAM each have their own PLLs and can have unrelated frequencies. The h264, v3d and ISP blocks share a PLL.
To view the Raspberry Pi's current frequency in KHz, type: `cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq`. Divide the result by 1000 to find the value in MHz. Note that this frequency is the kernel _requested_ frequency, and it is possible that any throttling (for example at high temperatures) may mean the CPU is actually running more slowly than reported. An instantaneous measurement of the actual ARM CPU frequency can be retrieved using the vcgencmd `vcgencmd measure_clock arm`. This is displayed in Hertz.
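+
+For convenience, the division can be done in the same command (a sketch using `awk`):
+
+[source,console]
+----
+$ awk '{print $1/1000 " MHz"}' /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq
+----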
-=== Monitoring Core Temperature
+=== Monitoring core temperature
[.whitepaper, title="Cooling a Raspberry Pi device", subtitle="", link=https://pip.raspberrypi.com/categories/685-whitepapers-app-notes/documents/RP-003608-WP/Cooling-a-Raspberry-Pi-device.pdf]
****
-This whitepaper goes through the reasons why your Raspberry Pi may get hot and why you might want to cool it back down, and gives various options on achieving that cooling process.
+This white paper goes through the reasons why your Raspberry Pi may get hot, why you might want to cool it back down, and the various options for cooling it.
****
-To view the Raspberry Pi's temperature, type `cat /sys/class/thermal/thermal_zone0/temp`. Divide the result by 1000 to find the value in degrees Celsius. Alternatively, there is a vcgencmd, `vcgencmd measure_temp` that interrogates the GPU directly for its temperature.
+To view the temperature of a Raspberry Pi, run the following command:
-Whilst hitting the temperature limit is not harmful to the SoC, it will cause CPU throttling. A heatsink can help to control the core temperature and therefore performance. This is especially useful if the Raspberry Pi is running inside a case. Airflow over the heatsink will make cooling more efficient.
+[source,console]
+----
+$ cat /sys/class/thermal/thermal_zone0/temp
+----
-When the core temperature is between 80'C and 85'C, the ARM cores will be throttled back. If the temperature exceeds 85'C, the ARM cores and the GPU will be throttled back.
+Divide the result by 1000 to find the value in degrees Celsius. Alternatively, you can use `vcgencmd measure_temp` to report the GPU temperature.
-For the Raspberry Pi 3 Model B+, the PCB technology has been changed to provide better heat dissipation and increased thermal mass. In addition, a soft temperature limit has been introduced, with the goal of maximising the time for which a device can "sprint" before reaching the hard limit at 85°C. When the soft limit is reached, the clock speed is reduced from 1.4GHz to 1.2GHz, and the operating voltage is reduced slightly. This reduces the rate of temperature increase: we trade a short period at 1.4GHz for a longer period at 1.2GHz. By default, the soft limit is 60°C, and this can be changed via the `temp_soft_limit` setting in config.txt.
+Hitting the temperature limit is not harmful to the SoC, but it will cause the CPU to throttle. A heat sink can help to control the core temperature, and therefore performance. This is especially useful if the Raspberry Pi is running inside a case. Airflow over the heat sink will make cooling more efficient.
-=== Monitoring Voltage
+When the core temperature is between 80°C and 85°C, the ARM cores will be throttled back. If the temperature exceeds 85°C, the ARM cores and the GPU will be throttled back.
+
+For the Raspberry Pi 3 Model B+, the PCB technology has been changed to provide better heat dissipation and increased thermal mass. In addition, a soft temperature limit has been introduced, with the goal of maximising the time for which a device can "sprint" before reaching the hard limit at 85°C. When the soft limit is reached, the clock speed is reduced from 1.4GHz to 1.2GHz, and the operating voltage is reduced slightly. This reduces the rate of temperature increase: we trade a short period at 1.4GHz for a longer period at 1.2GHz. By default, the soft limit is 60°C. This can be changed via the `temp_soft_limit` setting in `config.txt`.
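+
+For example, to raise the soft limit from the default to its maximum:
+
+[source,ini]
+----
+# 3A+/3B+ only - raise the soft temperature limit to 70°C (default 60°C)
+temp_soft_limit=70
+----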
+
+=== Monitoring voltage
It is essential to keep the supply voltage above 4.8V for reliable performance. Note that the voltage from some USB chargers/power supplies can fall as low as 4.2V. This is because they are usually designed to charge a 3.7V LiPo battery, not to supply 5V to a computer.
-To monitor the Raspberry Pi's PSU voltage, you will need to use a multimeter to measure between the VCC and GND pins on the GPIO. More information is available in xref:raspberry-pi.adoc#power-supply[power].
+To monitor the Raspberry Pi's PSU voltage, you will need to use a multimeter to measure between the VCC and GND pins on the GPIO. More information is available in the xref:raspberry-pi.adoc#power-supply[power] section of the documentation.
+
+If the voltage drops below 4.63V (±5%), the ARM cores and the GPU will be throttled back, and a message indicating the low voltage state will be added to the kernel log.
+
+The Raspberry Pi 5 PMIC has built-in ADCs that allow the supply voltage to be measured. To view the current supply voltage, run the following command:
+
+[source,console]
+----
+$ vcgencmd pmic_read_adc EXT5V_V
+----
-If the voltage drops below 4.63V (+-5%), the ARM cores and the GPU will be throttled back, and a message indicating the low voltage state will be added to the kernel log.
+=== Overclocking problems
-The Raspberry Pi 5 `PMIC` has built in ADCs that allows the supply voltage to be measured. To do this run `vcgencmd pmic_read_adc EXT5V_V`
+Most overclocking issues show up immediately as a failure to boot. If your device fails to boot after an overclocking configuration change, use the following steps to return it to a bootable state:
-=== Overclocking Problems
+. Remove any clock frequency overrides from `config.txt`.
+. Increase the core voltage using `over_voltage_delta`.
+. Re-apply overclocking parameters, taking care to avoid the previous known-bad overclocking parameters.
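+
+For example, a minimal recovery `config.txt` might contain only a voltage boost while you re-test (the value here is illustrative; `over_voltage_delta` is specified in microvolts):
+
+[source,ini]
+----
+# Recovery sketch: no frequency overrides, modest core voltage boost (0.05V)
+over_voltage_delta=50000
+----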
-Most overclocking issues show up immediately with a failure to boot. If this occurs, hold down the `shift` key during the next boot. This will temporarily disable all overclocking, allowing you to boot successfully and then edit your settings.
diff --git a/documentation/asciidoc/computers/config_txt/pi4-hdmi.adoc b/documentation/asciidoc/computers/config_txt/pi4-hdmi.adoc
deleted file mode 100644
index bc908fa31c..0000000000
--- a/documentation/asciidoc/computers/config_txt/pi4-hdmi.adoc
+++ /dev/null
@@ -1,12 +0,0 @@
-== Raspberry Pi 4 HDMI Pipeline
-
-In order to support dual displays, and modes up to 4k60, the Raspberry Pi 4 has updated the HDMI composition pipeline hardware in a number of ways. One of the major changes is that it generates 2 output pixels for every clock cycle.
-
-Every HDMI mode has a list of timings that control all the parameters around sync pulse durations. These are typically defined via a pixel clock, and then a number of active pixels, a front porch, sync pulse, and back porch for each of the horizontal and vertical directions.
-
-Running everything at 2 pixels per clock means that the Raspberry Pi 4 cannot support a timing where _any_ of the horizontal timings are not divisible by 2. The firmware and Linux kernel will filter out any mode that does not fulfill this criteria.
-
-There is only one mode in the CEA and DMT standards that falls into this category - DMT mode 81, which is 1366x768 @ 60Hz. This mode has odd values for the horizontal sync and back porch timings. It's also an unusual mode for having a width that isn't divisible by 8.
-
-If your monitor has this resolution, then the Raspberry Pi 4 will automatically drop down to the next mode that is advertised by the monitor; this is typically 1280x720.
-
diff --git a/documentation/asciidoc/computers/config_txt/video.adoc b/documentation/asciidoc/computers/config_txt/video.adoc
index f66e6cebbd..eac9fba9fc 100644
--- a/documentation/asciidoc/computers/config_txt/video.adoc
+++ b/documentation/asciidoc/computers/config_txt/video.adoc
@@ -1,14 +1,28 @@
-== Video Options
+== Video options
-=== HDMI Mode
+=== HDMI mode
-In order to support dual 4k displays, the Raspberry Pi 4 has xref:config_txt.adoc#raspberry-pi-4-hdmi-pipeline[updated video hardware], which imposes minor restrictions on the modes supported.
+To control HDMI settings, use the xref:configuration.adoc#set-resolution-and-rotation[Screen Configuration utility] or xref:configuration.adoc#set-the-kms-display-mode[KMS video settings] in `cmdline.txt`.
-The HDMI settings used to be configured by firmware via settings in `config.txt`; this configuration is now instead done by KMS via xref:configuration.adoc#hdmi-configuration[settings] in `cmdline.txt`.
+==== HDMI pipeline for 4-series devices
-=== Composite Video Mode
+In order to support dual displays and modes up to 4Kp60, Raspberry Pi 4, Compute Module 4, and Pi 400 generate 2 output pixels for every clock cycle.
-The table below describes where composite video output can be found on each model of Raspberry Pi computer:
+Every HDMI mode has a list of timings that control all the parameters around sync pulse durations. These are typically defined via a pixel clock, and then a number of active pixels, a front porch, sync pulse, and back porch for each of the horizontal and vertical directions.
+
+Running everything at 2 pixels per clock means that the 4-series devices cannot support a timing where _any_ of the horizontal timings is not divisible by 2. The firmware and Linux kernel filter out any mode that does not fulfil this criterion.
+
+There is only one incompatible mode in the CEA and DMT standards: DMT mode 81, 1366x768 @ 60Hz. This mode has odd-numbered values for the horizontal sync and back porch timings, and a width that is indivisible by 8.
+
+If your monitor has this resolution, 4-series devices automatically drop down to the next mode advertised by the monitor; typically 1280x720.
+
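The filtering rule above can be sketched as a quick check. This is an illustrative helper, not the actual firmware or kernel code; the DMT mode 81 horizontal timing values used below (front porch 70, sync 143, back porch 213) are taken from the DMT standard.

```python
def horizontal_timings_supported(active, front_porch, sync, back_porch):
    """Return True if every horizontal timing value is divisible by 2,
    mirroring the filter described above for 2-pixels-per-clock output."""
    return all(value % 2 == 0 for value in (active, front_porch, sync, back_porch))

# DMT mode 81 (1366x768 @ 60Hz): odd sync and back porch values, so rejected.
print(horizontal_timings_supported(1366, 70, 143, 213))   # False

# CEA mode 4 (1280x720 @ 60Hz): all timings even, so accepted.
print(horizontal_timings_supported(1280, 110, 40, 220))   # True
```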
+==== HDMI pipeline for 5-series devices
+
+Flagship models since Raspberry Pi 5, Compute Module models since CM5, and Keyboard models since Pi 500 also work at 2 output pixels per clock cycle. These models include special handling for odd timings and can output these modes directly.
+
+=== Composite video mode
+
+The following table describes where to find composite video output on each model of Raspberry Pi computer:
|===
| model | composite output
@@ -19,18 +33,21 @@ The table below describes where composite video output can be found on each mode
| Raspberry Pi Zero
| Unpopulated `TV` header
-| Raspberry Pi Zero 2 W
+| Raspberry Pi Zero 2 W
| Test pads on underside of board
+| Raspberry Pi 5
+| J7 pad next to HDMI socket
+
| All other models
| 3.5mm AV jack
|===
-NOTE: Composite video output is not available on the Raspberry Pi 400.
+NOTE: Composite video output is not available on Keyboard models.
==== `enable_tvout`
-Set to `1` to enable composite video output, or `0` to disable. On Raspberry Pi 4, composite output is only available if you set this to `1`, which also disables HDMI output. Composite output is not available on the Raspberry Pi 400.
+Set to `1` to enable composite video output and `0` to disable. On Flagship models since Raspberry Pi 4, Compute Modules since CM4, and Zero models, composite output is only available if you set this to `1`, which also disables HDMI output. Composite output is not available on Keyboard models.
[%header,cols="1,1"]
@@ -38,47 +55,57 @@ Set to `1` to enable composite video output, or `0` to disable. On Raspberry Pi
|Model
|Default
-|Pi 4 and 400
+|Flagship models since Raspberry Pi 4B, Compute Modules since CM4, Keyboard models
|0
|All other models
|1
|===
-On all models except Raspberry Pi 4, HDMI output needs to be disabled in order for composite output to be enabled. HDMI output is disabled when no HDMI display is connected / detected. Set `enable_tvout=0` to prevent composite being enabled when HDMI is disabled.
+On supported models, you must disable HDMI output to enable composite output. HDMI output is disabled when no HDMI display is detected. Set `enable_tvout=0` to prevent composite being enabled when HDMI is disabled.
-To enable composite output (on all models of Raspberry Pi) you also need to append `,composite` to the end of the `dtoverlay=vc4-kms-v3d` line in xref:../computers/config_txt.adoc#what-is-config-txt[`/boot/firmware/config.txt`]:
+To enable composite output, append `,composite` to the end of the `dtoverlay=vc4-kms-v3d` line in xref:../computers/config_txt.adoc#what-is-config-txt[`/boot/firmware/config.txt`]:
+[source,ini]
----
dtoverlay=vc4-kms-v3d,composite
----
-By default this will output composite NTSC video. To choose a different mode, you need to append
+By default, this outputs composite NTSC video. To choose a different mode, instead append the following to the single line in `/boot/firmware/cmdline.txt`:
+[source,ini]
----
-vc4.tv_norm=video_mode
+vc4.tv_norm=<video_mode>
----
-to the single line in `/boot/firmware/cmdline.txt`, where `video_mode` is one of `NTSC`, `NTSC-J`, `NTSC-443`, `PAL`, `PAL-M`, `PAL-N`, `PAL60` or `SECAM`.
+Replace the `<video_mode>` placeholder with one of the following values:
-=== LCD Displays and Touchscreens
+* `NTSC`
+* `NTSC-J`
+* `NTSC-443`
+* `PAL`
+* `PAL-M`
+* `PAL-N`
+* `PAL60`
+* `SECAM`
+
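For example, a complete `cmdline.txt` line with PAL output might look like the following. Everything before `vc4.tv_norm=PAL` is illustrative and varies between installs; keep your file's existing parameters and only append the new setting:

```ini
console=serial0,115200 console=tty1 root=PARTUUID=<partuuid> rootfstype=ext4 fsck.repair=yes rootwait vc4.tv_norm=PAL
```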
+=== LCD displays and touchscreens
==== `ignore_lcd`
-By default the Raspberry Pi Touch Display is used when it is detected on the I2C bus. `ignore_lcd=1` will skip this detection phase, and therefore the LCD display will not be used.
+By default, the Raspberry Pi Touch Display is used when detected on the I2C bus. `ignore_lcd=1` skips this detection phase, preventing the LCD display from being used.
==== `disable_touchscreen`
-Enable/disable the touchscreen.
+Enables and disables the touchscreen.
-`disable_touchscreen=1` will disable the touchscreen on the official Raspberry Pi Touch Display.
+`disable_touchscreen=1` disables the touchscreen component of the official Raspberry Pi Touch Display.
-=== Generic Display Options
+=== Generic display options
==== `disable_fw_kms_setup`
-By default, the firmware parses the EDID of any HDMI attached display, picks an appropriate video mode, then passes the resolution and frame rate of the mode, along with overscan parameters, to the Linux kernel via settings on the kernel command line. In rare circumstances, this can have the effect of choosing a mode that is not in the EDID, and may be incompatible with the device. You can use `disable_fw_kms_setup=1` to disable the passing of these parameters and avoid this problem. The Linux video mode system (KMS) will then parse the EDID itself and pick an appropriate mode.
-
-NOTE: On Raspberry Pi 5 this parameter defaults to `1`
+By default, the firmware parses the EDID of any HDMI-attached display, picks an appropriate video mode, then passes the resolution and frame rate of the mode (and overscan parameters) to the Linux kernel via settings on the kernel command line. In rare circumstances, the firmware can choose a mode not in the EDID that may be incompatible with the device. Use `disable_fw_kms_setup=1` to disable passing video mode parameters, which can avoid this problem. The Linux video mode system (KMS) instead parses the EDID itself and picks an appropriate mode.
+NOTE: On Raspberry Pi 5, this parameter defaults to `1`.
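As a minimal sketch, a `config.txt` fragment applying this setting looks like the following:

```ini
# Let KMS parse the EDID and choose the video mode itself
disable_fw_kms_setup=1
```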
diff --git a/documentation/asciidoc/computers/config_txt/what_is_config_txt.adoc b/documentation/asciidoc/computers/config_txt/what_is_config_txt.adoc
index 5e4424f39a..e8fc1bf108 100644
--- a/documentation/asciidoc/computers/config_txt/what_is_config_txt.adoc
+++ b/documentation/asciidoc/computers/config_txt/what_is_config_txt.adoc
@@ -1,25 +1,28 @@
== What is `config.txt`?
-NOTE: Prior to _Bookworm_, Raspberry Pi OS stored the boot partition at `/boot/`. Since _Bookworm_, the boot partition is located at `/boot/firmware/`.
+Instead of the https://en.wikipedia.org/wiki/BIOS[BIOS] found on a conventional PC, Raspberry Pi devices use a configuration file called `config.txt`. The GPU reads `config.txt` before the Arm CPU and Linux initialise. Raspberry Pi OS looks for this file in the *boot partition*, located at `/boot/firmware/`.
-The Raspberry Pi uses a configuration file instead of the https://en.wikipedia.org/wiki/BIOS[BIOS] you would expect to find on a conventional PC. The system configuration parameters, which would traditionally be edited and stored using a BIOS, are stored instead in an optional text file named `config.txt`. This is read by the GPU before the ARM CPU and Linux are initialised. It must therefore be located on the first (boot) partition of your SD card, alongside `bootcode.bin` and `start.elf`. This file is normally accessible as `/boot/firmware/config.txt` from Linux, and must be edited as the `root` user. From Windows or OS X it is visible as a file in the only accessible part of the card. If you need to apply some of the config settings below, but you don't have a `config.txt` on your boot partition yet, simply create it as a new text file.
+NOTE: Prior to Raspberry Pi OS _Bookworm_, Raspberry Pi OS stored the boot partition at `/boot/`.
-Any changes will only take effect after you have rebooted your Raspberry Pi. After Linux has booted, you can view the current active settings using the following commands:
+You can edit `config.txt` directly from your Raspberry Pi OS installation. You can also remove the storage device and edit files in the boot partition, including `config.txt`, from a separate computer.
-* `vcgencmd get_config `: this displays a specific config value, e.g. `vcgencmd get_config arm_freq`.
-* `vcgencmd get_config int`: this lists all the integer config options that are set (non-zero).
-* `vcgencmd get_config str`: this lists all the string config options that are set (non-null).
+Changes to `config.txt` only take effect after a reboot. You can view the current active settings using the following commands:
-NOTE: There are some config settings that cannot be retrieved using `vcgencmd`.
+`vcgencmd get_config `:: displays a specific config value, e.g. `vcgencmd get_config arm_freq`
+`vcgencmd get_config int`:: lists all non-zero integer config options
+`vcgencmd get_config str`:: lists all non-null string config options
-=== File Format
+NOTE: Not all config settings can be retrieved using `vcgencmd`.
-The `config.txt` file is read by the early-stage boot firmware, so it has a very simple file format. The format is a single `property=value` statement on each line, where `value` is either an integer or a string. Comments may be added, or existing config values may be commented out and disabled, by starting a line with the `#` character.
+=== File format
-There is a 98-character line length limit for entries - any characters past this limit will be ignored.
+The `config.txt` file is read by the early-stage boot firmware, so it uses a very simple file format: **a single `property=value` statement on each line, where `value` is either an integer or a string**. Comments may be added, or existing config values may be commented out and disabled, by starting a line with the `#` character.
+
+There is a 98-character line length limit for entries. Raspberry Pi OS ignores any characters past this limit.
Here is an example file:
+[source,ini]
----
# Enable audio (loads snd_bcm2835)
dtparam=audio=on
@@ -34,7 +37,7 @@ display_auto_detect=1
dtoverlay=vc4-kms-v3d
----
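The format rules above (one `property=value` statement per line, `#` comments, the 98-character limit) can be modelled with a short illustrative parser. This is a sketch for clarity, not the firmware's actual parsing code:

```python
def parse_config_txt(text):
    """Parse config.txt-style text into (property, value) pairs.

    Mirrors the rules described above: one property=value per line,
    lines starting with '#' are comments, and characters beyond the
    98-character limit are ignored.
    """
    entries = []
    for line in text.splitlines():
        line = line[:98].strip()           # enforce the 98-character limit
        if not line or line.startswith("#"):
            continue                       # skip blank lines and comments
        if "=" in line:
            prop, _, value = line.partition("=")
            entries.append((prop.strip(), value.strip()))
    return entries

example = """\
# Enable audio (loads snd_bcm2835)
dtparam=audio=on
dtoverlay=vc4-kms-v3d
"""
print(parse_config_txt(example))
```

Note that only the first `=` separates property from value, so `dtparam=audio=on` parses as property `dtparam` with value `audio=on`.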
-=== Advanced Features
+=== Advanced features
==== `include`
@@ -45,9 +48,9 @@ For example, adding the line `include extraconfig.txt` to `config.txt` will incl
[NOTE]
====
-*Include directives are not supported by the bootcode.bin or EEPROM bootloaders*.
+The `bootcode.bin` or EEPROM bootloaders do not support the `include` directive.
-Settings which are handled by the bootloader and so which will only take effect if they are specified in `config.txt` (rather than any additional included file) are:
+Settings which are handled by the bootloader will only take effect if they are specified in `config.txt` (rather than any additional included file):
* `bootcode_delay`,
* `gpu_mem`, `gpu_mem_256`, `gpu_mem_512`, `gpu_mem_1024`,
@@ -58,6 +61,6 @@ Settings which are handled by the bootloader and so which will only take effect
====
-==== Conditional Filtering
+==== Conditional filtering
Conditional filters are covered in the xref:config_txt.adoc#conditional-filters[conditionals section].
diff --git a/documentation/asciidoc/computers/configuration.adoc b/documentation/asciidoc/computers/configuration.adoc
index da7ce8e296..17ffa15f5d 100644
--- a/documentation/asciidoc/computers/configuration.adoc
+++ b/documentation/asciidoc/computers/configuration.adoc
@@ -1,37 +1,46 @@
include::configuration/raspi-config.adoc[]
+include::configuration/display-resolution.adoc[]
+
+include::configuration/audio-config.adoc[]
+
include::configuration/configuring-networking.adoc[]
+include::configuration/screensaver.adoc[]
+
+include::configuration/users.adoc[]
+
+include::configuration/external-storage.adoc[]
+
+include::configuration/kernel-command-line-config.adoc[]
+
+include::configuration/localisation.adoc[]
+
+include::configuration/securing-the-raspberry-pi.adoc[]
+
include::configuration/headless.adoc[]
include::configuration/host-wireless-network.adoc[]
include::configuration/use-a-proxy.adoc[]
-include::configuration/hdmi-config.adoc[]
-
-include::configuration/display-resolution.adoc[]
+include::configuration/boot_folder.adoc[]
-include::configuration/audio-config.adoc[]
+include::configuration/led_blink_warnings.adoc[]
-include::configuration/external-storage.adoc[]
+include::configuration/uart.adoc[]
-include::configuration/localisation.adoc[]
+include::configuration/device-tree.adoc[]
include::configuration/pin-configuration.adoc[]
-include::configuration/device-tree.adoc[]
-include::configuration/kernel-command-line-config.adoc[]
-include::configuration/uart.adoc[]
-include::configuration/led_blink_warnings.adoc[]
-include::configuration/securing-the-raspberry-pi.adoc[]
-include::configuration/screensaver.adoc[]
-include::configuration/boot_folder.adoc[]
+
+
diff --git a/documentation/asciidoc/computers/configuration/audio-config.adoc b/documentation/asciidoc/computers/configuration/audio-config.adoc
index 141fc2ef51..e12c032b46 100644
--- a/documentation/asciidoc/computers/configuration/audio-config.adoc
+++ b/documentation/asciidoc/computers/configuration/audio-config.adoc
@@ -1,37 +1,43 @@
-== Audio Configuration
+== Audio
-The Raspberry Pi has up to three audio output modes: HDMI 1 and 2, if present, and a headphone jack. You can switch between these modes at any time.
+Raspberry Pi OS has multiple audio output modes: HDMI 1, the headphone jack (if your device has one), and USB audio.
-NOTE: Audio output over HDMI will provide better sound quality than audio output over the headphone jack.
+By default, Raspberry Pi OS outputs audio to HDMI 1. If no HDMI output is available, Raspberry Pi OS outputs audio to the headphone jack or a connected USB audio device.
-If your HDMI monitor or TV has built-in speakers, the audio can be played over the HDMI cable, but you can switch it to a set of headphones or other speakers plugged into the headphone jack. If your display claims to have speakers, sound is output via HDMI by default; if not, it is output via the headphone jack. This may not be the desired output setup, or the auto-detection is inaccurate, in which case you can manually switch the output.
+=== Change audio output
-=== Changing the Audio Output
+Use the following methods to configure audio output in Raspberry Pi OS:
-There are two ways of setting the audio output; using the desktop volume control, or using the `raspi-config` command line tool.
-
-==== Using the Desktop
-
-Right-clicking the volume icon on the desktop taskbar brings up the audio output selector; this allows you to select between the internal audio outputs. It also allows you to select any external audio devices, such as USB sound cards and Bluetooth audio devices. A tick is shown against the currently selected audio output device -- simply left-click the desired output in the pop-up menu to change this. The volume control and mute operate on the currently selected device.
-
-===== Pro Audio profile
-
-You may see a device profile named "Pro Audio" when viewing an audio device on the system tray. This profile exposes the maximum number of channels across every audio device allowing you greater control over the routing of signals. Unless you have a specific use case in mind for this type of control, we recommend using a different device profile.
+[[pro-audio-profile]]
+[tabs]
+======
+Desktop volume control::
++
+Right-click the volume icon on the system tray to open the **audio output selector**. This interface lets you choose an audio output device. Click an audio output device to switch audio output to that device.
++
+You may see a device profile named **Pro Audio** when viewing an audio device in the audio output selector. This profile exposes the maximum number of channels across every audio device, allowing you greater control over the routing of signals. Unless you require fine-tuned control over audio output, use a different device profile.
++
For more information about the Pro Audio profile, visit https://gitlab.freedesktop.org/pipewire/pipewire/-/wikis/FAQ#what-is-the-pro-audio-profile[PipeWire's FAQ].
-==== Using raspi-config
-
-Open up xref:configuration.adoc#raspi-config[raspi-config] by entering the following into the command line:
-
+`raspi-config`::
++
+To change your audio output using xref:configuration.adoc#raspi-config[`raspi-config`], run the following command:
++
+[source,console]
----
-sudo raspi-config
+$ sudo raspi-config
----
++
+You should see a configuration screen. Complete the following steps to change your audio output:
++
+. Select `System options` and press `Enter`.
++
+. Select the `Audio` option and press `Enter`.
++
+. Select your required mode and press `Enter` to select that mode.
++
+. Press the right arrow key to exit the options list. Select `Finish` to exit the configuration tool.
+======
-This will open the configuration screen:
-
-Select `System Options` (currently option 1, but yours may be different) and press `Enter`.
-
-Now select the `Audio` option (currently option S2, but yours may be different) and press `Enter`.
-Select your required mode, press `Enter` and press the right arrow key to exit the options list, then select `Finish` to exit the configuration tool.
diff --git a/documentation/asciidoc/computers/configuration/boot_folder.adoc b/documentation/asciidoc/computers/configuration/boot_folder.adoc
index 407ee542c0..309b9c3f63 100644
--- a/documentation/asciidoc/computers/configuration/boot_folder.adoc
+++ b/documentation/asciidoc/computers/configuration/boot_folder.adoc
@@ -1,91 +1,101 @@
-== The `boot` Folder
+== `boot` folder contents
-In a basic xref:os.adoc[Raspberry Pi OS] install, the boot files are stored on the first partition of the SD card, which is formatted with the FAT file system. This means that it can be read on Windows, macOS, and Linux devices.
+Raspberry Pi OS stores boot files on the first partition of the SD card, formatted with the FAT file system.
-When the Raspberry Pi is powered on, it loads various files from the boot partition/folder in order to start up the various processors, then it boots the Linux kernel.
+On startup, each Raspberry Pi loads various files from the boot partition in order to start up the various processors before the Linux kernel boots.
-Once Linux has booted, the boot partition is mounted as `/boot/firmware/`.
+On boot, Linux mounts the boot partition as `/boot/firmware/`.
NOTE: Prior to _Bookworm_, Raspberry Pi OS stored the boot partition at `/boot/`. Since _Bookworm_, the boot partition is located at `/boot/firmware/`.
-=== Boot Folder Contents
+=== `bootcode.bin`
-==== bootcode.bin
+The bootloader, loaded by the SoC on boot. It performs some very basic setup, and then loads one of the `start*.elf` files.
-This is the bootloader, which is loaded by the SoC on boot; it does some very basic setup, and then loads one of the `start*.elf` files. `bootcode.bin` is not used on the Raspberry Pi 4 or Raspberry Pi 5, because it has been replaced by boot code in the xref:raspberry-pi.adoc#raspberry-pi-boot-eeprom[onboard EEPROM].
+The Raspberry Pi 4 and 5 do not use `bootcode.bin`. It has been replaced by boot code in the xref:raspberry-pi.adoc#raspberry-pi-boot-eeprom[onboard EEPROM].
-==== start.elf, start_x.elf, start_db.elf, start_cd.elf, start4.elf, start4x.elf, start4db.elf, start4cd.elf
+=== `start*.elf`
-These are binary blobs (firmware) that are loaded on to the VideoCore GPU in the SoC, which then take over the boot process.
-`start.elf` is the basic firmware, `start_x.elf` also includes additional codecs, `start_db.elf` can be used for debugging purposes and `start_cd.elf` is a cut-down version of the firmware. `start_cd.elf` removes support for hardware blocks such as codecs and 3D as well as having initial framebuffer limitations. The cut-down firmware is automatically used when `gpu_mem=16` is specified in `config.txt`.
+Binary firmware blobs loaded onto the VideoCore GPU in the SoC, which then take over the boot process.
+
+`start.elf`:: the basic firmware.
+`start_x.elf`:: includes additional codecs.
+`start_db.elf`:: used for debugging.
+`start_cd.elf`:: a cut-down version of the firmware that removes support for hardware blocks such as codecs and 3D as well as debug logging support; it also imposes initial frame buffer limitations. The cut-down firmware is automatically used when `gpu_mem=16` is specified in `config.txt`.
`start4.elf`, `start4x.elf`, `start4db.elf` and `start4cd.elf` are equivalent firmware files specific to the Raspberry Pi 4-series (Model 4B, Pi 400, Compute Module 4 and Compute Module 4S).
-More information on how to use these files can be found in xref:config_txt.adoc#boot-options[the `config.txt` section].
+For more information on how to use these files, see the xref:config_txt.adoc#boot-options[`config.txt` documentation].
+
+The Raspberry Pi 5 does not use `.elf` files. The firmware is self-contained within the bootloader EEPROM.
+
+=== `fixup*.dat`
+
+Linker files found in matched pairs with the `start*.elf` files listed in the previous section.
-The Raspberry Pi 5 firmware is self-contained withing the bootloader EEPROM and does not load firmware `.elf` files from the boot filesystem.
+=== `cmdline.txt`
-==== fixup*.dat
+The kernel command line passed into the kernel at boot.
-These are linker files and are matched pairs with the `start*.elf` files listed in the previous section.
+=== `config.txt`
-==== cmdline.txt
+Contains many configuration parameters for setting up the Raspberry Pi. For more information, see the xref:config_txt.adoc[`config.txt` documentation].
-The kernel <> passed in to the kernel when it boots.
+IMPORTANT: Raspberry Pi 5 requires a non-empty `config.txt` file in the boot partition.
-==== config.txt
+=== `issue.txt`
-Contains many configuration parameters for setting up the Raspberry Pi. See xref:config_txt.adoc[the `config.txt` section].
+Text-based housekeeping information containing the date and git commit ID of the distribution.
-NOTE: Raspberry Pi 5 requires that the boot partition contains a non-empty `config.txt` file.
+=== `initramfs*`
-==== issue.txt
+Contents of the initial ramdisk. This loads a temporary root file system into memory before the real root file system can be mounted.
-Some text-based housekeeping information containing the date and git commit ID of the distribution.
+Since _Bookworm_, Raspberry Pi OS includes an `initramfs` file by default. To enable the initial ramdisk, configure it in xref:config_txt.adoc[`config.txt`] with the xref:config_txt.adoc#auto_initramfs[`auto_initramfs`] keyword.
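For example, a minimal `config.txt` fragment enabling the initial ramdisk:

```ini
# Load the initramfs matching the booted kernel, if one is present
auto_initramfs=1
```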
-==== ssh or ssh.txt
+=== `ssh` or `ssh.txt`
-When this file is present, SSH will be enabled on boot. The contents don't matter, it can be empty. SSH is otherwise disabled by default.
+When present, this file enables SSH at boot. SSH is otherwise disabled by default. The contents do not matter; even an empty file enables SSH.
-==== Device Tree files
+=== Device Tree blob files (`*.dtb`)
-There are various Device Tree blob files, which have the extension `.dtb`. These contain the hardware definitions of the various models of Raspberry Pi, and are used on boot to set up the kernel xref:configuration.adoc#part3.1[according to which Raspberry Pi model is detected].
+Device tree blob files contain the hardware definitions of the various models of Raspberry Pi. These files set up the kernel at boot xref:configuration.adoc#part3.1[based on the detected Raspberry Pi model].
-==== Kernel Files
+=== Kernel files (`*.img`)
-The boot folder will contain various xref:linux_kernel.adoc#kernel[kernel] image files, used for the different Raspberry Pi models:
+Various xref:linux_kernel.adoc#kernel[kernel] image files that correspond to Raspberry Pi models:
|===
| Filename | Processor | Raspberry Pi model | Notes
-| kernel.img
+| `kernel.img`
| BCM2835
-| Pi Zero, Pi 1
+| Pi Zero, Pi 1, CM1
|
-| kernel7.img
+| `kernel7.img`
| BCM2836, BCM2837
-| Pi Zero 2 W, Pi 2, Pi 3
-| Later Pi 2 uses the BCM2837
+| Pi Zero 2 W, Pi 2, Pi 3, CM3, Pi 3+, CM3+
+| Later revisions of Pi 2 use BCM2837
-| kernel7l.img
+| `kernel7l.img`
| BCM2711
-| Pi 4, Pi 400, CM4, CM4-S
+| Pi 4, CM4, CM4S, Pi 400
| Large Physical Address Extension (LPAE)
-| kernel8.img
+| `kernel8.img`
| BCM2837, BCM2711, BCM2712
-| Pi Zero 2 W, Pi 2, Pi 3, Pi 4, Pi 400, CM4, CM4-S, Pi 5
-| xref:config_txt.adoc#boot-options[64-bit kernel]. Raspberry Pi 2 with BCM2836 does not support 64-bit kernels.
+| Pi Zero 2 W, Pi 2 (later revisions), Pi 3, CM3, Pi 3+, CM3+, Pi 4, CM4, CM4S, Pi 400, CM5, Pi 5, Pi 500
+| xref:config_txt.adoc#boot-options[64-bit kernel]. Earlier revisions of Raspberry Pi 2 (with BCM2836) do not support 64-bit kernels.
-| kernel_2712.img
+| `kernel_2712.img`
| BCM2712
-| Pi 5
-| Pi 5 optmized xref:config_txt.adoc#boot-options[64-bit kernel].
+| Pi 5, CM5, Pi 500
+| Pi 5-optimized xref:config_txt.adoc#boot-options[64-bit kernel].
|===
-NOTE: The architecture reported by `lscpu` is `armv7l` for systems running a 32-bit kernel (i.e. everything except `kernel8.img`), and `aarch64` for systems running a 64-bit kernel. The `l` in the `armv7l` case refers to the architecture being little-endian, not `LPAE` as is indicated by the `l` in the `kernel7l.img` filename.
+NOTE: `lscpu` reports a CPU architecture of `armv7l` for systems running a 32-bit kernel, and `aarch64` for systems running a 64-bit kernel. The `l` in the `armv7l` case refers to little-endian CPU architecture, not `LPAE` as is indicated by the `l` in the `kernel7l.img` filename.
-=== The Overlays Folder
+=== `overlays` folder
-The `overlays` sub-folder contains Device Tree overlays. These are used to configure various hardware devices that may be attached to the system, for example the Raspberry Pi Touch Display or third-party sound boards. These overlays are selected using entries in `config.txt` -- see xref:configuration.adoc#part2['Device Trees, overlays and parameters, part 2' for more info].
+Contains Device Tree overlays. These are used to configure various hardware devices, such as third-party sound boards. Entries in `config.txt` select these overlays. For more information, see xref:configuration.adoc#part2[Device Trees, overlays and parameters].
diff --git a/documentation/asciidoc/computers/configuration/configuring-networking.adoc b/documentation/asciidoc/computers/configuration/configuring-networking.adoc
index 2f1b3e0f89..aea9de8203 100644
--- a/documentation/asciidoc/computers/configuration/configuring-networking.adoc
+++ b/documentation/asciidoc/computers/configuration/configuring-networking.adoc
@@ -1,73 +1,79 @@
-== Configuring Networking
+== Networking
-Raspberry Pi OS provides a Graphical User Interface (GUI) for setting up wireless connections. Users of Raspberry Pi OS Lite and headless machines can set up wireless networking from the command line with https://developer-old.gnome.org/NetworkManager/stable/nmcli.html[`nmcli`].
+Raspberry Pi OS provides a graphical user interface (GUI) for setting up wireless connections. Users of Raspberry Pi OS Lite and headless machines can set up wireless networking from the command line with https://networkmanager.dev/docs/api/latest/nmcli.html[`nmcli`].
-NOTE: Network Manager is the default networking configuration tool under Raspberry Pi OS _Bookworm_ or later. While Network Manager can be installed on earlier versions of the operating system using `apt` and configured as the default using `raspi-config`, earlier versions used `dhcpd` and other tools for network configuration by default.
+NOTE: Starting with Raspberry Pi OS _Bookworm_, Network Manager is the default networking configuration tool. Earlier versions of Raspberry Pi OS used `dhcpd` and other tools for network configuration.
-=== Using the Desktop
+=== Connect to a wireless network
-Access the Network Manager via the network icon at the right-hand end of the menu bar. If you are using a Raspberry Pi with built-in wireless connectivity, or if a wireless dongle is plugged in, click this icon to bring up a list of available wireless networks. If you see the message 'No APs found - scanning...', wait a few seconds, and the Network Manager should find your network.
+==== via the desktop
-NOTE: Raspberry Pi devices that support dual-band wireless (Raspberry Pi 3B+, Raspberry Pi 4, Compute Module 4, and Raspberry Pi 400) automatically disable networking until a you assign a wireless LAN country. To set a wireless LAN country, open the Raspberry Pi Configuration application from the Preferences Menu, select *Localisation* and select your country from the menu.
+Access Network Manager via the network icon at the right-hand end of the menu bar. If you are using a Raspberry Pi with built-in wireless connectivity, or if a wireless dongle is plugged in, click this icon to bring up a list of available wireless networks. If you see the message 'No APs found - scanning...', wait a few seconds, and Network Manager should find your network.
+
+NOTE: Devices with dual-band wireless automatically disable networking until you assign a wireless LAN country. Flagship models since Raspberry Pi 3B+, Compute Modules since CM4, and Keyboard models support dual-band wireless. To set a wireless LAN country, open the Raspberry Pi Configuration application from the Preferences menu, select *Localisation* and select your country from the menu.
image::images/wifi2.png[wifi2]
-The icons on the right show whether a network is secured or not and give an indication of signal strength. Click the network that you want to connect to. If the network is secured, a dialogue box will prompt you to enter the network key:
+The icons on the right show whether a network is secured or not, and give an indication of signal strength. Click the network that you want to connect to. If the network is secured, a dialogue box will prompt you to enter the network key:
image::images/key.png[key]
Enter the key and click *OK*, then wait a couple of seconds. The network icon will flash briefly to show that a connection is being made. When connected, the icon will stop flashing and show the signal strength.
-==== Connect to a Hidden Network
+===== Connect to a hidden network
-If you want to use a hidden network, use the *Advanced Options* > *Connect to a Hidden Wi-Fi Network* in the network menu:
+To use a hidden network, navigate to *Advanced options* > *Connect to a hidden Wi-Fi network* in the network menu:
image::images/network-hidden.png[the connect to a hidden wi-fi network option in advanced options]
-Then, enter the SSID for the hidden network. Ask your network administrator which type of security your network uses; while most home networks currently use WPA & WPA2 Personal security, public networks sometimes use WPA & WPA2 Enterprise security. Select the security type for your network, and enter your credentials:
+Then, enter the SSID for the hidden network. Ask your network administrator which type of security your network uses; while most home networks currently use WPA and WPA2 personal security, public networks sometimes use WPA and WPA2 enterprise security. Select the security type for your network, and enter your credentials:
image::images/network-hidden-authentication.png[hidden wi-fi network authentication]
Click the *Connect* button to initiate the network connection.
[[wireless-networking-command-line]]
-=== Using the Command Line
+==== via the command line
-This guide will help you configure a wireless connection on your Raspberry Pi entirely from a terminal without using graphical tools. No additional software is required; Raspberry Pi OS comes preconfigured with everything you need.
+This guide will help you configure a wireless connection on your Raspberry Pi from a terminal without using graphical tools. No additional software is required.
NOTE: This guide should work for WEP, WPA, WPA2, or WPA3 networks, but may not work for enterprise networks.
-==== Enable Wireless Networking
+===== Enable wireless networking
-On a fresh install, you must specify the country where you use your device.
-This allows your device to choose the correct frequency bands for 5GHz networking.
-Once you have specified a wireless LAN country, you can use your Raspberry Pi's built-in wireless networking module.
+On a fresh install, you must specify the country where you use your device. This allows your device to choose the correct frequency bands for 5GHz networking. Once you have specified a wireless LAN country, you can use your Raspberry Pi's built-in wireless networking module.
To do this, set your wireless LAN country using the command-line `raspi-config` tool. Run the following command:
+
+[source,console]
----
-sudo raspi-config
+$ sudo raspi-config
----
-Select the *Localisation Options* menu item using the arrow keys. Choose the *WLAN Country* option.
+
+Select the *Localisation options* menu item using the arrow keys. Choose the *WLAN country* option.
Pick your country from the list using the arrow keys, then press `Enter` to confirm your selection.
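+
+If you prefer to script this step, `raspi-config` also offers a non-interactive (`nonint`) mode. The sketch below assumes the two-letter ISO 3166-1 country code `GB`; substitute your own:
+
+[source,console]
+----
+$ sudo raspi-config nonint do_wifi_country GB
+----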
-You should now have access to wireless networking. Run the following command to check if your wifi radio is enabled:
+You should now have access to wireless networking. Run the following command to check if your Wi-Fi radio is enabled:
+[source,console]
----
-nmcli radio wifi
+$ nmcli radio wifi
----
-If this command returns the text "enabled", you're ready to configure a connection. If this command returns "disabled", try enabling WiFi with the following command:
+If this command returns the text "enabled", you're ready to configure a connection. If this command returns "disabled", try enabling Wi-Fi with the following command:
+[source,console]
----
-nmcli radio wifi on
+$ nmcli radio wifi on
----
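+
+If the radio still reports "disabled" after this, a soft rfkill block may be in effect. As a troubleshooting step, you can inspect and clear soft blocks with the standard `rfkill` utility:
+
+[source,console]
+----
+$ rfkill list wifi
+$ sudo rfkill unblock wifi
+----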
-==== Find Networks
+===== Find networks
To scan for wireless networks, run the following command:
+[source,console]
----
-nmcli dev wifi list
+$ nmcli dev wifi list
----
You should see output similar to the following:
@@ -82,30 +88,26 @@ IN-USE BSSID SSID MODE CHAN RATE SIGNAL BARS
Look in the "SSID" column for the name of the network you would like to connect to. Use the SSID and a password to connect to the network.
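+
+If the table is hard to read, `nmcli` can also produce terse, script-friendly output. The example below uses the standard `-t` (terse) and `-f` (fields) options to print only network names and signal strengths:
+
+[source,console]
+----
+$ nmcli -t -f SSID,SIGNAL dev wifi list
+----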
-==== Connect to a Network
+===== Connect to a network
-Run the following command to configure a network connection:
+Run the following command to configure a network connection, replacing the `` placeholder with the name of the network you're trying to configure:
+[source,console]
----
-sudo nmcli --ask dev wifi connect
+$ sudo nmcli --ask dev wifi connect
----
-Don't forget to replace `` with the name of the network you're trying to configure.
-
Enter your network password when prompted.
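+
+If you are scripting the connection and cannot answer an interactive prompt, `nmcli` also accepts the password on the command line. The network name `ExampleNet` and password below are placeholders for illustration; note that a password passed this way may be recorded in your shell history:
+
+[source,console]
+----
+$ sudo nmcli dev wifi connect "ExampleNet" password "examplepassword"
+----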
-Your Raspberry Pi should automatically connect to the network once you enter your password. If you see the following output:
-
-----
-Error: Connection activation failed: Secrets were required, but not provided.
-----
+Your Raspberry Pi should automatically connect to the network once you enter your password.
-This means that you entered an incorrect password. If you see this error, run the above command again, being careful to enter your password correctly.
+If you see error output that claims that "Secrets were required, but not provided", you entered an incorrect password. Run the above command again, carefully entering your password.
To check if you're connected to a network, run the following command:
+[source,console]
----
-nmcli dev wifi list
+$ nmcli dev wifi list
----
You should see output similar to the following:
@@ -122,30 +124,33 @@ Check for an asterisk (`*`) in the "IN-USE" column; it should appear in the same
NOTE: You can manually edit your connection configurations in the `/etc/NetworkManager/system-connections/` directory.
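+
+NetworkManager stores each configured network as a connection profile. To list your saved profiles, or to remove one you no longer need, you can use the standard `nmcli connection` commands (the profile name `ExampleNet` below is a placeholder):
+
+[source,console]
+----
+$ nmcli connection show
+$ sudo nmcli connection delete "ExampleNet"
+----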
-==== Connect to an Unsecured Network
+===== Connect to an unsecured network
If the network you are connecting to does not use a password, run the following command:
+[source,console]
----
-sudo nmcli dev wifi connect
+$ sudo nmcli dev wifi connect
----
-WARNING: Be careful when using unsecured wireless networks.
+WARNING: Unsecured wireless networks can put your personal information at risk. Whenever possible, use a secured wireless network or VPN.
-==== Connect to a Hidden Network
+===== Connect to a hidden network
If you are using a hidden network, specify the "hidden" option with a value of "yes" when you run `nmcli`:
+[source,console]
----
-sudo nmcli --ask dev wifi connect hidden yes
+$ sudo nmcli --ask dev wifi connect hidden yes